Data loads are in a continuous state of flux, development teams are launching continuous integration/continuous delivery (CI/CD) projects, and the front office is clamoring for more power, more speed, more scalability and more, well, everything (except budgets, of course). And what does IT have to work with to provide all of these agile capabilities? Static legacy infrastructure.
Clearly, this state of affairs cannot go on much longer, which is why leading data technology experts are urging organizations to place their internal infrastructure on a more aggressive upgrade trajectory or shift data loads to the cloud entirely.
According to Gartner’s Peter Sondergaard, the idea of putting IT on a “permanent state of upgrade” is not the stuff of science fiction anymore. In an address to the Gartner Symposium/ITxpo in Orlando, Fla., covered by ciodive.com, Sondergaard said that modern CIOs have no choice but to adopt the “bimodal IT” model that Gartner has been preaching for several years if they hope to embrace emerging data architectures without disrupting current operations. In this way, they can accommodate both the planned and predictable change that characterizes traditional data models while embracing the experimental and disruptive change of the emerging digital economy.
But exactly what sort of infrastructure is required to embrace the dynamic workflows of tomorrow? To Kaminario CTO Shachar Fienblit, the most challenging upgrades will be in storage, which must improve not only in terms of speed and capacity but agility as well. To reach this state, the enterprise should adopt several key design philosophies for future deployments. These include the ability to maintain consistent performance across unpredictable workloads, independently scalable capacity and performance, a flexible architecture that can be more easily adjusted to new data paradigms than today’s rigid solutions, and a software-defined operational layer that can accommodate change without disrupting current operations.
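One of those design philosophies, independently scalable capacity and performance, is easiest to see in contrast with traditional arrays, where adding space and adding IOPS are bound together. The sketch below is purely illustrative (the class and method names are hypothetical, not any vendor's API); it simply shows the two dimensions growing separately:

```python
# Illustrative sketch only: StorageCluster and its methods are hypothetical,
# meant to show capacity and performance scaling as independent dimensions.
class StorageCluster:
    def __init__(self):
        self.capacity_tib = 0   # usable space, in TiB
        self.max_iops = 0       # aggregate performance ceiling

    def add_capacity_shelf(self, tib=92):
        # Capacity grows without buying additional controllers.
        self.capacity_tib += tib

    def add_controller_pair(self, iops=200_000):
        # Performance grows without paying for unneeded capacity.
        self.max_iops += iops

cluster = StorageCluster()
cluster.add_capacity_shelf()    # workload needs more space, same IOPS
cluster.add_controller_pair()   # workload needs more IOPS, same space
print(cluster.capacity_tib, cluster.max_iops)
```

The point of the decoupling is economic as much as technical: each unpredictable workload pulls only on the dimension it actually stresses.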
This need for constant change is primarily what HPE has been going for with its composable infrastructure initiative. As the company’s Gary Thome points out, much of the enterprise workload will remain on local data infrastructure, for competitive, regulatory or other reasons, so it is imperative that it gains the same level of support that distributed, cloud-based applications receive. By building fluid pools of resources that can be provisioned under a unified API, the enterprise essentially delivers on the private cloud model to provide the scale, speed and flexibility required of even the most demanding services. In this way, the enterprise gains new levels of operational velocity and eliminates the silos and other weak points in the data chain that hamper functionality – all while boosting utilization and lowering overall costs. (Disclosure: I provide web content services for HPE.)
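The "fluid pools of resources provisioned under a unified API" idea can be sketched in a few lines. To be clear, the names below (`ResourcePool`, `compose`, `release`) are hypothetical and do not represent HPE's actual interface, which exposes comparable concepts through REST-based server profiles; the sketch only illustrates the compose-and-recompose pattern:

```python
# Hypothetical sketch of composable provisioning: a logical server is
# carved out of a disaggregated pool and its resources flow back on release.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    cores: int
    memory_gib: int
    storage_gib: int

    def compose(self, cores, memory_gib, storage_gib):
        """Carve a logical server out of the pool's free resources."""
        if (cores > self.cores or memory_gib > self.memory_gib
                or storage_gib > self.storage_gib):
            raise RuntimeError("insufficient free resources in pool")
        self.cores -= cores
        self.memory_gib -= memory_gib
        self.storage_gib -= storage_gib
        return {"cores": cores, "memory_gib": memory_gib,
                "storage_gib": storage_gib}

    def release(self, node):
        """Recompose: the node's resources return to the pool."""
        self.cores += node["cores"]
        self.memory_gib += node["memory_gib"]
        self.storage_gib += node["storage_gib"]

pool = ResourcePool(cores=256, memory_gib=2048, storage_gib=65536)
web_tier = pool.compose(cores=16, memory_gib=128, storage_gib=512)
print(pool.cores)        # prints 240: the remainder stays fluid
pool.release(web_tier)   # resources flow back for the next workload
```

Because every allocation and release goes through one interface, utilization can be tracked and automated centrally, which is where the claimed gains in velocity and cost come from.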
Another way to embrace the future is to supplement the centralized data center with distributed microcenters, says CIO Review. Not only does this push processing and data closer to users to reduce latency, it dramatically lessens the complexity of overall data infrastructure and lays the foundation for a modular, abstract data environment that can be managed and upgraded through remote software stacks rather than rip-and-replace hardware overhauls. As companies like AOL have discovered, microcenters also provide a more stable data environment and are more conducive to the streaming content and rich media files that consumers and businesses are demanding.
The digital transition confronting the enterprise today conceals a minefield of potential problems, some of which can be truly devastating to current and future operations. Move too slowly and a more nimble competitor eats your lunch. Move too quickly and you might step on a mine.
This calls for the enterprise to exercise both haste and caution when deploying emerging technologies, with a deep understanding that every new system – whether it is hardware or software, infrastructure or architecture – must embrace change and adaptability as a core value.
The world is moving too quickly these days to remain locked in static infrastructure.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.