Everyone wants the latest, greatest, most cutting-edge technology. That is easy enough with a tablet or a smartphone, but not with an integrated enterprise data environment. In that circumstance, the only thing worse than falling behind the technological curve is throwing your processes out of whack with forklift upgrades.
This is why data infrastructure must evolve rather than change outright. Sometimes the evolution is quick and, yes, disruptive; other times it is slow, almost to the point where users don’t even know it’s happening. But overall, the change must be steady and purposeful or else the enterprise will find itself unable to compete in the emerging digital economy.
Sounds simple, right? It isn’t, of course. But even though each move must now be weighed against broader architectural goals, rather than simply adding more storage or compute as in the past, the overall process can still be broken down into key steps while maintaining the flexibility to alter the plan as needed.
When it comes to building out the new, high-performance data center, for example, many of the fundamental tasks remain the same, says NetMagic’s Nilesh Rane. You’ll still need to optimize server, storage and networking capabilities for maximum capacity and availability, but you’ll also need to incorporate higher degrees of automation to improve service levels. At the same time, keep an eye on the four stages of data center evolution: basic functionality, consolidated infrastructure, high availability and strategic resource configuration. By mastering each successive stage without undermining the previous one, the enterprise will emerge with a fully functional data environment capable of supporting the business models of the digital age.
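Automation is hard to show in the abstract, so here is a toy Python sketch of the idea in its simplest form: a watchdog that polls a service and remediates failures without paging an operator. The health endpoint and restart command are hypothetical placeholders, not part of Rane’s recommendations.

```python
# Toy automation loop: poll a service's health endpoint and trigger
# remediation on failure, replacing a manual operator response.
import subprocess
import time
import urllib.request

HEALTH_URL = "http://app.example.internal/healthz"  # hypothetical endpoint


def healthy() -> bool:
    """Return True if the service answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:  # covers connection errors and timeouts
        return False


while True:
    if not healthy():
        # Automated remediation in place of a manual restart;
        # the service name here is a placeholder.
        subprocess.run(["systemctl", "restart", "app.service"], check=False)
    time.sleep(30)
```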
In many cases, even small changes can produce big rewards. Incorporating object storage, for instance, can put the enterprise on a path to a highly scalable, hybrid cloud infrastructure that can handle anything that Big Data and the Internet of Things can produce, says SwiftStack’s Mario Blandini. Through targeted deployments of object storage on standard server hardware, organizations open the door to advanced cloud APIs like AWS S3 or OpenStack Swift. From there, it becomes much easier to scale resources and implement the kinds of applications required to turn unstructured data into actionable information. And since this is not an “all or nothing” change, organizations don’t have to worry about disrupting legacy applications but can gradually move key workloads to the new architecture in a controlled fashion.
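To see what that API compatibility buys in practice, consider a minimal Python sketch using boto3, the standard S3 client library. The same calls that target AWS work against an S3-compatible object store on premises simply by pointing at a different endpoint; the endpoint URL, bucket name and credentials below are hypothetical placeholders.

```python
# Minimal sketch: writing and reading an object through the S3 API
# against an S3-compatible object store.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",  # hypothetical on-prem endpoint
    aws_access_key_id="ACCESS_KEY",                   # placeholder credential
    aws_secret_access_key="SECRET_KEY",               # placeholder credential
)

s3.create_bucket(Bucket="sensor-data")

# Store a piece of unstructured data; the identical call works against
# AWS itself, which is what makes the migration path gradual.
s3.put_object(
    Bucket="sensor-data",
    Key="device-42/reading.json",
    Body=b'{"temp": 21.7}',
)

obj = s3.get_object(Bucket="sensor-data", Key="device-42/reading.json")
print(obj["Body"].read())
```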
Evolution works both ways, of course. While the enterprise must evolve into the cloud, platforms like Hadoop must become more enterprise-friendly. According to Datanami’s Alex Woodie, Hadoop needs a serious makeover in areas like development, deployment and management if it hopes to form the backbone of enterprise analytics. The sheer pace of change in key Hadoop distributions like Hortonworks’ is enough to give organizations pause at this point. To its credit, the company has launched a new release strategy, but at some point users simply want to get on with running their businesses, not revamp their platforms with every new addition. In addition, many of the leading Hadoop platforms suffer from complicated user interfaces and a lack of data normalization.
Another key element in the evolution of the enterprise is the transition to the cloud, particularly when it comes to migrating critical applications. The process is fraught with difficulty, as many CIOs well know by now, but as iLand’s Monica Brink points out to Information Age, there are ways to manage it without falling behind your competitors. It helps to work the bugs out on non-critical workloads first, of course, but even for the important stuff, organizations might want to consider deploying cloud-facing applications at the outset and then gradually transitioning the data, rather than shifting an entire ERP or CRM portfolio over at once. And at this point, most virtualization platforms have built-in cloud capabilities, allowing migrations to proceed hand-in-hand with resource consolidation and the adoption of software-defined architectures.
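One way to picture that “application first, data later” sequence is a read-through pattern: the cloud-facing application serves traffic from day one, and records migrate from the legacy store as they are touched. The sketch below is illustrative only, not a description of any vendor’s mechanism; the store interfaces and sample record are hypothetical.

```python
# Illustrative read-through migration: the cloud application answers
# requests from the start, pulling records from the legacy system on a
# miss and persisting them cloud-side, so data moves gradually with use.
class ReadThroughStore:
    def __init__(self, cloud_store, legacy_store):
        self.cloud = cloud_store    # hypothetical dict-like cloud backend
        self.legacy = legacy_store  # hypothetical dict-like legacy backend

    def get(self, key):
        record = self.cloud.get(key)
        if record is None:
            # Miss: fetch from the legacy system and migrate the record
            # as a side effect of normal traffic.
            record = self.legacy.get(key)
            if record is not None:
                self.cloud[key] = record
        return record

    def put(self, key, record):
        # New writes land in the cloud only, so the legacy footprint
        # shrinks over time instead of being cut over all at once.
        self.cloud[key] = record


# Usage: plain dicts stand in for the two backends.
store = ReadThroughStore(
    cloud_store={},
    legacy_store={"cust-1001": {"name": "Acme"}},
)
print(store.get("cust-1001"))  # migrated to the cloud store on first access
```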
Enterprise data infrastructure has been evolving for many years, but only lately has the process taken a new twist. Now, instead of simply building more of the same, the focus is on transforming the fundamental nature of data resources to support an entirely new class of business model – one that relies on real-time, application-centric service delivery to upend today’s product-focused strategies.
The need to change is urgent, but so is the need to change carefully, in a way that allows infrastructure to enable the business objective rather than constrain it.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.