To follow up on my post regarding the need to upgrade data center infrastructure to capitalize on emerging cloud technologies: there is no question that local data systems will continue to play a key role as data environments become more distributed. But that doesn’t mean the enterprise data center will continue to exist in its present form, or that the systems and architectures that have served it so well in the past will continue to provide optimal service in the future.
Storage is a key example. The traditional approach was to invest in massive arrays of either disk or tape drives capable of covering not only current “hot data” needs but also long-term storage and archiving. The cloud has already upended that equation by providing virtually unlimited scale at relatively low cost, and just in time for the oncoming rush of Big Data that likely would have overwhelmed all but the largest local storage systems in the data center.
Even so, storage continues to be a top concern in the enterprise. As Evaluator Group’s John Webster points out on Forbes.com, investment is merely shifting away from the large arrays to server-side Flash and other solid-state deployments. With capacity readily available on the cloud, what the enterprise needs these days is speed, and pushing local Flash solutions closer to processing centers offers the twin benefits of increasing throughput and reducing the complexity of networking infrastructure.
Because of this, top data center platform providers are moving rapidly toward solid-state solutions. Note Cisco’s recent purchase of Whiptail, despite the tension that such a move raises with long-time partners like EMC and VMware. And as I discussed in my previous post, big storage vendors like EMC are starting to repurpose top-end platforms such as ViPR and the VNX flash array in a bid to make them more relevant to the hyperscale storage environments that cloud providers favor.
All of this reflects the fact that while the data center will continue to be an enterprise resource, it will no longer be the enterprise resource. In the drive to become leaner and meaner, organizations are apt to focus their data infrastructure investment on technologies that boost performance and drive productivity. Spend millions on bulk storage to house old data that can be kept cheaply in the cloud? Nope. Invest in relatively low-cost Flash storage and application acceleration technology to empower users in new and innovative ways? You bet. In the cloud, I/O is king, and anything that improves traffic flow and enhances the kinds of shared, collaborative data environments that today’s knowledge workers prefer will receive top priority in the enterprise.
But even as this shift away from traditional storage gathers steam, we should not assume smooth sailing for Flash providers and other vendors touting cloud-ready infrastructure. Violin Memory, for example, sits squarely at the intersection of cloud readiness and infrastructure consolidation, yet the company still had a poor IPO last week. The target price of $9 per share never materialized, and even the lower opening price of $7.41 gave further ground to close Friday at $7.02. The kind of revolutionary change that Flash technology brings to the enterprise tends to draw stiff competition, so being an innovator in server-side memory technology is only the first step in upending long-standing business relationships.
The takeaway from my two latest posts, then, is that reinvestment in enterprise data center systems remains as important as ever, even as more workloads find their way to the cloud. But that doesn’t mean infrastructure investment strategies should stay on autopilot, either.
The cloud-ready enterprise will have a lot more tools at its disposal, and the data center needs to keep pace with these changes or find itself outclassed by lower-cost options elsewhere.