In times of momentous change like the one the enterprise is undergoing right now, it is easy to forget that most organizations are still wrestling with some very mundane issues. Although it has largely dropped off the radar in the trade press, one of the most crucial is the ongoing integration of virtual technology into legacy data infrastructure.
Server virtualization, in particular, has progressed unabated to the point that it is now the common approach to hardware consolidation and the development of all the software-defined, cloud-ready architectures that are remaking the data center. And yet, we are still struggling with ways to implement virtualization on the server side without overloading resources elsewhere, namely storage.
This may seem odd, given that the public cloud provides virtually limitless storage for all manner of functions. But the fact remains that those who prefer to keep data in-house need to find innovative ways to scale storage on par with servers and networking if they are to have any hope of maintaining on-premises infrastructure in support of private cloud deployments. Fortunately, storage can be ramped up in a virtual environment in a number of ways.
One is to start hosting virtual machines within the storage node itself or, alternatively, to place storage within the VM host, according to Compuverde CEO Stefan Bernbo. This essentially creates a fully functional compute node that can be pooled and configured according to increasingly dynamic data loads. In a traditional setup, growing numbers of VMs jam up the one or maybe two pathways in and out of the SAN, introducing unbearably high latency as server resources are scaled up. A more flattened architecture, on the other hand, allows server, storage and networking resources to scale evenly.
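To see why the flattened model scales more gracefully, consider a back-of-the-envelope comparison. The Python sketch below is a toy model with invented IOPS figures, not any vendor's implementation: when every VM's I/O funnels through a fixed pair of SAN paths, path utilization climbs with the VM count, while a flattened design adds a storage path with every node it adds, so per-node load stays constant.

    # Toy model (not any vendor's implementation): per-path I/O load when all
    # traffic funnels through a fixed number of SAN paths, versus a flattened
    # architecture where every added node brings its own storage path.

    SAN_PATHS = 2           # the "one or maybe two" pathways into the SAN
    PATH_CAPACITY = 50_000  # IOPS each path can service (invented figure)
    VM_DEMAND = 1_000       # IOPS each VM generates (invented figure)

    def san_utilization(vm_count: int) -> float:
        """All VMs share the same fixed set of SAN paths."""
        return (vm_count * VM_DEMAND) / (SAN_PATHS * PATH_CAPACITY)

    def flattened_utilization(vms_per_node: int = 20) -> float:
        """Each combined compute/storage node serves only its own VMs."""
        return (vms_per_node * VM_DEMAND) / PATH_CAPACITY

    for vms in (50, 100, 500):
        print(f"{vms:>4} VMs   SAN paths: {san_utilization(vms):4.0%}   "
              f"flattened: {flattened_utilization():4.0%}")

Past 100 VMs in this toy model, the shared SAN paths are oversubscribed and requests begin to queue, which is where the unbearable latency comes from; the flattened figure never moves because storage pathways grow in step with demand.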
This approach is already showing up in new virtual storage platforms. Tintri, for example, recently released the VMstore T600 and the Tintri Global Center control platform, which together seek to incorporate traditional storage into advanced virtual environments. Top-end VMstore modules now support upwards of 2,000 VMs each at sub-millisecond latency, while Global Center provides administration and oversight of up to 32 VMstores that can be dispersed across disparate virtual and cloud environments. In this way, the enterprise can scale full-compute resources across a worldwide virtual infrastructure.
The VMstore is technically a hybrid storage platform because it uses both Flash and high-capacity disk drives, even though nearly all of the I/O performance comes from the Flash side. As CIO.com points out, Flash is quickly becoming the technology of choice for organizations looking to scale up storage performance, but not necessarily capacity, in support of advanced virtual infrastructure. Data analysis firm Tecplot, for example, was having trouble with software development because its backend storage system couldn’t keep up with the rest of the company’s increasingly virtualized infrastructure. Rather than follow the traditional recipe of simply adding more storage, the company turned to PernixData to convert server-side Flash into a clustered acceleration tier so that performance and capacity could be scaled independently. In the end, Tecplot was able to improve both read and write performance for virtual machines without a rip-and-replace of legacy storage infrastructure.
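For those curious about the mechanics, the sketch below shows the general shape of a server-side flash acceleration tier, assuming a simple write-through design; the class and method names are illustrative, not PernixData's actual API. Hot blocks are served from local flash while the legacy backend continues to hold full capacity, which is what lets performance and capacity scale independently.

    # Illustrative sketch of a server-side flash acceleration tier (hypothetical
    # names, not PernixData's API). A plain dict stands in for the legacy
    # backend array; a second, bounded dict stands in for local flash.

    class FlashAccelerationTier:
        def __init__(self, backend: dict, flash_capacity: int = 1024):
            self.backend = backend           # slow, high-capacity legacy storage
            self.flash = {}                  # fast, limited server-side flash
            self.flash_capacity = flash_capacity

        def read(self, block_id: str) -> bytes:
            if block_id in self.flash:       # cache hit: flash-speed read
                return self.flash[block_id]
            data = self.backend[block_id]    # miss: one slow trip to the backend
            self._cache(block_id, data)      # later reads come from flash
            return data

        def write(self, block_id: str, data: bytes) -> None:
            self._cache(block_id, data)      # keep the hot copy in flash
            self.backend[block_id] = data    # write through to the backend

        def _cache(self, block_id: str, data: bytes) -> None:
            if len(self.flash) >= self.flash_capacity:
                self.flash.pop(next(iter(self.flash)))  # simple FIFO eviction
            self.flash[block_id] = data

    tier = FlashAccelerationTier(backend={})
    tier.write("blk-1", b"payload")
    assert tier.read("blk-1") == b"payload"  # served from flash, not the backend

Growing the flash on each server raises performance; growing the backend raises capacity; neither forces the other.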
The enterprise could also take a lesson from the past when it comes to optimizing storage for virtual and cloud architectures, according to NetApp’s John Martin. As mainframe managers had discovered by the mid-1980s, continued reliance on logical unit numbers (LUNs) led to a management nightmare as systems were scaled up. With each LUN serving as its own little storage island, it wasn’t long before space failures, performance bottlenecks and job restarts/reruns became unbearable. The solution, then as now, was to ditch the LUN and implement a more modular, pool-friendly container approach.
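A tiny example makes the old space-failure problem concrete. In the sketch below, with invented numbers, three LUN islands hold 100 GB of free space between them, yet a 60 GB request fails because no single island can absorb it; a shared pool carving the same capacity on demand satisfies it easily.

    # Why per-LUN islands fail while a shared pool succeeds (invented numbers).

    luns = {"lun0": 40, "lun1": 35, "lun2": 25}   # free GB stranded in each LUN
    request = 60                                   # GB needed by one workload

    # LUN model: the request must fit inside a single LUN's free space.
    fits_in_a_lun = any(free >= request for free in luns.values())

    # Pooled model: all free capacity is aggregated and carved on demand.
    fits_in_pool = sum(luns.values()) >= request

    print(f"LUN model:  {fits_in_a_lun}")   # False -> a "space failure"
    print(f"Pool model: {fits_in_pool}")    # True  -> 100 GB free in aggregate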
Storage has always been the slowpoke of the enterprise, both in terms of performance and development. Advances like enterprise-class Flash storage come along once in a long while, and even then they fail to push speeds to the level of corresponding server and network technologies. And yet, infrastructure must remain in balance if the enterprise hopes to evolve from the static architectures of the past to advanced, dynamic virtual infrastructure.
So while exciting things are happening elsewhere in the data center, it’s really the storage farm that needs the most attention.