Of all the pieces of data center infrastructure undergoing rapid change to the virtual/cloud future, the most unsettled at this point is storage. At the end of the process, servers and networking will still consist of the basic processor technology that has guided their development over the past three decades or so. But visions of the storage future are so wildly divergent – what with hard disks, solid state, in-memory solutions and any combination of SAN/NAS, SAS/SATA, PCIe/memory interface – that making any hard decisions right now is problematic at best.
The latest trend is server-side cache, which is seen as a convenient means to house high-speed critical data so it can be processed and sent on its way without having to navigate complicated storage networking infrastructure. The drawback, though, is that there is very little in the way of advanced storage features – things like deduplication, snapshots, automated recovery – for on-server solutions.
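The appeal of server-side cache is easiest to see in miniature. The sketch below is a toy write-through LRU cache sitting in front of a slower backing store (a plain dict standing in for SAN/NAS storage reached over the network); all class and variable names are illustrative, not taken from any vendor's product.

```python
from collections import OrderedDict

class ServerSideCache:
    """Toy LRU read cache in front of a slower backing store
    (the dict stands in for networked SAN/NAS storage)."""

    def __init__(self, backing_store, capacity=4):
        self.backing_store = backing_store
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)      # mark as recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing_store[key]      # slow path: fetch over the "network"
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return value

    def write(self, key, value):
        # Write-through: update cache and backing store together,
        # so the authoritative copy stays consistent.
        self.backing_store[key] = value
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)

store = {f"block{i}": f"data{i}" for i in range(10)}
cache = ServerSideCache(store, capacity=4)
for key in ["block0", "block1", "block0", "block2"]:
    cache.read(key)
print(cache.hits, cache.misses)  # 1 3 -- the repeat read of block0 never touches the store
```

Hot data served from the cache skips the round trip through the storage network entirely; only misses and writes touch the backing store.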
However, a new company called Maxta has come out with an all-software approach that allows server-side storage, including hard disk solutions, to meet the demands of increasingly virtualized server environments. The Maxta Storage Platform (MxSP) is hypervisor-agnostic and integrates with virtual platforms at the user interface, data management and other levels, essentially creating fully functional compute/storage modules out of existing servers without major changes to surrounding infrastructure.
Server-side storage management platforms have been around for a while, says eWeek’s Chris Preimesberger, but what makes Maxta unique is that it can provide the advanced features that SAN users have grown accustomed to – not just dedupe and snapshots, but cloning, compression and other tools as well. In this way, converged data environments can take advantage of highly dynamic storage pooling and resource allocation services while preserving all the functionality of a distributed SAN or NAS environment at only a fraction of the cost and complexity.
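To make the dedupe and cloning features concrete: most implementations rest on content addressing, where blocks are keyed by a hash of their contents so identical blocks are stored once. The following is a minimal sketch of that idea, not Maxta's actual design; all names are hypothetical.

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: identical blocks are
    stored once and shared by reference across files."""

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}   # sha256 digest -> block bytes, stored once
        self.files = {}    # filename -> ordered list of digests

    def write(self, name, data: bytes):
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # dedupe: keep one copy
            digests.append(digest)
        self.files[name] = digests

    def read(self, name) -> bytes:
        return b"".join(self.blocks[d] for d in self.files[name])

    def clone(self, src, dst):
        # A clone (or snapshot) is just a copy of the digest list;
        # no block data is duplicated.
        self.files[dst] = list(self.files[src])

    def stored_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = DedupStore(block_size=4)
store.write("a.txt", b"AAAABBBBAAAA")  # the 4-byte block "AAAA" repeats
store.write("b.txt", b"AAAACCCC")      # shares "AAAA" with a.txt
store.clone("a.txt", "a-snap.txt")
print(store.stored_bytes())  # 12 bytes stored, versus 20 written
```

The same digest-list trick is why snapshots and clones on such systems are nearly free: they copy metadata, not data.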
Similar solutions already populate enterprise channels, although they usually have a hardware component. Astute Networks' ViSX flash appliance, for example, uses iSCSI connectivity to provide a plug-and-play way to bring storage resources, particularly I/O performance, up to the level required by heavily virtualized server environments. Designed to complement existing SAN or NAS infrastructure, each unit scales up to 140,000 sustained random IOPS, and multiple devices can be clustered to push performance to one million IOPS or more, configurable as either a single large datastore or multiple smaller ones.
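Assuming the clustered units scale roughly linearly (an idealization that ignores clustering overhead), the article's figures imply a small sizing calculation:

```python
import math

PER_UNIT_IOPS = 140_000   # sustained random IOPS per unit, per the article
TARGET_IOPS = 1_000_000   # clustered performance target

# Units needed to reach the target, assuming linear scaling
units_needed = math.ceil(TARGET_IOPS / PER_UNIT_IOPS)
print(units_needed, units_needed * PER_UNIT_IOPS)  # 8 1120000
```

So a cluster of eight units would clear the one-million-IOPS mark with headroom to spare, under the linear-scaling assumption.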
Exablox, meanwhile, is out with its OneBlox storage system, recently supplemented with Veeam's Backup & Replication software to provide an easily deployable inline dedupe and encryption solution for virtualized environments. The package is designed to provide scale-out storage for small and medium-size enterprises that don't have the resources to mount a full-blown storage architecture in-house but still need advanced functions like snapshots and built-in WAN acceleration. The device works with SAS and SATA drives, which can be added as needed and then tied to a global file system for automated pooling.
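The add-drives-as-needed pooling model amounts to aggregating heterogeneous devices behind one namespace. A bare-bones sketch of the idea (hypothetical names, not Exablox's implementation):

```python
class GlobalPool:
    """Toy global storage pool: heterogeneous drives (SAS, SATA) are
    added as needed and their capacity aggregates automatically."""

    def __init__(self):
        self.drives = []   # list of (label, capacity_gb) tuples

    def add_drive(self, label, capacity_gb):
        # New drives join the pool with no reconfiguration of consumers;
        # the pool simply grows.
        self.drives.append((label, capacity_gb))

    def total_capacity_gb(self):
        return sum(cap for _, cap in self.drives)

pool = GlobalPool()
pool.add_drive("sas0", 600)     # fast SAS drive
pool.add_drive("sata0", 2000)   # bulk SATA drive
pool.add_drive("sata1", 2000)   # added later as demand grows
print(pool.total_capacity_gb())  # 4600
```

Real systems layer placement, redundancy, and rebalancing on top of this, but the consumer-facing contract is the same: one growing pool rather than many fixed volumes.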
Data environments in general are becoming more flexible and dynamic across the board, which to many experts suggests that traditional SAN and NAS architectures will play limited roles in the data center going forward. Placing storage closer to processing centers is a highly efficient means to scale all compute resources in tandem and thus avoid the bottlenecks that arise when too many VMs seek access through too few communications ports. In fact, this is the basis for the emerging field of unified or converged infrastructure.
With advanced SAN-like functionality now making its way to near- and on-server storage solutions, it seems very likely that convergence will accelerate in the coming year.