It’s enough to make a grown CIO cry. The CEO calls a meeting to announce that storage infrastructure needs to become leaner and meaner in an effort to reduce operating costs and increase efficiency. Oh, and he also wants to improve the organization’s ability to handle Big Data.
All too often, these directives are issued without a trace of irony.
But the fact remains that IT is being forced to serve two masters as it makes the transition from the static architectures of old to the more flexible, dynamic computing environments of the cloud. This has led to a number of radical innovations, particularly in storage architecture, which has long been last in line to benefit from new technologies like virtualization. The driving force, after all, is precisely what the CEO wants: a streamlined storage architecture that does not diminish, but in fact enhances, the ability to handle ever-larger, increasingly complex storage volumes.
HP is the latest to take a crack at this, with a new line of 3PAR storage servers and related systems aimed at providing a single, converged architecture for mid-level and large operations. The package consists of the new StoreServ 7000, said to deliver top-tier performance at mid-range pricing, with features like deduplication and advanced analytics that improve search functions and Big Data modeling. The company has also introduced a pair of StoreOnce backup servers designed for the multitenant functions common in cloud architectures, which should help enterprises consolidate their storage footprints as they become more cloud-like.
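HP does not publish the internals of StoreServ's deduplication, but the general technique behind most block-level dedup is straightforward: fingerprint each block and store any given block only once, keeping a "recipe" of fingerprints to rebuild the original stream. A minimal sketch (the `deduplicate` and `rehydrate` names are illustrative, not HP's API):

```python
import hashlib


def deduplicate(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, storing each unique block once.

    Returns (store, recipe): store maps a block's SHA-256 digest to the
    block's bytes; recipe is the ordered list of digests needed to
    reconstruct the original data.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks are stored once
        recipe.append(digest)
    return store, recipe


def rehydrate(store, recipe):
    """Reassemble the original byte stream from the store and recipe."""
    return b"".join(store[d] for d in recipe)
```

On highly repetitive data (VM images, backups), the store holds far fewer blocks than the recipe references, which is where the capacity savings come from; production systems add variable-sized chunking and on-disk indexing on top of this basic idea.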
Even more significant, however, is the fact that HP has engineered the system to connect directly to its line of blade servers without going through a Fibre Channel SAN. The system uses the new Virtual Connect Direct-Attached Fibre Channel system to enable what the company calls a “flat SAN” capable of accessing up to 768 BladeSystem machines. Not only does this do away with a significant amount of network infrastructure, it also speeds up provisioning by some 2.5 times.
SAN-less architecture is not a new concept, however. Companies like Nutanix are already working on their third- and fourth-generation platforms in pursuit of the completely modular data center. The new Nutanix OS 3.0 software and NX-3000 series hardware system provide functions like dynamic cluster expansion and KVM hypervisor support designed to enable compute-heavy loads in single-cluster configurations. With a mix of PCIe and SATA SSDs and HDDs, plus varying memory capacities and a range of cores per socket, the system is designed to scale rapidly for a variety of needs, including Big Data.
Modularity may be more streamlined, but is it necessarily better when it comes to large data volumes? As Storage Switzerland’s George Crump notes, speed and scalability are not the only factors to consider; there are also data protection and reliability to weigh. If all the data on a SAN-less system lives on the VM host, there is always the danger of a hardware failure, in which case data could be lost or, at best, unavailable while it and the necessary VMs are migrated to a new host. There are ways to minimize this danger, such as backup to a secondary host or shared storage device, although these require at least a rudimentary network architecture, and thus added cost and complexity.
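The trade-off Crump describes can be made concrete with a toy model: a store that acknowledges a write only after both a primary and a replica node persist it, so that a single host failure cannot lose acknowledged data, at the cost of an extra network hop on every write. The class and method names here are hypothetical, purely for illustration:

```python
class Node:
    """Toy storage node; in a real system this would be a separate host."""

    def __init__(self):
        self.blocks = {}
        self.alive = True

    def write(self, key, value):
        if not self.alive:
            raise ConnectionError("node is down")
        self.blocks[key] = value

    def read(self, key):
        if not self.alive:
            raise ConnectionError("node is down")
        return self.blocks[key]


class ReplicatedStore:
    """Acknowledge a write only once both primary and replica hold it."""

    def __init__(self, primary, replica):
        self.primary = primary
        self.replica = replica

    def write(self, key, value):
        self.primary.write(key, value)
        self.replica.write(key, value)  # the extra hop: added latency/cost

    def read(self, key):
        try:
            return self.primary.read(key)
        except ConnectionError:
            return self.replica.read(key)  # fail over to the replica
```

If the primary host dies after an acknowledged write, the data survives on the replica, which is exactly the protection a single-host SAN-less deployment gives up unless it adds this kind of secondary target and the network to reach it.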
It seems, then, there are no easy answers when it comes to storage and the needs of Big Data. The latest technologies do, in fact, allow you to do more with less, but exactly how storage is to be designed to gain maximum benefit for increasingly complex data environments will be one of the top challenges for CEOs, CIOs and everyone else involved in enterprise infrastructure going forward.