Storage infrastructure is under just as much pressure these days to conform to the new data paradigm as the rest of the data center. And to many, that means more emphasis on scale-out rather than scale-up solutions.
In many respects, this change will occur as part of the larger shift toward converged, modular infrastructure, but within the storage component itself, the focus will have to shift from capacity to performance and, even more importantly, interoperability with increasingly diverse data infrastructure.
According to storage consultant Chris Evans, the advent of scale-out will not necessarily come at the expense of scale-up—both solutions have their roles to play in the age of Big Data and service-based, Web-facing enterprise functionality. But since so much IT spend has gone toward large-capacity SAN and NAS infrastructure over the years, scale-out systems that can accommodate large-volume, multitenant applications quickly and cost-effectively will likely represent the key growth area going forward. At the same time, scale-out looks to be much friendlier maintenance-wise, as it won't be necessary to take down terabytes' worth of resources in order to isolate and repair a problem.
A quick look at all the recent M&A activity in the storage industry confirms the shift toward more scale-out infrastructure. Whether it’s Cisco picking up Whiptail, EMC with ScaleIO or Western Digital and Virident, the focus seems to be on modular, commodity solutions that jibe well with advancing virtual and software-defined data architectures. And in an age where capacity requirements can always be met with cloud-based or colo solutions, the need to improve storage speed and performance is supplanting the need for massive arrays within the enterprise data center.
An interesting twist on all of this is that even though Flash storage has upped the performance factor considerably, it is already seen by some as too slow for emerging data environments. After all, Flash still has to shuttle data through the PCIe bus, which is faster than the iSCSI/Fibre Channel ports on a storage array but not nearly as fast as the memory interface in the server. For several years now, Diablo Technologies has been championing its Memory Channel Storage solutions, which can add multiple terabytes to a server at near-DRAM speeds. In terms of scalability, the company provides 200 and 400 gigabyte models that can be integrated into the server or storage array as dictated by application requirements.
It’s also important to note that scale-out does not imply the end of large SAN and NAS systems. In fact, a number of arrays have hit the channel in recent months touting their scale-out acumen in line with the highly dynamic infrastructure playing out in the broader data ecosystem. NetApp’s FAS8000 appliance, for instance, features both SAN and NAS capabilities that tie into third-party storage environments, plus the Clustered Data ONTAP architecture that allows additional storage resources to be added to the overall storage pool in short order. As well, the new Gemini X-Series from Nimbus Data Systems provides scale-out performance into the petabytes while incorporating unified SAN/NAS management and protection.
It’s fair to say, then, that rather than transitioning from scale-up to scale-out, storage environments are becoming more diverse as the enterprise looks to deploy a range of solutions to accommodate increasingly complex user requirements.
Part of this diversity will inevitably lead to third-party infrastructure in the cloud, but there will always be a need to keep data close to home where it can remain under careful watch even as it wends its way across divergent infrastructure and a wider variety of client devices.