Continuing with my survey of year-end predictions, there appear to be a number of exciting developments in storage just around the corner. But unlike past eras, in which technology was the catalyst for new ways of working, the opposite seems true today: advances in the broader data environment are driving the development of new storage techniques and capabilities.
Clearly, the cloud is a major factor here. As storage becomes a fungible commodity, available in any amount to those who can afford it, requirements are shifting from big, bulky platforms toward more dispersed but flexible architectures that specialize in handling large numbers of small requests simultaneously. In that vein, storage will increasingly be rated not by how much data it can hold, but by how quickly it lets users locate and retrieve information.
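To make that metric concrete, here is a minimal sketch of how one might sample small-random-read latency against a test file; the path, block size, and sample count are arbitrary assumptions for illustration, not a reference benchmark:

```python
import os
import random
import time

# Minimal sketch: sample the latency of small random reads from a test
# file, since retrieval speed, not raw capacity, is the metric at issue.
# The path, block size, and sample count are arbitrary assumptions.
PATH = "/tmp/testfile.bin"   # hypothetical pre-created test file
BLOCK = 4096                 # a typical small-request size, in bytes
SAMPLES = 1000

size = os.path.getsize(PATH)
latencies = []
with open(PATH, "rb") as f:
    for _ in range(SAMPLES):
        f.seek(random.randrange(0, size - BLOCK))
        start = time.perf_counter()
        f.read(BLOCK)
        latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"median read: {latencies[len(latencies) // 2] * 1e6:.0f} us")
print(f"p99 read:    {latencies[int(len(latencies) * 0.99)] * 1e6:.0f} us")
```

On a warm system the OS page cache will satisfy most of these reads, so a serious benchmark would bypass the cache (for example, by opening the file with O_DIRECT); the point here is simply that latency percentiles, not capacity, become the headline number.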
On Enterprise Storage Forum, Henry Newman, CEO of Instrumental Inc., offers up a 10-point list of likely events for the coming year, not all of them positive. Topping the list is the likely delay of the PCIe 4.0 standard, meaning 100 GbE adapters will remain tied to 16-lane PCIe 3.0 slots for a while longer, perhaps until 2016. And tape deployments are likely to stall given the density and performance limitations of LTO-6. But on the bright side, expect to see more T10 PI options for host-to-disk data protection, as well as new non-volatile memory systems suited to advanced functions like database index tables.
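For those unfamiliar with it, T10 PI appends an 8-byte protection field to each 512-byte block: a 2-byte guard tag (a CRC of the block data), a 2-byte application tag, and a 4-byte reference tag, letting corruption be caught anywhere along the host-to-disk path. Here is a minimal sketch of the guard-tag computation using the T10-DIF CRC-16 polynomial (0x8BB7); the tag values below are placeholders for illustration:

```python
def crc16_t10_dif(data: bytes) -> int:
    """Guard-tag CRC for a T10 PI protection field.

    CRC-16 with polynomial 0x8BB7, initial value 0, no bit reflection.
    """
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# An 8-byte protection field for a 512-byte block (tag values are
# placeholders: the reference tag typically carries the low 32 bits
# of the block's logical address).
block = bytes(512)                    # the data being protected
guard = crc16_t10_dif(block)          # 2-byte guard tag (CRC of the data)
app_tag = 0x0000                      # 2-byte application tag, owner-defined
ref_tag = 0x00000000                  # 4-byte reference tag
pi_field = (guard.to_bytes(2, "big")
            + app_tag.to_bytes(2, "big")
            + ref_tag.to_bytes(4, "big"))
assert len(pi_field) == 8
```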
From Hitachi Data Systems' Hu Yoshida, we get word that both TCO concerns and the need to rein in the secondary data volumes generated by replication and backup will drive much of the storage market. At the same time, new storage services and advancing virtualization technologies will push down operational costs even as capital costs rise with the need for more capacity and better functionality. And now that solid-state storage has made significant inroads into the enterprise, look for advanced flash controllers capable of improving durability, performance and effective capacity.
Flash memory itself should gain new impetus in the enterprise as well, according to Forbes' Tom Coughlin, now that producers have figured out how to improve endurance through high-heat annealing. Macronix, for one, has said that by briefly heating NAND flash cells to temperatures as high as 800 degrees Celsius, cell integrity can be maintained for up to 100 million write cycles, versus the roughly 10,000 program/erase cycles of today's MLC parts. This should also foster more rugged TLC designs, which promise even greater density than current MLC designs.
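To see why that endurance jump matters, here is a rough back-of-envelope lifetime calculation; the capacity, write rate and write-amplification figures are assumptions chosen to represent a write-heavy caching tier, not vendor specs:

```python
# Back-of-envelope drive lifetime from P/E endurance, assuming perfect
# wear leveling. All workload figures below are illustrative assumptions.
capacity_gb = 256           # assumed drive capacity
daily_writes_gb = 2000      # assumed write-heavy caching workload
write_amplification = 2.0   # assumed controller overhead factor

def lifetime_years(pe_cycles: int) -> float:
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / daily_writes_gb / 365

print(f"~10,000 P/E cycles (today's MLC):  {lifetime_years(10_000):.1f} years")
print(f"100 million P/E cycles (annealed): {lifetime_years(100_000_000):,.0f} years")
```

Under those assumptions, endurance goes from a real operational constraint (under two years) to a non-issue.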
And in the cloud, look for a concerted push to tailor its unique capabilities toward mobility and broader data accessibility, according to data protection specialist Acronis. Citing IDC research indicating that cloud storage will be a $22.6 billion industry by 2015, Acronis expects pressure on the enterprise to do more with less to increase dramatically in the coming year, shifting the focus of cloud deployments from server scalability and hardware optimization to low-cost, manageable storage infrastructure. How else to accommodate the 2.5 quintillion bytes (2.5 exabytes) of data likely to be generated each day?
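For a sense of scale, that daily figure converts as follows (simple unit arithmetic on the estimate above):

```python
daily_bytes = 2.5e18                                     # 2.5 quintillion bytes per day
print(daily_bytes / 1e18, "exabytes per day")            # 2.5 EB/day
print(daily_bytes * 365 / 1e21, "zettabytes per year")   # ~0.91 ZB/year
```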
As I said, though, speed is likely to be the top requirement for storage systems going forward. There will always be a place to put your data, whether in the enterprise or on the cloud. The question is, once it's stored, can it be called up again in a way that both limits the demand on infrastructure and enhances productivity?