Server-side Flash technology is about to make a big push for the hearts and minds of the enterprise, not merely as a convenient way to boost application speeds and data productivity but as a key underpinning of advanced virtual and cloud architectures.
The movement got a shot in the arm this week with IBM’s release of its new X series devices, led by the X6, which features what the company calls the first DIMM-based Flash option for an enterprise-class machine. The design can put 12.8 TB of high-speed storage about as close to the processor as you can get, which would go a long way toward improving database operations for workloads like Big Data analytics and, together with the device’s modular design, toward enabling the kind of hyperscale architectures suited to the cloud. It also features self-healing CPU and memory systems, along with IBM’s broad set of virtualization and systems management tools.
In essence, what we’re seeing with these new on-server Flash platforms is a new class of hardware that can best be described as the Server SAN, according to Wikibon’s Stu Miniman. By connecting Flash modules and even direct-attached storage (DAS) systems via high-speed interconnects, enterprises gain not only the raw performance of solid-state storage but also the ability to pool storage resources across a fabric-based architecture. Sophisticated software decouples capacity management from the underlying hardware and can even optimize environments for particular workloads, ultimately collapsing today’s data silos into a more streamlined and less costly data infrastructure.
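To make the pooling idea concrete, the sketch below shows one way such a software layer might present Flash and DAS devices scattered across servers as a single logical pool, with capacity accounted for at the pool level rather than per box. This is a minimal illustration, not any vendor’s actual API; the class names, the capacity figures and the placement preference are all hypothetical.

```python
# Hypothetical sketch of a "Server SAN" capacity pool: Flash and DAS devices
# attached to different servers are exposed as one logical pool, and volumes
# are carved out of whichever device has room, preferring local media.

from dataclasses import dataclass, field


@dataclass
class Device:
    server: str        # host the media is physically attached to
    capacity_gb: int   # raw capacity of the Flash module or DAS drive
    used_gb: int = 0

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb


@dataclass
class StoragePool:
    devices: list[Device] = field(default_factory=list)

    def add(self, device: Device) -> None:
        self.devices.append(device)

    @property
    def free_gb(self) -> int:
        # Capacity is reported for the pool as a whole, not per server.
        return sum(d.free_gb for d in self.devices)

    def allocate(self, size_gb: int, prefer_server: str | None = None) -> Device:
        """Carve a volume out of the pool, preferring media local to the workload."""
        candidates = sorted(
            (d for d in self.devices if d.free_gb >= size_gb),
            key=lambda d: (d.server != prefer_server, -d.free_gb),
        )
        if not candidates:
            raise RuntimeError("pool exhausted")
        chosen = candidates[0]
        chosen.used_gb += size_gb
        return chosen


# Usage: two servers contribute capacity to one pool; a database volume lands
# on its own server when possible and spills to a peer otherwise.
pool = StoragePool()
pool.add(Device(server="node-a", capacity_gb=12800))  # e.g. a DIMM-based Flash tray
pool.add(Device(server="node-b", capacity_gb=4000))   # e.g. local DAS capacity
vol = pool.allocate(size_gb=500, prefer_server="node-a")
print(f"placed on {vol.server}, pool free: {pool.free_gb} GB")
```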
This shift from centralized to distributed storage needs to be managed carefully, however, if the enterprise is to maintain data availability and reliability, says DCIG’s Jerome Wendt. In that vein, you’ll need to establish two crucial management functions before putting server-side Flash into production environments: optimized data placement on select servers and shared access to Flash memory across multiple devices. The former places data as close to designated workload processing centers as possible, while the latter maintains the flexibility to shift loads across virtual or physical resources and ensures that data can be retrieved and delivered to multiple endpoints.
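A minimal illustration of those two functions might look like the following, assuming a simple cluster-wide catalog that records where each data set lives: one call pins data to the server running its workload, and another lets any endpoint locate and read the same data. The class, function and path names here are invented for illustration and do not reflect any particular product.

```python
# Hypothetical sketch of the two management functions described above:
# (1) placement: keep data on the Flash of the server nearest its workload, and
# (2) shared access: a catalog any node can query to locate and read that data.

from collections import defaultdict


class FlashCatalog:
    """Cluster-wide record of which server's Flash holds each data set."""

    def __init__(self):
        self.locations: dict[str, str] = {}              # dataset -> owning server
        self.readers: dict[str, set] = defaultdict(set)  # dataset -> endpoints reading it

    # --- Function 1: optimized data placement -----------------------------
    def place(self, dataset: str, workload_server: str) -> str:
        """Pin a data set to the Flash on the server running its workload."""
        self.locations[dataset] = workload_server
        return workload_server

    def migrate(self, dataset: str, new_server: str) -> None:
        """Shift the data when its workload moves to another physical or virtual host."""
        self.locations[dataset] = new_server

    # --- Function 2: shared access across devices -------------------------
    def open_shared(self, dataset: str, endpoint: str) -> str:
        """Let any endpoint (VM, container, peer server) locate the data."""
        server = self.locations.get(dataset)
        if server is None:
            raise KeyError(f"{dataset} has not been placed yet")
        self.readers[dataset].add(endpoint)
        return f"flash://{server}/{dataset}"  # fabric path the endpoint would read from


# Usage: place an analytics working set next to its compute, then hand the
# same data to a second endpoint without copying it into a separate silo.
catalog = FlashCatalog()
catalog.place("sales-q4", workload_server="node-a")
path_for_reporting_vm = catalog.open_shared("sales-q4", endpoint="reporting-vm")
catalog.migrate("sales-q4", new_server="node-b")  # workload rebalanced to node-b
```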
Indeed, when you consider what is happening not only to base storage technology but to the interconnect, the CPU, memory components and a host of other factors, it appears the storage industry is on the cusp of a radical makeover that will produce a dramatically different data center by the end of this decade, according to Enterprise Storage Forum’s Henry Newman. As the need for both scale and power efficiency influences infrastructure buying decisions, it is quite possible that many stalwart technologies like SAS, SATA, PCIe and even core processing technologies themselves will undergo significant changes in the next few years, or fade away entirely.
Server-side Flash is clearly poised to become one of the building blocks of the new modular data center, which in turn is likely to be the template for hyperscale, web-facing data environments. But even organizations that have yet to encounter the heavy data loads that cloud computing and mobile communications engender will find integrated compute-storage-networking modules both easier to work with and cheaper than today’s legacy infrastructure.