On-Server SSDs: Enterprise Minus the SAN?

Arthur Cole

The rapid advance of solid-state technology is set to remake more than just the storage architecture of the modern data center. At the pace things are going, we could well see a dramatic reduction in enterprise storage networking infrastructure within a few short years.

The portent of all this change is the increasing use of on-server SSD technology: either full-sized attached disk units or, increasingly, add-in cards and even board-level flash. At the moment, most of these deployments are limited in scope, a few gigabytes here and there to beef up system cache or provide rapid throughput for high-priority data.

But add even a rudimentary connection to these devices, say, the all-but-complete 8 GT/s PCIe 3.0 format, and you have all the makings of a close-knit, lightning-fast storage array that can be pooled and scaled to match all but the largest storage systems. And all this without the tremendous expense of provisioning and maintaining a full-blown SAN.
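To put that 8 GT/s figure in context, here is a rough back-of-the-envelope calculation of what a PCIe 3.0 link can actually carry. It assumes only the published signaling rate and 128b/130b line coding, and it ignores protocol overhead such as packet headers and flow control, so treat it as an upper bound rather than a benchmark.

```python
# Back-of-the-envelope PCIe 3.0 bandwidth, per lane and per link.
# Assumes the published 8 GT/s signaling rate and 128b/130b encoding;
# protocol overhead (TLP headers, flow control) is ignored here.

GT_PER_SEC = 8e9          # 8 gigatransfers per second per lane
ENCODING = 128 / 130      # PCIe 3.0 128b/130b line-coding efficiency

def lane_bandwidth_mbps() -> float:
    """Usable payload bandwidth of a single PCIe 3.0 lane in MB/s."""
    bits_per_sec = GT_PER_SEC * ENCODING
    return bits_per_sec / 8 / 1e6

if __name__ == "__main__":
    per_lane = lane_bandwidth_mbps()
    for lanes in (1, 4, 8, 16):
        print(f"x{lanes:<2} link: ~{per_lane * lanes / 1000:.1f} GB/s")
```

Even a modest x8 link works out to roughly 8 GB/s of raw headroom, which is why pooling on-server flash over PCIe starts to look like a credible alternative to a dedicated storage fabric.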

Check out these examples to see what I mean:

STEC just launched its MACH16 SSD line, which delivers up to 400 GB of capacity and a sustained 30,000 IOPS with 4 KB blocks. The line features a new ASIC controller that allows you to share the drive among multiple servers and provides enhanced data protection through the company's Secure Array of Flash Element (SAFE) system.
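For a sense of scale, that 30,000 IOPS figure translates to roughly 120 MB/s of sustained random throughput. Here is the arithmetic as a quick sketch, assuming 4 KiB (4,096-byte) blocks, which is my reading of the "4k blocks" in the spec:

```python
# Rough conversion of a quoted IOPS figure into throughput.
# Block size is taken as 4 KiB (4096 bytes); real workloads vary.

def iops_to_mbps(iops: int, block_bytes: int = 4096) -> float:
    """Sustained throughput in MB/s implied by an IOPS figure."""
    return iops * block_bytes / 1e6

print(f"{iops_to_mbps(30_000):.0f} MB/s at 30,000 IOPS x 4 KiB")  # ~123 MB/s
```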

An even more innovative approach comes from Viking Modular Solutions' new SATADIMM device, which packs up to 200 GB into a DDR3 DIMM form factor for installation in just about any server or PC memory slot. The drive delivers sequential read/write speeds of 260 MBps and random performance of 30,000 IOPS, and it comes with AES-128 encryption plus SMART and TRIM support. Connectivity is handled through a SATA II interface. For those of you still stuck on the idea that a disk drive must be 2.5 inches, the company also offers the new 400 GB Elemental SAS device.
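It is worth noting how close that 260 MBps figure sits to the ceiling of the SATA II link itself. A minimal sanity check, assuming the standard 3 Gbps signaling rate and 8b/10b line coding and ignoring SATA framing overhead:

```python
# Sanity check: how close is the SATADIMM's quoted 260 MB/s to what a
# SATA II (3 Gbps) link can carry at all? Assumes 8b/10b line coding and
# ignores SATA framing overhead, so the real ceiling is slightly lower.

LINK_GBPS = 3.0           # SATA II signaling rate
ENCODING = 8 / 10         # 8b/10b coding efficiency

ceiling_mbps = LINK_GBPS * 1e9 * ENCODING / 8 / 1e6
quoted_mbps = 260

print(f"SATA II payload ceiling: ~{ceiling_mbps:.0f} MB/s")
print(f"Quoted sustained rate:    {quoted_mbps} MB/s "
      f"({quoted_mbps / ceiling_mbps:.0%} of the link)")
```

In other words, the flash is already bumping against the interface, which is exactly why the move toward PCIe-attached designs matters.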

And it's not like the server vendors haven't taken notice of these developments. IBM, for one, is rapidly outfitting its lines with SSDs, primarily as a means to push TPC-C and other benchmark numbers. The latest move comes in the Power Systems family, which is gaining new SandForce-based eMLC drives backed by 3 Gbps SAS controllers. That puts up to 512 GB and 30,000 IOPS at the ready for high-speed transactions, no SAN required.

But even that configuration is rapidly losing its state-of-the-art status. LSI Corp. is out with a new 6 Gbps RAID-on-Chip (RoC) controller that doubles random IOPS in RAID 5 configurations over the 3 Gbps generation. That device, the LSISAS2208, is already primed for PCIe 3.0 operation, which will vastly simplify future migration efforts as the industry pushes forward with faster connectivity.
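The doubling claim is LSI's, but the reason controller horsepower matters so much here is the well-known RAID 5 small-write penalty: every small random write costs four back-end operations (read old data, read old parity, write new data, write new parity). A rough model, with purely illustrative drive counts and workload mix, shows how quickly that penalty eats into the raw IOPS of even fast SSDs when the controller isn't the bottleneck:

```python
# Why RAID 5 random-write performance leans so heavily on the controller:
# each small random write costs four back-end operations (read old data,
# read old parity, write new data, write new parity). A rough model of the
# front-end IOPS a drive group can sustain, assuming the controller itself
# is not the limiting factor.

def raid5_frontend_iops(drives: int, drive_iops: int, write_fraction: float) -> float:
    """Front-end random IOPS for a RAID 5 group under the 4x write penalty."""
    backend_budget = drives * drive_iops
    cost_per_frontend_io = (1 - write_fraction) + 4 * write_fraction
    return backend_budget / cost_per_frontend_io

# Example: eight SSDs at 30,000 IOPS each, 30% random writes (illustrative numbers).
print(f"~{raid5_frontend_iops(8, 30_000, 0.3):,.0f} front-end IOPS")
```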

Add all this together and you have the potential for a vastly simplified yet highly flexible storage regime. At the moment, it doesn't offer the kind of management capability you'd get from a modern SAN, but there's no reason why an enterprising young software company couldn't fill that gap should the demand arise.

And it's true that solid-state memory still carries a premium over magnetic storage. But that cost ratio is already shrinking, and once you factor in the elimination of all that SAN infrastructure, SSD ROIs start to look pretty good.
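To make that argument concrete, here is a minimal break-even sketch. Every number in it is a placeholder rather than market data; the point is the structure of the comparison, namely that the SAN side carries a large fixed infrastructure cost (switches, HBAs, array controllers, admin time) that the on-server approach avoids.

```python
# A hedged back-of-the-envelope for the ROI argument above. Every number
# here is a placeholder, not market data: plug in your own $/GB figures
# and your own SAN infrastructure costs.

def total_cost(capacity_gb: float, cost_per_gb: float, infrastructure: float) -> float:
    """Total acquisition cost: media plus supporting infrastructure."""
    return capacity_gb * cost_per_gb + infrastructure

capacity_gb = 10_000                 # hypothetical working set
ssd_in_server = total_cost(capacity_gb, cost_per_gb=2.00, infrastructure=0)
hdd_plus_san = total_cost(capacity_gb, cost_per_gb=0.10, infrastructure=25_000)

print(f"On-server SSD:  ${ssd_in_server:>9,.0f}")
print(f"HDD behind SAN: ${hdd_plus_san:>9,.0f}")
```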

We're probably not there yet, but the writing certainly is on the wall.
