The Beginning of the End of the SAN?

Arthur Cole

Before you think I'm crazy for writing this particular blog entry, please answer these two questions honestly: How much fun do you currently have managing your storage network? How expensive is it to both outfit and supply said network?

The reason I ask these two questions is that, the more I look at things, the more convinced I am that the lovable SAN as we know it today could very well be a thing of the past in a few short years. To understand why, you have to look back to the reason why SANs were developed in the first place. The short answer is that servers and storage had to be separated. Sure, you could put a disk drive on a server, but a full-blown array had to be located elsewhere due largely to its size and the massive amount of heat that both server and storage hardware generate. To make the SAN work, enterprises had to invest in lots of cable and many expensive boxes to get data from here to there.

But what if there was a way to put massive amounts of storage directly on the server without taking up a lot of room or melting the innards of your hardware? If you've already guessed SSDs, you win the prize.

I first encountered the idea of replacing SANs with SSDs late last year when interviewing Mark Peters of the Enterprise Strategy Group for this article on ways to integrate SSDs into magnetic environments. At the time, even he hedged his comments a bit, saying that while it was a theoretical possibility, magnetic-disk-based SANs would probably co-exist with SSDs for quite some time.

That may be, but the fact is that some of the newest SSD technology coming out is proving so fast, and provides so much capacity, that it makes you wonder why anyone would want to deal with the hassles of a SAN when practically all the storage you need is available over a simple PCIe connection.

As evidence, I give you two recent releases. The first is the RamSan-20, a single-level cell (SLC) NAND flash device from Texas Memory Systems that offers 450 GB over a PCIe x4 slot. It has a 333 MHz PowerPC chip for a controller and can deliver 120,000/50,000 IOPS read/write performance. It also has error-correction technology and four Xilinx RAID chipsets, all within a 15-watt power envelope.

Not good enough? Well, try the ioDrive Duo from Fusion-io. When released later this year, it will have a capacity of 1.28 TB and clock in at 186,000/167,000 read/write IOPS. What's more, you can link the devices together to generate up to 500,000 IOPS at data rates of 6 Gbps. Multi-bit error detection and correction, on-board self-healing and RAID-1 mirroring between modules are also part of the mix.

As these devices are brand new, we have yet to see what kinds of actual deployments are coming down the channel. At the moment, server/storage vendors are toying with the lower-end flash offerings of last fall. Sun Microsystems, for example, recently launched a new line of x64 rack servers and CMT blades outfitted with Intel's X25-E drives. They top out at 64 GB and deliver 35,000/3,300 read/write IOPS.
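To put those IOPS numbers in context, here is a rough conversion into bandwidth. The 4 KB transfer size below is my own assumption for illustration (not a vendor figure), and real workloads vary; the point is simply that a single PCIe card can approach or exceed what a 4 Gbit/s Fibre Channel SAN link (roughly 400 MB/s of usable throughput) can carry.

```python
# Rough IOPS-to-bandwidth conversion for the devices mentioned above.
# The 4 KB transfer size is an assumption for illustration, not a vendor spec.

BLOCK_SIZE = 4 * 1024  # bytes per I/O (assumed)

def mb_per_s(iops, block_size=BLOCK_SIZE):
    """Convert an IOPS figure to MB/s at the assumed transfer size."""
    return iops * block_size / (1024 ** 2)

devices = {  # (read IOPS, write IOPS) as quoted in the post
    "RamSan-20":   (120_000, 50_000),
    "ioDrive Duo": (186_000, 167_000),
    "Intel X25-E": (35_000, 3_300),
}

for name, (read_iops, write_iops) in devices.items():
    print(f"{name}: ~{mb_per_s(read_iops):.0f} MB/s read, "
          f"~{mb_per_s(write_iops):.0f} MB/s write")

# A 4 Gbit/s Fibre Channel link carries roughly 400 MB/s after encoding overhead.
```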

Once you start putting serious capacity on the backs of servers, though, you'll have to deal with data management issues, like how to ensure that users on one server can access data stored on another.
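As a toy illustration of that problem, and nothing more, a server with a large local flash card could export a directory over the network so other machines can reach the data. The mount point and port below are hypothetical placeholders, and a real deployment would reach for NFS, iSCSI or a clustered file system rather than plain HTTP:

```python
# Toy sketch only: expose a directory on a server's local PCIe flash over HTTP
# so applications on other servers can still read the data. A real deployment
# would use NFS, iSCSI or a clustered file system; the path and port here are
# hypothetical placeholders.
import functools
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

FLASH_MOUNT = "/mnt/local-flash"   # hypothetical mount point for the SSD
PORT = 8080                        # arbitrary port for the example

if __name__ == "__main__":
    os.makedirs(FLASH_MOUNT, exist_ok=True)
    handler = functools.partial(SimpleHTTPRequestHandler, directory=FLASH_MOUNT)
    HTTPServer(("0.0.0.0", PORT), handler).serve_forever()
```

Crude as it is, it makes the point: the moment capacity moves onto the server, you have to decide how everyone else gets at it.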

But great things often come from small beginnings. And while it's true that most enterprises aren't about to cast their SANs aside on a whim, the idea of a SAN-less enterprise doesn't sound so crazy to me anymore.

Mar 13, 2009 1:26 AM Anthony says:

"The short answer is that servers and storage had to be separated. Sure, you could put a disk drive on a server, but a full-blown array had to be located elsewhere due largely to its size and the massive amount of heat that both server and storage hardware generate."

That is not today's driver for a SAN; storage management and provisioning storage to servers is the driver. SANs are the way forward for any medium- to large-scale server deployment. Your point of view is only valid if your architecture forced you to adopt a SAN against your will and you yearn to put the problems back in place.

In short, it's the beginning of the death of the hard drive and the evolution of the SAN.

Mar 13, 2009 7:59 AM David Flynn says:

A clarification and a quick comment...

First, that's 6 GBps, or GBytes/s, not 6 Gbps. So it's more like 60 Gbps. In other words, it's actually a very significant fraction of the memory bandwidth. (We've pushed up to 9 GBytes/s in some HP boxes.) Try sucking that across a glass straw to a SAN.

Also, note that HP last week released their IO Accelerator for their c-Class BladeSystem. And since it's based on Fusion-io's ioMemory technology (that's on the PCIe bus), it blows away Sun's offerings, which are SATA-attached (and much more meager in capacity).

One can get 32 of these IO Accelerators across the 16 blades of a single c-Class chassis and have roughly 10 TBytes of capacity with over 3.2 million IOPS and 25 GBytes/s of bandwidth - all from within just a quarter rack. Best of all, the storage takes no additional space - you still get all 16 servers.

Add 10 GigE to each blade and an iSCSI stack, maybe a little fail-over and replication between blades, and there you go - call it a centralized storage appliance.

Point is, with this kind of capacity and performance density, the differentiation between servers and storage appliances/SANs is disappearing....

Goodbye proprietary, vertically integrated storage infrastructure, goodbye SAN.

-David Flynn

CTO Fusion-io
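A quick back-of-the-envelope check of the quarter-rack figures quoted in the comment above; the per-device numbers in this sketch are inferred by dividing the quoted totals by 32, not taken from any spec sheet:

```python
# Back-of-the-envelope check of the quarter-rack figures quoted above.
# Per-device values are inferred by dividing the quoted totals by 32 devices;
# they are not taken from any vendor spec sheet.

devices = 32                    # 2 IO Accelerators per blade x 16 blades

total_capacity_gb = 10 * 1024   # "roughly 10 TBytes"
total_iops = 3_200_000          # "over 3.2 million IOPS"
total_bw_mb_s = 25 * 1024       # "25 GBytes/s"

print(f"Per device: ~{total_capacity_gb / devices:.0f} GB, "
      f"~{total_iops / devices:,.0f} IOPS, "
      f"~{total_bw_mb_s / devices:.0f} MB/s")
# -> Per device: ~320 GB, ~100,000 IOPS, ~800 MB/s
```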

