
The Beginning of the End of the SAN?

Arthur Cole

Before you think I'm crazy for writing this particular blog entry, please answer these two questions honestly: How much fun do you currently have managing your storage network? How expensive is it to both outfit and supply said network?


The reason I ask these two questions is that, the more I look at things, the more convinced I am that the lovable SAN as we know it today could very well be a thing of the past in a few short years. To understand why, you have to look back to the reason why SANs were developed in the first place. The short answer is that servers and storage had to be separated. Sure, you could put a disk drive on a server, but a full-blown array had to be located elsewhere due largely to its size and the massive amount of heat that both server and storage hardware generate. To make the SAN work, enterprises had to invest in lots of cable and many expensive boxes to get data from here to there.


But what if there was a way to put massive amounts of storage directly on the server without taking up a lot of room or melting the innards of your hardware? If you've already guessed SSDs, you win the prize.


I first encountered the idea of replacing SANs with SSDs late last year when interviewing Mark Peters of the Enterprise Strategy Group for this article on ways to integrate SSDs into magnetic environments. At the time, even he hedged a bit, saying that while it was a theoretical possibility, magnetic-disk-based SANs would probably co-exist with SSDs for quite some time.


That may be, but the fact is that some of the newest SSD technology coming out is proving so fast, and offers so much capacity, that you have to wonder why anyone would put up with the hassles of a SAN when practically all the storage you need sits on the other end of a simple PCIe connection.


As evidence, I give you two recent releases. The first is the RamSan-20, a single-level cell (SLC) NAND flash device from Texas Memory Systems that offers 450 GB over a PCIe x4 slot. It has a 333 MHz PowerPC chip for a controller and can deliver 120,000/50,000 IOPS read/write performance. It also has error-correction technology and four Xilinx RAID chipsets, all within a 15-watt power envelope.


Not good enough? Well, try the ioDrive Duo from Fusion-io. When released later this year, it will have a capacity of 1.28 TB and clock in at 186,000/167,000 read/write IOPS. What's more, you can link the devices together to generate up to 500,000 IOPS at data rates of 6 Gbps. Multi-bit error detection and correction, on-board self-healing and RAID-1 mirroring between modules are also part of the mix.
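
To put those IOPS figures in perspective, here is a rough back-of-the-envelope comparison in Python. The per-spindle number is my own assumption (a 15k RPM enterprise disk is usually credited with something on the order of 180 random IOPS), so treat this as a sketch, not a benchmark:

    # Rough comparison of PCIe flash IOPS against spinning disks.
    # The per-disk figure is an assumption, not a vendor number.
    HDD_RANDOM_IOPS = 180  # assumed: typical 15k RPM enterprise drive

    ssd_cards = {
        "RamSan-20 (read)": 120_000,
        "RamSan-20 (write)": 50_000,
        "ioDrive Duo (read)": 186_000,
        "ioDrive Duo (write)": 167_000,
    }

    for name, iops in ssd_cards.items():
        spindles = iops / HDD_RANDOM_IOPS
        print(f"{name}: {iops:,} IOPS ~ {spindles:,.0f} 15k RPM spindles")

Even with a generous margin of error on that assumption, a single PCIe card is doing the random-I/O work of several hundred spinning disks.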


As these devices are brand new, we have yet to see what kinds of actual deployments are coming down the channel. At the moment, server/storage vendors are toying with the lower-end flash offerings of last fall. Sun Microsystems, for example, recently launched a new line of x64 rack servers and CMT blades outfitted with Intel's X25-E drives. They top out at 64 GB and deliver 35,000/3,300 read/write IOPS.


Once you start putting serious capacity on the backs of servers, though, you'll have to deal with data management issues, such as ensuring that users on one server can access data stored on another.
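
To make that concrete, here is about the most naive sketch I can think of, in Python, of one server exposing its local flash to its neighbors over the network. The mount point is a hypothetical path, and anything an enterprise actually relied on would need the locking, caching and failover semantics that NFS, iSCSI, a clustered file system, or today's SAN fabric provide:

    # Naive illustration only: share files on a local PCIe flash card over HTTP.
    # Real deployments would use NFS, iSCSI or a clustered file system instead.
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    FLASH_MOUNT = "/mnt/flash"  # hypothetical mount point for the flash card

    handler = partial(SimpleHTTPRequestHandler, directory=FLASH_MOUNT)
    HTTPServer(("0.0.0.0", 8080), handler).serve_forever()

In other words, dropping the SAN doesn't eliminate the shared-storage problem; it just moves it up the stack.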



But great things often come from small beginnings. And while it's true that most enterprises aren't about to cast their SANs aside on a whim, the idea of a SAN-less enterprise doesn't sound so crazy to me anymore.

