SSDs Are the Future, but Integration Issues Remain

Arthur Cole

Solid-state storage is quickly making the transition from novelty to standard enterprise practice. Yet many organizations are still struggling to find the best way to deploy the technology so that it serves both today's legacy architectures and tomorrow's virtual, software-defined environments.

As Veeam Software’s Rick Vanover notes, SSD prices continue their steady decline, making them a much more viable option for boosting storage performance, particularly in end-user devices. Within the data center itself, however, things get trickier: the devices must mesh with a broader infrastructure roadmap that touches everything from servers to networking gear. Tools like the Flash Read Cache in vSphere 5.5 or Windows Server's new Storage Spaces feature, often paired with Hyper-V, help spread SSD capabilities across pooled resources, but effective deployment will still depend very much on how thoroughly virtualization takes root in the enterprise, and on whether traditional storage infrastructure will remain in place to provide more effective tiering.

It is also becoming clear that server-side flash technologies will play a bigger role in enterprise settings, most likely by converging data infrastructure onto integrated compute modules. SanDisk, for example, has turned to Diablo Technologies to place an SSD directly on the DRAM memory channel, a solution aimed squarely at high-frequency data delivery across scale-out architectures. The resulting ULLtraDIMM SSD is available in 200 GB and 400 GB configurations at the moment and also provides the advanced management and security functions of the company’s Guardian Technology Platform. IBM has already deployed the Diablo technology in its System x servers and is reporting latencies as low as 5 microseconds.
Meanwhile, Toshiba is about to extend its leverage in the NAND flash market even further into the enterprise with the purchase of OCZ Technology Group. The move brings a robust line of SATA, SAS and PCIe devices under Toshiba’s wing, along with supporting technologies like caching and acceleration software. Not long ago, OCZ released the Intrepid 3000 drive with capacities up to 800 GB, which should dovetail nicely with Toshiba’s controller and firmware technologies.

With all the chatter about speed and capacity, however, it is easy to overlook another crucial aspect of solid-state storage: reliability. According to a recent report on ExtremeTech, the Intel 320 drive gets top billing for power-loss protection, holding up even when power is cycled during multi-threaded read-sync-write operations. The tests were limited in scope, but they point to a crucial requirement for any enterprise increasing its reliance on solid state: uninterruptible power supplies and data protection software should be core components of the new storage environment right from the start.
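The reliability point above has a software-side corollary: a drive's power-loss protection only matters for data the application has actually flushed to it. As a minimal, illustrative sketch (the file names and function are hypothetical, not from any product mentioned here), a durable write in Python might look like this:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data so it survives a sudden power loss."""
    # Write to a temporary file first, so a crash mid-write never
    # leaves a half-written file at the final path.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()              # drain the userspace buffer
        os.fsync(f.fileno())   # ask the OS to push data to the device
    os.replace(tmp, path)      # atomic rename on POSIX and Windows

durable_write("checkpoint.bin", b"critical state")
```

Even then, fsync only guarantees the data reached the drive; if the drive acknowledges writes from a volatile cache without capacitor backing, a power cut can still lose them, which is exactly the behavior the power-loss tests probe.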

These days, speed and agility are supplanting raw capacity and processing power as the critical elements of data environments, which means solid-state storage will remain the solution of choice for organizations looking to bring storage into equilibrium with its server and networking counterparts.

But a fairly broad integration challenge remains, and it is by no means certain that drive manufacturers yet have all the answers as to how solid-state storage can best bridge the divide between legacy infrastructure and burgeoning dynamic data platforms.


