Here Comes the All-Flash Data Center

Arthur Cole

The world’s data environments are changing as users and the enterprise wake up to the possibilities offered by cloud computing, mobile access and social media.

It’s only natural, then, that new forms of data infrastructure should arise to help deliver on these expectations while keeping costs at a manageable level. And since data speed, rather than the storage capacity and processing power that were so crucial to earlier generations, is now the primary driver of productivity, the rise of flash storage architectures is all but inevitable.

Earlier this month, Fusion-io took this to an entirely new level with the release of the ioScale platform, a scale-out version of the ioDrive that the company plans to ship in packages of at least 100 units. Providing more than 3 TB on a single half-length PCIe slot, the device can place nearly 13 TB in a small form-factor server, leading some to call it the harbinger of the fully flash-based data center. The system also provides built-in compatibility with Fusion-io’s ioMemory software development kit, which includes Atomic Writes, directFS and other APIs that allow numerous applications to run natively on flash.
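A quick back-of-the-envelope check shows how the two figures above fit together. This is only a sketch: the per-card capacity of 3.2 TB and the four-slot count are assumptions chosen to match the article’s “more than 3 TB” and “nearly 13 TB” numbers, not specifications from Fusion-io.

```python
# Hedged sketch: reproduce the capacity math implied by the article,
# assuming 3.2 TB per ioScale card (the "more than 3 TB" figure) and
# four half-length PCIe slots in a small form-factor server.
per_device_tb = 3.2   # assumed capacity of one ioScale device
slots = 4             # assumed number of usable PCIe slots

total_tb = per_device_tb * slots
print(total_tb)       # 12.8 -- i.e., "nearly 13 TB"
```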

The question remains, though: who will benefit most from an all-flash infrastructure? As I mentioned earlier, anyone who values speed over raw capacity will probably be first in line. And these days, that includes top web-facing enterprises like Facebook, which is already said to be deeply invested in Fusion-io technology and likely had a strong hand in the development of the ioScale system. In fact, according to Fusion-io CEO David Flynn, Facebook is already looking past the “hyperscale acceleration” model for its data centers and is working on a “disaggregated, rack-scale architecture” featuring “silicon photonics” technology — effectively placing optical data transmission on the silicon layer.

How much will all this affect the mainstream IT industry? Well, it depends on the definition of mainstream. Traditional enterprises will most likely continue to employ a mix of solid-state and mechanical storage as a means to address the various data tiers that are becoming increasingly commonplace. But as Wikibon CEO David Floyer pointed out earlier this week, flash will likely be the home for active data, while low-performance disk and tape architectures will be relegated largely to bulk, long-term storage.
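The tiering model described above — flash for active data, disk and tape for bulk, long-term storage — can be illustrated with a toy policy based on access recency. The thresholds here are illustrative assumptions of mine, not anything prescribed by Wikibon or Fusion-io.

```python
from datetime import datetime, timedelta

# Toy illustration of the tiering idea: route data to flash while it
# is "active," then demote it to disk and finally tape as it goes cold.
# The 7-day and 365-day cutoffs are arbitrary, for illustration only.
def choose_tier(last_access: datetime, now: datetime) -> str:
    age = now - last_access
    if age < timedelta(days=7):        # recently touched: keep on flash
        return "flash"
    elif age < timedelta(days=365):    # cooling off: low-cost disk
        return "disk"
    return "tape"                      # cold archive: bulk, long-term

now = datetime(2013, 2, 1)
print(choose_tier(now - timedelta(days=2), now))    # flash
print(choose_tier(now - timedelta(days=400), now))  # tape
```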

It would seem, then, that flash technology satisfies the increased demand for always-on, always-available data environments that are increasingly supplanting the traditional siloed, static architectures that have been built up over the years. We haven’t reached critical mass just yet, but it isn’t hard to imagine a not-too-distant future in which cloud-based data and services are seen as the norm, rather than the exception.

And once that happens, everything that many of us old-timers used to think of as cutting-edge technology, disk storage included, will seem quaint to the new generation of knowledge workers.


