Storage has emerged as one of the primary applications for public cloud services, so it should come as no surprise that it has become the target of an increasing number of private cloud platforms as well.
The past few months have seen a swell of private cloud offerings optimized for storage, promising enterprises both lower costs and a degree of flexibility and scalability that traditional storage infrastructures can't match.
One of the newest is the CloudFolder system from Caringo Inc. It's a free application for Windows designed to add cloud capabilities to the company's CAStor cluster. The idea is to expose the CAStor cluster through standard Windows folders, enabling simple drag-and-drop transfer of files or entire directories across shared repositories created by the CloudFolder app. The system can be used to establish a private cloud with a minimum 4 TB CAStor license, or it can be offered online to accommodate mobile or distributed workforces.
A similar download is available from ParaScale, which has devised the ParaScale Cloud Storage (PCS) platform to act as a metadata controller on Linux servers. Using Gigabit Ethernet connectivity, the system turns the servers' file storage into a single global namespace that can be accessed over NFS, WebDAV, HTTP or FTP. Because it uses an object-based cluster file system, adding capacity is a simple matter of hooking up another server, which has the added benefit of boosting I/O bandwidth, since all network nodes operate in parallel.
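The scaling property described above can be sketched with a toy object-placement scheme. This is a generic illustration of how an object-based cluster file system might spread a global namespace across storage nodes, not ParaScale's actual algorithm; all names in the example are hypothetical:

```python
import hashlib
from collections import Counter

def node_for(path: str, nodes: list[str]) -> str:
    """Deterministically hash an object's path to pick a storage node.

    Because placement is spread evenly by the hash, reads and writes
    for different objects hit different nodes and proceed in parallel.
    """
    digest = hashlib.sha256(path.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Hypothetical three-node cluster and a batch of object paths.
nodes = ["node-a", "node-b", "node-c"]
paths = [f"/data/run{i}.img" for i in range(1000)]

# Count how many objects land on each node: the load is roughly even,
# so adding a fourth server would add both capacity and a fourth
# node's worth of aggregate I/O bandwidth.
load = Counter(node_for(p, nodes) for p in paths)
print(dict(load))
```

The design trade-off this sketches is why capacity and bandwidth grow together in such systems: there is no single head controller in the data path, so each new node serves its own share of the namespace.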
Private cloud storage is particularly useful for organizations that deal with extremely large data sets, as traditional storage systems can easily max out during periods of increased activity. The Stanford Genome Technology Center is a case in point. Researchers there set up an internal cloud storage environment to handle the data coming out of their genome sequencers, which can produce images of up to 4 TB. Not only can the cloud scale more quickly than the center's existing HPC cluster, it also doesn't require a team of NAS or SAN experts to manage it.
IBM is also eager to show what cloud technology can offer, so it has set up its own internal cloud that company researchers have so far used for upwards of 640,000 compute hours. The system is part of the company's Technology Adoption Program that acts as an "online sandbox" for developers to experiment with new tools and techniques. Of course, it also acts as a showcase to demonstrate to clients what cloud technology, particularly IBM's approach, can do for them.
With the simplicity of many of the new private cloud offerings, it's tempting to fall into the old trap of assuming the new technology will sweep aside current state-of-the-art systems and usher in an entirely new era of productivity. At the moment, however, that doesn't appear to be the case. Most vendors will readily admit that cloud technology works best as part of an integrated infrastructure combining the best of the old and the new.
But with private cloud storage available as free downloads for existing systems, it's a safe bet that the technology will quickly wind its way into the enterprise. And after that, who knows?