Is the so-called "private cloud" truly a worthwhile endeavor?
That's the question many IT professionals are asking themselves as more and more vendors pitch the concept as a way to add cloud-like functionality to existing data center infrastructures.
But beyond the complaints of purists who argue that cloud computing, by its nature, requires external, third-party resources, there are legitimate concerns that internal clouds may not be as useful as proponents claim.
The primary criticism is the cost factor. The main appeal of the public cloud is that it provides a full set of IT resources without the expense, mostly on the hardware side, of building it from scratch. If you already have that infrastructure in place, or are planning to build it anyway, what exactly are you saving by turning it into a cloud?
It's a good question, but as Stephen Foskett points out in his Cloudonomics blog, the private cloud still has value even without the Capex calculation. You get a tremendous amount of flexibility when it comes to deploying new applications or shifting data loads, for example, and you also have a controlled test environment to determine what works and what doesn't before heading out to the public cloud.
That's part of the reason many traditional IT firms are looking to branch into both private and public cloud services. Microsoft, for one, has been wrestling with the delivery of its Azure service for private use. While the company has said it won't provide the service on local servers, it will add many of its features to the Dynamic Data Center Toolkit for Enterprises. That way, clients can dip their toes into the cloud at their own pace, and have an Azure-compatible infrastructure in place once they decide to go public.
And then there are companies like Rackspace, which just announced a private cloud service that is nevertheless hosted on its own infrastructure. Say again? How can a third party deliver a private cloud from servers the client doesn't own? By offering each customer a single-tenant architecture rather than sharing resources among multiple customers, CTO John Engates told me.
"Private clouds are always done on dedicated infrastructure behind the private firewall," Engates said. "Ours is the same because it is still behind the customer firewall that we have dedicated to them in our datacenter. It's no different from a customer having a co-located datacenter because we are not sharing any hardware resources at all."
In this way, he said, enterprises avoid the upfront capital costs of traditional data center infrastructure but still gain the security and availability of a private cloud. The company has also taken steps to open up its cloud architecture by issuing its Cloud Servers and Cloud Files APIs under the Creative Commons 3.0 Attribution license. Developers can now copy, implement and even modify any of these specs while ensuring that their applications remain compatible across public, private and hybrid clouds.
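To give a sense of what building against such a spec looks like, here is a minimal sketch of constructing the JSON body for a server-provisioning request. The endpoint shape, field names (`imageId`, `flavorId`) and values are illustrative assumptions loosely modeled on a Cloud Servers-style REST API, not the actual Rackspace specification.

```python
import json

# Illustrative only: field names and IDs are assumptions modeled
# loosely on a Cloud Servers-style spec, not the real Rackspace API.
def build_create_server_request(name, image_id, flavor_id):
    """Build the JSON body for a hypothetical server-create call."""
    return json.dumps({
        "server": {
            "name": name,       # label for the new instance
            "imageId": image_id,   # which OS image to boot from
            "flavorId": flavor_id, # which hardware size to use
        }
    })

body = build_create_server_request("web01", 112, 2)
print(body)
```

Because the spec is openly licensed, the same request shape could in principle target a public, private or hybrid deployment simply by changing the endpoint it is sent to.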
Despite these developments, the future of the cloud is still, well, cloudy. There simply is not enough real-world experience yet to determine whether any of these approaches is truly cost-effective or provides the functionality today's enterprises need.
In theory, it looks good, but until the early adopters are ready to place significant data loads onto the cloud, all we have to go on is conjecture.