Now that enterprises are becoming more comfortable with the cloud computing model for basic applications like backup and recovery, attention is starting to shift toward some of the more advanced possibilities.
Key among them is Infrastructure-as-a-Service (IaaS), which promises not only software and operating instances on demand, but entire data environments. While a number of high-profile services are up and running (most of the time, anyway), the question remains how close we are to making the transition from ad hoc service-based infrastructure to full utility computing.
At the moment, implementing a working IaaS architecture is a bit more complicated than switching the lights on. Hardware and software integration, network pathways, usage and governance policies and a range of other items generally mark the "to-do" list when it comes to establishing cloud infrastructure. However, it seems the process is becoming more streamlined, particularly as enterprises transition to more cloud-like architecture within their own data centers.
And although it's too early to tell, hopes are high that the new generation of IaaS platforms will make it easier for enterprises, or individual business units, to spin up the resources they need to handle burgeoning workloads. The Google Compute Engine (GCE), for example, lets you run a Linux VM over a KVM hypervisor, along with associated network and storage capabilities. It also provides access to the Google App Engine service, allowing users to access and mash up multiple apps from Puppet Labs, RightScale and others.
Unlike other IaaS offerings, however, GCE is said to have a simplified user interface designed to appeal to business managers and other non-specialists rather than IT pros. The system uses a browser-based console and RESTful interface that provides simple steps to create instances, set up firewalls and the like. Apparently, the idea is not to appeal so much to the Fortune 500 crowd, which is already fairly advanced when it comes to the cloud, but to the broader mid-level market that may be more willing to adopt utility-style services if the cost, availability and reliability are right.
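To give a flavor of that RESTful approach, a "create instance" request might be assembled along the following lines. This is a minimal Python sketch, not official Google code: the project name, zone, machine type and image string are placeholder assumptions, and the exact endpoint and body fields have varied across GCE API versions.

```python
import json

def build_instance_request(project, zone, name, machine_type, image):
    """Assemble a hypothetical URL and JSON body for a 'create instance'
    call against a Compute Engine-style REST endpoint. Field names follow
    the general shape of the GCE API but are illustrative only."""
    url = (f"https://compute.googleapis.com/compute/v1/"
           f"projects/{project}/zones/{zone}/instances")
    body = {
        "name": name,
        "machineType": f"zones/{zone}/machineTypes/{machine_type}",
        # A single boot disk built from a source image.
        "disks": [{
            "boot": True,
            "initializeParams": {"sourceImage": image},
        }],
        # Attach the instance to the default network.
        "networkInterfaces": [{"network": "global/networks/default"}],
    }
    return url, json.dumps(body)

# Placeholder project, zone, machine type and image values.
url, payload = build_instance_request(
    "example-project", "us-central1-a", "demo-vm",
    "n1-standard-1", "projects/example/global/images/my-linux-image")
print(url)
```

The point of the exercise is how little ceremony is involved: a single authenticated POST of a small JSON document stands in for what used to be a procurement and provisioning cycle.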
But it's not just the Googles and the Amazons that are tapping into utility services. Smaller firms like Wipro Technologies are expanding their IaaS footprints in pursuit of global utility platforms aimed at enterprise-class operation. The company's iStructure system provides multi-tenant virtual server hosting intended for mission-critical compute, storage, security and other services. The package is slated to roll out across Europe, the U.S. and India, providing infrastructure, application and business process outsourcing.
Utility computing does not necessarily require a cloud, however. Companies like AOL are busy remaking their internal infrastructure to make it more like the automated environments found in power and communications grids. The company recently unveiled its "Micro Data Center," intended to provide services to various offices with little or no human oversight — sort of like the switch or distribution boxes that populate most utility networks. The units feature dense processing, low power consumption, and remote support, maintenance and administration, providing a robust data environment in the field under centralized management. The company hopes to drive not only a more efficient and less costly data environment, but one that is more resilient to outages and more responsive to local needs and conditions.
In many ways, IaaS is simply the utility computing model under a different name. But there are some key distinctions. For one, few organizations seem willing to place their entire data infrastructure in the hands of a third party like they do with electricity or telephone services. This might change, however, as comfort levels increase.
In the early 20th century, it was common for factories to generate their own electricity. All it took to make the switch was a stable infrastructure and a compelling pricing model.