Most enterprises are far enough into the cloud deployment process to understand that there is more than one type of cloud. At the moment, many organizations are content to spin up a few hosted resources to gain extra storage or run a few key applications. But as cloud strategies mature, the style of cloud implemented across private and public resources, and the infrastructure that supports it, can have a dramatic impact on future data objectives.
As I’ve pointed out, hybrid architectures are only as good as the private cloud allows them to be, and so far only a handful of organizations are pursuing what leading experts deem to be a true private cloud strategy. Part of this is because the cloud is still an ill-defined concept, but legacy infrastructure can be a major drag as well—particularly when it consists primarily of silo-based, bare-metal architecture. So clearly, the first step in any coordinated cloud strategy is to implement virtual and software-defined infrastructure to the broadest extent possible.
But if the road to public and hybrid cloud operations runs through the private cloud, what sort of private cloud should the enterprise strive for? At the moment, there are three generally accepted classes of cloud: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Choosing among them is basically a matter of how much of your data environment you want to deploy on the cloud: just the applications (SaaS), the environment that apps are developed and deployed within (PaaS), or a top-to-bottom data center (IaaS).
When it comes to implementing a workaday hybrid cloud that is capable of application support, data bursting and the like, consensus is starting to point to PaaS as the most viable option. As Apprenda’s Sinclair Schuller points out, PaaS on the private cloud overcomes some of the key roadblocks that prevent internal apps from utilizing external resources—namely resource dependencies, performance, security and data migration. With PaaS, enterprises can establish policy-based data and application environments that can be easily integrated with public IaaS architectures.
This is why IBM and others have been quick to build up their own PaaS portfolios. Big Blue recently forged an agreement with Pivotal to back the open source Cloud Foundry PaaS project. The goal is to provide the enterprise with a framework for internal PaaS environments based on the OpenStack format that can easily integrate with leading public IaaS services from Amazon, VMware and others. IBM’s contribution will be in the form of a new buildpack for the WebSphere Application Server that can be used in place of Cloud Foundry’s Java-based solution.
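To make the buildpack mechanism concrete, here is a minimal sketch of a Cloud Foundry application manifest that swaps in an alternate buildpack at deploy time. The application name and the buildpack URL are hypothetical placeholders for illustration, not IBM’s actual artifact:

```yaml
# manifest.yml — read by `cf push` when the app is deployed
applications:
- name: orders-service            # hypothetical application name
  memory: 1G
  instances: 2
  # Override Cloud Foundry's default buildpack detection and point
  # at an alternate buildpack; this URL is a placeholder only.
  buildpack: https://example.com/websphere-buildpack.git
```

Running `cf push` in the directory containing this manifest would stage the application with the specified buildpack instead of the platform’s default Java buildpack, which is the substitution the IBM contribution enables.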
But if most public clouds provide IaaS, shouldn’t the enterprise devise the same architecture internally to create a single, broadly scalable data ecosystem? It’s a matter of complexity, says Andrew Binstock of Dr. Dobb’s Journal. With IaaS, you usually get a stripped-down virtual machine with perhaps a Linux OS, which means it still needs all the language, logic and other components necessary to make it function, plus additional coding to integrate it into the broader data environment. With PaaS—again, only on the in-house side of the cloud—enterprises gain a complete set of bundled services that can be pre-configured with each additional instance. On a smaller scale, it’s the difference between building a PC from components and loading the OS and all the programs yourself, and plugging in a fully integrated machine right out of the box.
Simple geometry holds that a straight line is the shortest distance between two points. For enterprises looking to make the move from legacy infrastructure to the hybrid cloud, it seems that the straightest line is PaaS.