As far as pissing matches go, the emerging dispute over public vs. private cloud is right up there with Microsoft vs. Apple, Ford vs. Chevy and Pepsi vs. Coke. And the funny thing is, no controversy really exists at all, except in the minds of service providers and vendors who have product lines to protect.
Amazon’s Andy Jassy was at it again this week, telling the audience at the company’s re:Invent conference that private clouds are simply a ruse on the part of “old-guard” companies like IBM to keep the enterprise in thrall to yesterday’s hardware and software platforms. The public cloud, he argued, is not only cheaper and more agile but more reliable and, yes, more secure than any internal infrastructure you care to name. And even though AWS provides tools like VPNs and access management to help the enterprise with its hybrid infrastructure, these, in his telling, are merely the first step in porting the entire enterprise data center to the public cloud.
Hmmm. I give Jassy credit, at least, for being a master marketer extolling his company’s strengths. And while it’s true that IBM is far behind in its ability to muster a public cloud offering, having shed its SmartCloud platform in favor of the newly acquired SoftLayer, the fact remains that Big Blue still has a lot of computing savvy at its disposal—like the Watson supercomputer—as the enterprise makes the transition from traditional silo-based architecture to advanced cloud configurations.
Make no mistake: even if companies like IBM, HP, Dell and others of the “old guard” throw their weight behind private and hybrid clouds, that isn’t likely to be a losing proposition. According to market research firm Technology Business Research, private clouds will drive close to $70 billion in sales by 2018, and oddly it will be the public cloud that drives demand for private infrastructure, not the other way around. The firm’s take is that initial private rollouts are intended to deliver key functionality that public services can’t, and this will ultimately drive demand for increased flexibility and lower operating costs within the enterprise data center. The key challenge going forward, then, is determining which workloads belong on the public side and which should stay in-house.
This is where we get to the heart of the disconnect in the public vs. private debate, says CSG International’s Alam Gill. Just as a fruit salad can have both apples and oranges, so too can emerging data infrastructure consist of both public and private cloud resources. By presenting the cloud as an either/or proposition, companies like Amazon do a disservice to themselves and to the broader data community by limiting the scope of cloud technology and other high-tech advances. Indeed, many of the cost-saving and flexibility-enhancing capabilities of the public cloud work equally well behind the corporate firewall, so the real challenge is not in choosing the “correct” cloud architecture but in leveraging all available resources to drive innovation and produce the greatest return on the IT investment.
At some point, of course, we’ll stop using terms like “cloud” because the technology will simply be the standard means to implement data environments, and future IT techs may even shudder at the thought of the sprawling, convoluted hardware behemoths that were once standard facets of the data center. In fact, if all goes according to plan, human operators will rarely trouble themselves with whether data or applications are housed internally or externally—save for perhaps the most critical, valuable operations—as those decisions will be made by policy-based automation and orchestration platforms or by the applications themselves.
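To make that last idea concrete, here is a minimal, purely hypothetical sketch of what policy-based workload placement might look like. The policy attributes (data sensitivity, burstiness) and the rules themselves are illustrative assumptions, not the API of any actual orchestration product:

```python
# Hypothetical sketch: a tiny policy engine that decides whether a
# workload lands on public or private infrastructure. Real orchestration
# platforms apply far richer policies (compliance, latency, cost models),
# but the decision structure is the same.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_sensitivity: str  # assumed levels: "low", "medium", "high"
    bursty: bool           # does demand spike unpredictably?

def place(workload: Workload) -> str:
    """Return 'private' or 'public' based on simple policy rules."""
    # Rule 1: highly sensitive data stays behind the corporate firewall.
    if workload.data_sensitivity == "high":
        return "private"
    # Rule 2: bursty workloads benefit from public-cloud elasticity.
    if workload.bursty:
        return "public"
    # Default: steady, non-sensitive workloads stay in-house, where
    # costs are predictable.
    return "private"

print(place(Workload("customer-records", "high", bursty=False)))  # private
print(place(Workload("web-frontend", "low", bursty=True)))        # public
```

The point of such a layer is exactly what the paragraph above anticipates: once placement rules are encoded as policy, human operators no longer decide case by case where data and applications live.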
For the enterprise, then, the key takeaway is not to get caught up in vendor spats. These are par for the course in any technology upgrade, and the most passionate argument for or against any given solution is usually driven by profits rather than altruism.