Scale has always been at the heart of the enterprise’s desire to get on the cloud. Whether the driver is an advanced Big Data platform or a surge in mobile traffic, it is much easier to spin up cloud resources than to build out the data center.
But how much scale are we talking about? Theoretically, resources are unlimited in the cloud, but for practical purposes, how far have leading providers been able to push their infrastructure?
Mirantis, a Mountain View, California-based provider of OpenStack environments, recently spun up 75,000 virtual servers using 350 physical devices spread across multiple data centers in what it called the first benchmark of its kind to test the scalability of the OpenStack platform. The company said it maintained the setup for more than eight hours, providing low-latency, on-demand service even though some of the resources being used were separated by 1,800 miles. The environment was established using tools from IBM’s SoftLayer division and tested with Mirantis’ own Rally benchmarking system.
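The core of a benchmark like this is booting many instances concurrently and tracking how latency holds up. A minimal sketch of that pattern is below; the `boot_server` stub is a hypothetical placeholder for a real provisioning API call, not Rally's actual interface.

```python
import concurrent.futures
import statistics
import time

def boot_server(i):
    """Placeholder for a real API call that boots one VM; returns boot latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate provisioning delay
    return time.perf_counter() - start

def run_benchmark(times=50, concurrency=10):
    """Boot `times` servers with `concurrency` parallel workers; report latency stats."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(boot_server, range(times)))
    return {
        "count": len(latencies),
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * len(latencies))],
    }
```

A real harness would swap the stub for a cloud client call and watch whether the 95th-percentile latency degrades as concurrency rises, which is essentially what a scale test of this kind is measuring.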
Server instances are crucial for highly scaled environments, but operational speed has a role to play as well. After all, a virtual machine is of little use if the data can’t get in and out quickly. CenturyLink recently unveiled what it calls a hyperscale cloud platform that is earmarked for Web-scale operations, as well as Big Data processing and native cloud-based applications. The platform uses an all-flash storage component capable of supporting 15,000 IOPS for such heavy data architectures as Couchbase and MongoDB. The company is also looking to extend its footprint across multiple data centers spanning both coasts of the U.S., as well as London, Paris and other key cities.
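To put an IOPS figure in context, it can be converted to rough throughput once an I/O size is assumed. The back-of-envelope calculation below assumes 4 KB operations, a common benchmark size that the platform's rating may or may not be based on.

```python
def iops_to_throughput_mb_s(iops, block_size_kb=4):
    """Rough throughput (MB/s) implied by an IOPS rating at a given I/O size."""
    return iops * block_size_kb / 1024

# 15,000 IOPS at 4 KB per operation is roughly 58.6 MB/s
print(iops_to_throughput_mb_s(15000))
```

The same rating implies very different throughput at larger block sizes, which is why IOPS numbers alone don't tell the whole performance story for databases like Couchbase or MongoDB.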
The need for scale in the cloud is also leading to the rise of exchange services that seek to match users with the appropriate sets of resources. Denver’s CoreSite, for example, recently launched a real-time management portal called the Open Cloud Exchange that uses standard APIs to link disparate services together. The exchange has drawn more than 20 participants so far, including Zadara Storage and infrastructure provider iland. The system uses a one-to-many Ethernet switching architecture to establish a working community of cloud and networking services that provides direct connectivity to native, client-side systems and software. At the same time, the exchange provides a common set of performance standards and security mechanisms.
This kind of multicloud, multiprovider scalability, however, is only possible through extensive interoperability, which comes either from broad industry cooperation or from adoption of formal open source standards. As Dell’s Lance Boley points out, open source provides an effective means to scale cloud environments without putting all your faith in a single provider. At the same time, it offers a way to customize key aspects of the environment using in-house resources and talent. But be forewarned: open source can be prickly when it comes to resolving lingering issues that the standard may or may not cover.
Despite the ability to scale resources in the cloud, enterprises should take care not to provision more than they actually need. Data loads are increasing, to be sure, but at a relatively steady pace. That leaves plenty of time to chart expected growth well in advance, with key resources set aside for bursts of data activity.
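Planning against steady growth plus burst headroom is simple arithmetic. The sketch below is illustrative only; the growth rate and burst factor are assumptions an enterprise would replace with its own figures.

```python
def projected_capacity(current_tb, monthly_growth_pct, months, burst_factor=1.25):
    """Capacity (TB) needed after steady compound growth, plus burst headroom."""
    projected = current_tb * (1 + monthly_growth_pct / 100) ** months
    return round(projected * burst_factor, 1)

# 100 TB today, growing 5% per month, planned one year out with 25% burst headroom
print(projected_capacity(100, 5, 12))
```

Running the projection quarterly, rather than provisioning the full amount up front, keeps capacity tracking actual demand instead of a worst-case guess.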
As leading cloud consumers are already finding out, costs are low compared with those of traditional data infrastructure until volumes reach a critical mass. The last thing the enterprise needs is to become reliant on scaled-out cloud infrastructure that ends up costing more per month than a depreciated physical plant.
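That crossover point can be estimated by comparing a growing cloud bill against a flat, depreciated data-center cost. The figures below are hypothetical, chosen only to show the shape of the calculation.

```python
def crossover_month(cloud_month_1, monthly_growth_pct, dc_monthly, horizon=120):
    """First month in which a compounding cloud bill exceeds a flat data-center cost.

    Returns None if the crossover doesn't happen within `horizon` months.
    """
    cost = cloud_month_1
    for month in range(1, horizon + 1):
        if cost > dc_monthly:
            return month
        cost *= 1 + monthly_growth_pct / 100
    return None

# A $10,000/month cloud bill growing 5% monthly vs. a flat $25,000/month plant
print(crossover_month(10000, 5, 25000))
```

Knowing the crossover month in advance turns the "critical mass" warning into a concrete planning date rather than a surprise on the invoice.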