A Vibrant Cloud Needs Solid Physical Infrastructure

Arthur Cole

The most important thing to remember about the cloud is that people like it - a lot.

That may seem like an obvious observation, but it underscores the fact that the cloud's presence is likely to be felt across the entire IT spectrum. From the access device to core hardware infrastructure, technology development is now single-mindedly focused on utilizing the cloud for greater efficiency and data flexibility.

Even the way physical infrastructure is being deployed is undergoing change. Automation is a key driver now, with platforms offering a range of visibility and optimization tools to ensure that the permanent systems going into place will provide the broadest support for shifting virtual and cloud configurations. Puppet Labs' Razor, for example, provides auto-discovery and inventory updates aimed at supporting application delivery in DevOps environments. At the same time, the system automates the delivery of OS images using a model-based approach and RESTful open APIs that support broad collaboration and plug-in support for all operating systems and boot sequences.
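To make the model-based idea concrete, the sketch below builds the kind of declarative provisioning policy a Razor-style tool might accept over a RESTful API: which OS image to deliver, and a rule matching the auto-discovered nodes it applies to. The endpoint shape, field names, and tag-rule syntax here are illustrative assumptions, not the actual Razor schema.

```python
import json

# Hypothetical policy document for a model-based provisioning tool.
# All field names below are assumptions for illustration only.
policy = {
    "name": "web-tier-nodes",
    "repo": "ubuntu-server",      # OS image repository to deliver
    "task": "ubuntu-install",     # installer model to apply at boot
    "broker": "puppet",           # post-install handoff to a config-management tool
    # Match nodes whose discovered inventory reports 8 processors
    "tag_rule": ["=", ["fact", "processorcount"], "8"],
    "max_count": 20,              # cap how many nodes this policy may claim
}

def to_request_body(policy: dict) -> str:
    """Serialize the policy as JSON, as it would be POSTed to the API."""
    return json.dumps(policy, sort_keys=True)

body = to_request_body(policy)
print(body)
```

The point of the model: the administrator declares intent once, and the automation layer matches newly discovered bare-metal nodes against the rule and installs the right image without manual intervention.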

Of course, in many cases, the cloud will produce a contraction of physical footprint rather than an expansion. AMD stands as a case in point in light of the aggressive data center consolidation program under way. The company has already cut the number of data centers from 18 to 12 in the past three years, and is on track to drop all the way to three by 2014. The plan calls for shifting more resources onto the cloud while shutting down owned-and-operated (O&O) facilities in high-cost areas like Boston in favor of lower-cost ones like Suwanee, Ga.

At the same time, hardware platforms will forge ever-closer ties with cloud resources, both as a means to lower costs and increase performance. Nvidia's new VGX platform, for example, aims to improve graphics and advanced applications across numerous devices by forging direct links between server-based graphics cards and client-side VMs. This is seen as a step up from previous initiatives like RemoteFX and View 5 in that it removes all abstraction between the physical GPU and the VM, preserving crucial drivers and capabilities like DirectX 11, OpenGL and CUDA.

And because demand for cloud resources is growing so fast, the pressure is on to put the physical layer in place quickly and on budget. That's why much of the cloud will rest on commodity hardware, which is gaining increased favor among the very largest enterprises. Facebook, for example, has issued an entire commodity blueprint under its Open Compute Project, which seeks to nail down everything from board, server and rack dimensions to power supplies and cabinet designs. The company has numerous vendors on board, including HP, AMD, VMware and Dell - a testament to the clout of one of the industry's biggest hardware buyers.

Conventional thinking holds that hardware is irrelevant once you're on the cloud, which is true enough for the user. Cloud providers, however, will care very much about the design and deployment of future generations of hardware. After all, the ability to maintain service levels in the cloud will depend largely on what is happening on the ground.
