New Hardware Configurations for the Cloud Era

Arthur Cole

One of the initial promises of cloud computing was that it could be built on top of existing data center infrastructure. No expensive hardware upgrades this time. To get the highest level of performance, simply implement the proper virtualization and resource allocation software and you're good to go.

That may have been true enough when enterprises were first looking to build clouds. But now that the time has come to optimize those environments, it seems that a little hardware refresh is in order after all.

Primarily, this refresh is meant to facilitate the integration of cloud layering technology with easily deployable hardware footprints, which are quite often the purview of separate business units or entirely different companies. Fujitsu, for example, recently released an optimized System Center 2012 platform consisting of Primergy servers and the ServerView Resource Orchestrator software under Microsoft's Private Cloud Fast Track partnership program. The end result is a complete server/storage/networking/management package geared specifically toward launching Hyper-V environments into the cloud.

As we pointed out last week, many top vendors are pitching preconfigured systems that dramatically reduce the time and expense of getting cloud infrastructure off the ground. IBM's PureSystems portfolio, for example, consists of various combinations of processor configurations, operating systems, storage and networking in a bid to lend support to multiple, third-party hypervisors. At the same time, EMC is gathering channel partners for its VSPEX cloud infrastructure platform to supplement core components from Brocade, Cisco, Data Domain and others. So far, there are 14 preconfigured versions of the system geared toward VMware, Citrix and Microsoft hypervisors.

This doesn't mean that only newly deployed hardware is capable of taking the cloud to the next level. Existing platforms can still get the job done, although they will need a little work. Fortunately, in many cases, this will require a streamlining of current architecture rather than an expansion. A key example is storage. As Piston Cloud's Joshua McKenty pointed out on PC World, direct-attached storage (DAS) is highly scalable and provides plenty of bandwidth for high-speed, highly dynamic data requirements. Naturally, it is also a lot cheaper than a traditional 10 GbE or FC SAN.

The primary challenge in tailoring hardware platforms to the cloud is to avoid locking yourself into a particular development path that will restrict your flexibility down the line. Cloud architectures are nothing if not dynamic, so hardware needs to provide as broad a platform as possible to give enterprises the freedom to tailor applications and services to suit rapidly evolving user needs.

And if past is prologue, the next major leap forward in data environments will produce its own set of hardware/software requirements.
