The prevailing wisdom holds that cloud architectures will float comfortably on a layer of virtualization that itself will rest on commodity hardware. As long as underlying bulk resources are available in sufficient amounts, all of the fine-tuning and optimization for higher-level applications and services can be done on abstract, software-defined planes.
This isn’t necessarily wrong, but it isn’t the whole truth either – at least according to those who are developing next-generation, cloud-optimized hardware.
It is hard to see how the current crop of hardware vendors can survive much longer without devising cloud-facing product lines. According to IDC, about 30 percent of IT hardware spending now supports cloud infrastructure, up more than 14 percent from a year ago. The private cloud alone accounts for some $10 billion in revenue and is growing about 20 percent annually, while public cloud infrastructure spending tops $16.5 billion and is growing 17.5 percent per year.
A good portion of this activity does go to the white-box solutions that top cloud providers like Google and Facebook use for their hyperscale deployments, but the majority still goes to the traditional enterprise, which has neither the resources to buy hardware in large quantities nor the expertise to craft its own cloud platforms. This is where advances in hardware can make a difference, particularly at the processor level.
Intel has just entered into a partnership with eASIC, a developer of customized silicon targeted at key workloads like data analytics and security. The pairing will enable a new line of customized Xeons designed to provide targeted support for cloud functions and other emerging initiatives like Big Data and the Internet of Things. Intel officials say the eventual solutions could accelerate processing to two or even three times the speed of field programmable gate arrays (FPGAs), which the company is also investigating as an advanced cloud platform, although they won’t be as flexible.
But with much of the cloud infrastructure going toward modular deployments, there is plenty of room to tailor advanced architectures using specialized hardware constructs. Atlantis Computing recently unveiled the Atlantis Hyperscale appliance, which the company says can cut the cost/performance ratio nearly in half. The device features an all-flash storage architecture plus advanced data reduction and I/O acceleration to lower the storage and memory requirements of its standard x86 processing architecture. Users can specify server hardware from HP, Cisco, Lenovo and Supermicro, with hypervisors from VMware or Citrix, all backed by a three-year, end-to-end service and support program.
Clearly, the days when hardware was the primary determinant of enterprise and data center productivity are over. General-purpose computing across distributed cloud architectures will rise and fall with the ability of software architectures to provision and integrate server, storage and networking resources on the cloud.
But key applications will still require a bit more “oomph” than generic cloud architectures can provide, at least if the enterprise is looking for optimal performance.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.