Commodity or Custom: The Next Evolution in Server Development

Arthur Cole

Commoditization has ruled the roost in enterprise-class server circles for the past decade or more, but the rise of new web-facing applications and cloud-scale data centers is creating a new paradigm: Not all servers are created equal anymore.

Part of this is due to the realization that not all data loads are created equal either. Traditional servers were designed for batch processing and other database-related applications, which stressed raw processing power and the ability to handle relatively few, but large, units of data. As small-packet, high-volume computing became increasingly common, providers first turned to low-cost blades in highly dense configurations, and then began customizing hardware to suit their specific needs.
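
The distinction is easiest to see in some back-of-the-envelope arithmetic. The sketch below uses purely hypothetical figures for per-request overhead and raw throughput (the article cites none), but it shows why the same volume of data costs very differently depending on how it is sliced:

```python
# Back-of-the-envelope sketch. All numbers are illustrative, not measurements.
# Total time = requests * (per-request overhead + payload / throughput).

PER_REQUEST_OVERHEAD_S = 0.002   # hypothetical connection/dispatch cost per request
THROUGHPUT_BPS = 1e9             # hypothetical 1 GB/s of raw processing capacity

def total_time(requests, payload_bytes):
    """Time to serve `requests` payloads of `payload_bytes` each, in seconds."""
    return requests * (PER_REQUEST_OVERHEAD_S + payload_bytes / THROUGHPUT_BPS)

# The same 10 GB of total data, sliced two ways:
batch = total_time(requests=10, payload_bytes=1e9)          # few, large packets
web = total_time(requests=10_000_000, payload_bytes=1e3)    # many, small packets

print(f"batch-style: {batch:,.1f} s")   # dominated by raw throughput
print(f"web-style:   {web:,.1f} s")     # dominated by per-request overhead
```

Under these assumptions the batch workload finishes in about 10 seconds while the small-packet workload takes thousands, even though the byte count is identical. That is why dense fleets of cheap nodes that parallelize request handling can beat a handful of powerful boxes for web-facing traffic.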

Primarily, this was a strategy reserved for the largest of the large, most notably Google and Facebook, both of whom required such large volumes that they actually created their own economies of scale. Lately, however, the numbers have apparently improved to the point that second-tier providers like Rackspace have embraced customization as well. The company has joined Facebook’s Open Compute Project (OCP) and is set to deploy its first set of non-commodity hardware in April — tweaked slightly from Facebook’s original design to suit Rackspace’s unique customer requirements. In fact, the company boasts a more streamlined hardware footprint because its machines do away with many of the unneeded components that populate commodity designs.

This is nothing but good news for specialty designers like Quanta. Originally a supplier to traditional server makers like IBM, the company began selling directly to Facebook about three years ago and the rest, as they say, is history. It is now on track to draw more than 80 percent of its revenue from direct sales to users, and is well on its way to applying the same approach to storage and networking systems.


Chip designers like Marvell are also poised to do well in this new market. The company recently inked a deal with Chinese search engine Baidu to power its data centers with ARM-based chipsets. The quad-core ARMADA XP SoC will form the heart of Baidu's customized server fleet, integrating the CPU, storage controller, and 10 GbE switch into a low-power envelope that will help the company control energy costs as it scales up resources.
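
The appeal of a low-power SoC becomes obvious at fleet scale. The following sketch runs the energy arithmetic with hypothetical wattages, electricity pricing, and fleet size (the article provides no such figures), just to show the shape of the savings:

```python
# Rough fleet-level energy arithmetic. Every figure here is an assumption;
# the article cites no wattage, pricing, or fleet-size numbers.

SERVERS = 10_000
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10     # assumed electricity price, USD
PUE = 1.5                # assumed power usage effectiveness (cooling, etc.)

def annual_energy_cost(watts_per_server):
    """Yearly electricity bill for the whole fleet at a given per-server draw."""
    kwh = SERVERS * watts_per_server / 1000 * HOURS_PER_YEAR * PUE
    return kwh * PRICE_PER_KWH

x86_cost = annual_energy_cost(250)   # hypothetical commodity x86 node
arm_cost = annual_energy_cost(60)    # hypothetical integrated ARM SoC node

print(f"x86 fleet: ${x86_cost:,.0f}/yr")
print(f"ARM fleet: ${arm_cost:,.0f}/yr")
print(f"savings:   ${x86_cost - arm_cost:,.0f}/yr")
```

With these assumed numbers, a 10,000-node fleet saves roughly $2.5 million a year on power alone, before counting the smaller cooling and real-estate footprint.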

It is important to note, however, that commodity systems still have a lot to offer the cloud industry, particularly as new technologies make it easier to push database applications onto cloud architectures. MIT is developing a system called DBSeer that provides greater visibility into the way large databases utilize resources, giving managers a better handle on the over-provisioning that takes place in both internal and external virtual environments. Researchers predict the system could cut hardware footprints by as much as 95 percent, dramatically reducing Capex requirements at home and Opex requirements in the cloud.
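
The core idea behind this kind of tool is to learn how resource use scales with workload and then provision to the predicted peak rather than a blanket worst case. Here is a minimal sketch in that spirit, with hypothetical sample data; it is not DBSeer's actual code, just a least-squares fit of CPU demand against transaction rate:

```python
# A minimal sketch in the spirit of the DBSeer approach (not its actual code):
# model how resource use scales with workload, then provision to the predicted
# peak plus headroom instead of a blanket worst-case box.

# Hypothetical historical samples: (transactions/sec, CPU cores busy).
history = [(100, 1.2), (250, 2.9), (400, 4.5), (600, 6.8), (800, 9.1)]

# Ordinary least-squares fit of cores = a * tps + b.
n = len(history)
sx = sum(t for t, _ in history)
sy = sum(c for _, c in history)
sxx = sum(t * t for t, _ in history)
sxy = sum(t * c for t, c in history)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

projected_peak_tps = 1_200
predicted_cores = a * projected_peak_tps + b
provisioned = predicted_cores * 1.2   # 20% headroom, an assumed policy

print(f"predicted need at peak: {predicted_cores:.1f} cores")
print(f"provision with headroom: {provisioned:.1f} cores")
# Compare that to reserving, say, a 64-core box "just in case". That gap is
# the over-provisioning this kind of modeling is meant to eliminate.
```

A production system would model memory, disk I/O, and lock contention as well, but even this toy fit shows how visibility into utilization translates directly into smaller hardware footprints.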

For server manufacturers, then, the era of “build it and they will come” is rapidly coming to a close. As the practice of optimizing infrastructure on the fly to suit user demands becomes the new normal, service providers will come under increasing pressure to deliver those optimal environments at a moment's notice. And to do that, they will increasingly turn to customized infrastructure, both to improve capabilities and to cut costs.


