
Written By Arthur Cole
Sep 24, 2014

Debunking the Top Data Center Myths

Hyperscale is the “it” trend in data center architectural circles these days. Whether you are talking about cloud computing, Big Data, mobility or social networking, the pat answer to every challenge seems to be hyperscale.

Not all hyperscale architectures are created equal—not even Facebook’s Open Compute Project (OCP), which has garnered its fair share of headlines both for its innovation and the desire of Facebook to present it as a template for the broader enterprise industry.

The fact remains that web-scale operations and traditional data centers are very different animals, and the criteria used to define and build hyperscale infrastructure will often prove unworkable for all but the largest corporate environments. A key issue is cost, according to DLB Associates CTO and Vice President Mark Monroe. While the Googles and Facebooks of the world create their own economies of scale that allow them to buy stripped-down white-box hardware in bulk, few traditional enterprises have that much clout. Large telcos and Fortune 500 firms already get discounts of up to 35 percent from current channel providers, and that usually comes with systems integration, managed services and other support functions. In that light, a 40 percent discount from an original design manufacturer (ODM) is not that impressive.

And cost is only one of the barriers to hyperscale adoption in the enterprise, says Sanbolic Executive Chairman Bill Stevenson. Most hyperscale architectures in the cloud are optimized for partitioned workloads and multi-tenant operations rather than high-availability, mission-critical functions. And from an organizational standpoint, most enterprises are staffed along horizontal lines in which each piece of the data infrastructure is overseen by a specialist. Hyperscale solutions usually follow a vertical strategy, in which one admin provides app support up and down the stack. This could always be incorporated into an enterprise version of hyperscale, but it is still way too early in the game to call it a done deal.

Nevertheless, research firms like Gartner say the new ODM model is a direct challenge to Cisco, HP, Dell and other established enterprise vendors. Gartner's latest forecast has ODM server shipments reaching 16 percent of global activity by 2018, pulling in $4.6 billion in revenue. At the moment, more than 80 percent of ODM sales go to the hyperscale set, but that could change as economies of scale allow ODMs to become more price-aggressive and as systems customization is increasingly accomplished through software.


To make sense of all this, you need to look at the "rack endgame" as it relates to hyperscale technology, says IT consultant Stephen Foskett. While it's true that OCP has garnered the lion's share of attention, it is only one way of doing hyperscale, and it will likely lead to a disaggregated rack in which servers and components are connected via a high-speed interface and anchored by a bulk storage server topped off with a flash tier. Compare that to what Cisco is trying to do with its M-Series UCS server, which uses proprietary ASICs to aggregate servers around shared I/O resources. It is neither open nor software-defined, but it does provide the ease of management and relatively simple deployment that the enterprise prefers, and it scales very well.

So in the end, OCP and platforms like UCS could very well progress along similar but distinct tracks, perhaps with crossover on key applications like Hadoop and Docker – more cousins than twins, in Foskett's view.

This would set up a world in which two models of the data center would vie for control of the data environment: hyperscale cloud facilities and converged, modular enterprise infrastructure. Both have their advantages and disadvantages, depending on which workload you happen to be dealing with. And both are likely to see very healthy development curves as the cloud era unfolds.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
