
The Hyperscale Trickle-Down Effect


Written by Arthur Cole
Sep 5, 2014

Slide Show: Software-Defined Storage: Driving a New Era of the Cloud

You hear a lot about hyperscale infrastructure these days. Top web-facing entities like Google and Facebook have essentially re-invented the data center to accommodate the sheer enormity of their respective data loads, and in the process are starting to remake how key data elements are designed and provisioned.

Hyperscale may appear to be moving on a separate but parallel track from traditional infrastructure, but its influence is already being felt across the broader enterprise industry. Traditional infrastructure will increasingly adopt hyperscale components as part of normal refresh cycles.

According to Gartner, hyperscale servers from original design manufacturers (ODMs) will account for 16 percent of the overall server market by 2016, producing about $4.6 billion in revenues, with more than 80 percent of the stream going directly to customers rather than through traditional distribution channels. This gives hyperscale users, which are still only the tiniest fraction of the overall data industry, enormous influence when it comes to developing next-generation data solutions.

This fact has not escaped the notice of today’s original equipment manufacturers (OEMs), which have unveiled a steady stream of hyperscale solutions over the past few months. The latest is Cisco Systems, which recently took the wraps off a new line of Unified Computing System (UCS) servers that pairs scale-out architecture with in-memory processing to target Big Data workloads. The M-Series chassis features eight compute “cartridges” that house two Xeon E3-1200 processors, each of which has four DDR3 memory slots for up to 64 GB of main memory. The chassis also holds four 2.5-inch SSDs and a shared PCIe 3.0 x8 slot that will support devices like the SanDisk Fusion ioMemory card. Processors are linked to each other via the 40 Gbps Cruz fabric.

Meanwhile, Fujitsu is adding hyperscale capabilities to its software-defined storage platforms with a new line of appliances that leverage the Ceph distributed storage system. The idea is to combine Intel processing and the Virtual Storage Manager with a Ceph-based storage environment that can handle file, block and object storage on a scale-out platform. Ceph is also a widely used storage backend for OpenStack, which gives the system a leg up when it comes to targeting large-scale cloud computing.
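Fujitsu wraps that Ceph layer in its own management tooling, but the underlying cluster is the same open source system administrators can script against directly. As a rough sketch of the object-storage side of that file/block/object story, the snippet below uses Ceph’s python-rados client; the pool name and object key are invented for illustration, and it assumes a standard cluster configuration at /etc/ceph/ceph.conf.

```python
import rados

# Connect to the Ceph cluster using the default config file and keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

pool = 'demo-pool'  # hypothetical pool name for this example
if not cluster.pool_exists(pool):
    cluster.create_pool(pool)

# An I/O context scopes reads and writes to a single pool.
ioctx = cluster.open_ioctx(pool)
ioctx.write_full('hello-object', b'replicated across the scale-out cluster')
print(ioctx.read('hello-object'))

ioctx.close()
cluster.shutdown()
```

Block devices (RBD) and the CephFS file system are layered over this same object store, which is how a single scale-out cluster ends up serving all three storage types.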

And at VMworld last week, VMware introduced EVO:RAIL, a hyper-converged infrastructure appliance designed to speed the deployment of software-defined environments. The device is built around the vSphere software stack, and it has already gained the support of top platform providers like Dell, EMC and Supermicro. The design features an integrated, modular approach to infrastructure deployment, with a single management interface, auto discovery and other tools aimed at simplifying scale-out architectures in mid-market and even branch office settings. A single device supports up to 100 virtual machines with Virtual SAN capacity of 13 TB. It also provides a built-in gateway to the new vCloud Air service.

Today’s hyperscale, then, is all about, well, scale. Companies that are dealing with massive data loads need a way to support data environments without risking bankruptcy. For the traditional data center, however, it will be more about streamlining and convergence. Big Data will remain a chief concern going forward, but most enterprises will find they don’t need to build up to hyperscale levels to meet their needs.

But the pressure to do more with less will be forever present. And even though true hyperscale may not be in your future, there will still be a strong desire to broaden capabilities without expanding the infrastructure footprint.

The “hyper” technologies aimed at making data environments very large can be used to make them very small as well.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
