The Hyperscale Trickle-Down Effect


    You hear a lot about hyperscale infrastructure these days. Top web-facing entities like Google and Facebook have essentially re-invented the data center to accommodate the sheer enormity of their respective data loads, and in the process are starting to remake how key data elements are designed and provisioned.

    For those who think hyperscale is moving on a separate but parallel track to traditional infrastructure, however, its influence is already being felt across the broader enterprise industry. Traditional infrastructure, in fact, will increasingly adopt hyperscale components as part of normal refresh cycles.

    According to Gartner, hyperscale servers from original design manufacturers (ODMs) will account for 16 percent of the overall server market by 2016, producing about $4.6 billion in revenues, with more than 80 percent of the stream going directly to customers rather than through traditional distribution channels. This gives hyperscale users, which are still only the tiniest fraction of the overall data industry, enormous influence when it comes to developing next-generation data solutions.

    This fact has not escaped the notice of today’s original equipment manufacturers (OEMs), which have unveiled a steady stream of hyperscale solutions over the past few months. The latest is Cisco Systems, which recently took the wraps off a new line of Unified Computing System (UCS) servers aimed at combining scale-up architecture with in-memory processing to target Big Data workloads. The M-Series chassis features eight compute “cartridges,” each housing two Xeon E3-1200 processors with four DDR3 memory slots apiece for up to 64 GB of main memory. The chassis also holds four 2.5-inch SSDs and a shared PCIe 3.0 x8 slot that supports devices like the SanDisk Fusion ioMemory card. Processors are linked to each other via the 40 Gbps Cruz fabric.

    Meanwhile, Fujitsu is adding hyperscale capabilities to its software-defined storage platforms with a new line of appliances that leverage the Ceph distributed storage system. The idea is to combine Intel processing and the Virtual Storage Manager with a Ceph-based storage environment that can handle file, block and object storage on a scale-out platform. Notably, Ceph is also a widely used storage back end in the OpenStack ecosystem, which gives the system a leg up when it comes to targeting large-scale cloud computing.
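    To illustrate what “file, block and object storage” on one Ceph cluster means in practice, the sketch below shows the standard Ceph command-line tools exposing all three interfaces from the same set of pools. This is a generic Ceph illustration, not Fujitsu’s appliance configuration; the pool, image and filesystem names are hypothetical, and the commands assume a running Ceph cluster with admin credentials.

    ```shell
    # Block storage: create a pool and carve out a RADOS Block Device (RBD)
    ceph osd pool create block-pool 128
    rbd create block-pool/vm-disk --size 10240   # 10 GB virtual disk for a VM

    # Object storage: store and list objects directly via the RADOS layer
    rados -p block-pool put backup.img ./backup.img
    rados -p block-pool ls

    # File storage: CephFS built on separate data and metadata pools
    ceph osd pool create fs-data 64
    ceph osd pool create fs-meta 64
    ceph fs new myfs fs-meta fs-data
    ```

    The point of the unified design is that one scale-out cluster of commodity nodes serves all three access methods, rather than requiring separate SAN, NAS and object stores.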

    And at VMworld last week, VMware introduced the new EVO:RAIL converged hyperscale appliance, designed to speed the deployment of software-defined environments. The device is built around the vSphere software stack and has already gained the support of top platform providers like Dell, EMC, Supermicro and others. The design features an integrated, modular approach to infrastructure deployment, with a single management interface, auto discovery and other tools aimed at simplifying scale-out architectures in mid-market and even branch office settings. A single device supports up to 100 virtual machines with Virtual SAN capacity of 13 TB. It also provides a built-in gateway to the new vCloud Air service.

    Today’s hyperscale, then, is all about, well, scale. Companies that are dealing with massive data loads need a way to support data environments without risking bankruptcy. For the traditional data center, however, it will be more about streamlining and convergence. Big Data will remain a chief concern going forward, but most enterprises will find they don’t need to build up to hyperscale levels to meet their needs.

    But the pressure to do more with less will be forever present. And even though true hyperscale may not be in your future, there will still be a strong desire to broaden capabilities without expanding the infrastructure footprint.

    The “hyper” technologies aimed at making data environments very large can be used to make them very small as well.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.

