    Hyperscale and the Average Enterprise

    Hyperscale infrastructure is starting to make its presence known in IT vendor and distribution channels, but the question remains: What, if any, benefits will trickle down to the average enterprise?

    Clearly, most organizations do not need the scale of a Google or Facebook, but they are still facing increasingly heavy data loads that are about to get much heavier as Big Data and the Internet of Things take shape. In that light, does hyperscale offer a way to shrink traditional data infrastructure so the enterprise can ramp up its own data-handling capabilities without pushing critical operations to third-party providers?

    Signs of hyperscale envy among enterprise users are already starting to emerge, says The Platform’s Timothy Prickett Morgan, if only because it shows the way to a more streamlined and automated data environment. Arista Networks, for one, is eager to capitalize on this trend with its Linux-based EOS network operating system, which allows both hyperscalers and standard enterprises to build customized applications on commodity hardware. And with the company’s new CloudVision platform, even the average enterprise can now standardize network state across distributed architectures in support of large Hadoop clusters and other forward-looking data initiatives.
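    For a sense of what this kind of programmability looks like in practice, here is a minimal sketch that pulls version data from an EOS switch through Arista’s JSON-RPC eAPI. The hostname and credentials are placeholders, and the sketch assumes eAPI has been enabled on the switch ("management api http-commands"):

        # Minimal sketch: query an Arista EOS switch via its JSON-RPC eAPI.
        # Placeholder hostname and credentials; assumes eAPI is enabled.
        import requests

        ENDPOINT = "https://switch1.example.com/command-api"  # placeholder
        AUTH = ("admin", "password")                          # placeholder

        payload = {
            "jsonrpc": "2.0",
            "method": "runCmds",
            "params": {"version": 1, "cmds": ["show version"], "format": "json"},
            "id": 1,
        }

        # verify=False skips TLS validation; fine for a sketch, not for production.
        resp = requests.post(ENDPOINT, json=payload, auth=AUTH, verify=False)
        resp.raise_for_status()
        result = resp.json()["result"][0]
        print(result["modelName"], result["version"])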

    At the same time, traditional hardware vendors are starting to leverage their hyperscale know-how for the wider enterprise market. Dell recently introduced the PowerEdge C6320 server for both hyperscale and hyperconverged deployments. The device offers up to 18 Xeon E5-2600 v3 cores per socket in a 2U form factor, plus up to 512 GB of DDR4 memory and 72 TB of local storage. For standard enterprise environments, the system features a remote access controller that automates routine management chores without the need for an on-server hypervisor or operating system. When workloads ramp up, the unit can be augmented with a GPU-based server like the PowerEdge C4130.
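    That out-of-band management model is easy to picture in code. The sketch below polls server health through a Redfish-style REST interface, the kind exposed by baseboard management controllers such as Dell’s iDRAC; the address and credentials are placeholders, and Redfish support depends on the controller’s firmware level:

        # Minimal sketch: poll server health out-of-band via a Redfish-style
        # REST API on the management controller. Nothing runs on the server's
        # own OS or hypervisor. Address and credentials are placeholders.
        import requests

        BMC = "https://bmc.example.com"   # placeholder management address
        AUTH = ("root", "changeme")       # placeholder credentials

        # Enumerate managed systems, then read each one's power and health state.
        systems = requests.get(f"{BMC}/redfish/v1/Systems",
                               auth=AUTH, verify=False).json()
        for member in systems["Members"]:
            system = requests.get(f"{BMC}{member['@odata.id']}",
                                  auth=AUTH, verify=False).json()
            print(system["Id"], system["PowerState"], system["Status"]["Health"])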

    But is it really that simple? Just throw server, storage and networking into a box, add some software and you’re off to hyperscale nirvana? Not exactly, says Datacenter Knowledge’s Bill Kleyman. In the first place, different use cases require different hardware and software configurations, particularly when it comes to the storage component. If speed is crucial, you’ll need an all-flash (SSD) configuration. For backup and archiving, SATA hard drives should suffice, and a mixed environment naturally calls for a hybrid setup. As well, workloads – or more precisely, the policies used to manage your workloads – will dictate the level of abstraction on the storage controller and the degree to which the system integrates with wider cloud architectures.
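    As a toy illustration of Kleyman’s point that policy, not hardware alone, drives the configuration, a placement rule might map workload profiles to storage tiers. The profiles and tiers below are illustrative, not any vendor’s schema:

        # Toy illustration: a placement policy mapping workload profiles to
        # storage tiers. Profiles and tiers are illustrative only.
        from dataclasses import dataclass

        @dataclass
        class Workload:
            name: str
            latency_sensitive: bool   # needs consistently low latency?
            archival: bool            # cold backup/archive data?

        def storage_tier(w: Workload) -> str:
            if w.latency_sensitive:
                return "all-flash (SSD)"
            if w.archival:
                return "capacity (SATA HDD)"
            return "hybrid (SSD cache + HDD)"

        for w in (Workload("oltp-db", True, False),
                  Workload("nightly-backup", False, True),
                  Workload("file-share", False, False)):
            print(f"{w.name}: {storage_tier(w)}")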

    This brings us to another interesting component of hyperscale architecture: the role of the cloud. According to a recent Citrix white paper by analyst Caroline Chappell, the cloud can be an effective partner, but only if the major carriers coordinate their Network Functions Virtualization (NFV) efforts with two other key elements: a modular, microservice-based application architecture and a robust DevOps management system. This is in fact the new “three-legged stool” on which hyperscale cloud services will reside, and since each leg is intertwined with the other two, they cannot be implemented piecemeal but must develop in tandem to provide a truly transformational environment.

    It seems clear, then, that the hyperscale market is already fracturing. Pure-play hyperscalers like Google and Facebook are crafting their own abstracted data and management stacks atop ODM hardware, while enterprises and cloud providers are relying on traditional channels to devise solutions for their own ends.

    In these latter cases, scale is not necessarily the operative word; flexibility, automation and interoperability with the wider data universe are equally important. But in all deployments, the end result is the same: greater computational power within a smaller, denser hardware footprint.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
