Convergence and Hyperconvergence: Assessing the Difference

Arthur Cole

If you feel you are already being left behind as the era of data center convergence unfolds, fret not—it seems that the convergence we have seen so far is just the opening act. The real action is yet to come in the form of hyperconvergence.

What’s the difference? Well, if the practice of tech-washing is any guide, it will be slim to none for most platforms—just a name change to help products stand out in the channel. But for those with a careful eye, there are ways of divining true hyperconvergence from plain vanilla convergence.

According to tech consultant Keith Townsend, hyperconvergence is a converged platform that is optimized for scale-out infrastructure. If the system you are evaluating consists of separate components engineered to work together, that’s convergence. If it is a modular device geared toward rapid expansion and built around a single server/storage chassis and wired with 10 GbE, that’s hyperconvergence. But before you get taken in by a server with on-board memory, note that true hyperconvergence also features an integrated storage controller and software that allows the kind of plug-and-play functionality that fosters rapid infrastructure scalability.

A quick checklist can help you determine if you are looking at true hyperconvergence, says Taneja Group founder Arun Taneja. Does it provide integrated data protection, WAN optimization and backup? Does it have built-in auto-tiering and capacity management? Does it present a single image regardless of whether the environment scales locally or globally? Is there full visibility and manageability at the VM level (no LUNs or volumes)? Does it provide policy-based protection and resource allocation? And is there a built-in cloud gateway offering integrated cloud-based compute and/or storage? If you answer "No" to any of these questions, then it is not a hyperconverged platform.
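Taneja's all-or-nothing test lends itself to a simple programmatic gate. A minimal sketch, with criterion names paraphrased from his list (they are illustrative labels, not fields from any real vendor API):

```python
# Hypothetical checklist evaluation; criterion names paraphrase
# Taneja's list and are not drawn from any real product API.

TANEJA_CRITERIA = [
    "integrated_data_protection",  # data protection, WAN optimization, backup
    "auto_tiering",                # built-in auto-tiering and capacity management
    "single_image_scaling",        # one image whether scaled locally or globally
    "vm_level_management",         # VM-level visibility, no LUNs or volumes
    "policy_based_allocation",     # policy-based protection and allocation
    "cloud_gateway",               # built-in gateway to cloud compute/storage
]

def is_hyperconverged(platform: dict) -> bool:
    """Per Taneja's rule, a 'No' on any criterion disqualifies the platform."""
    return all(platform.get(c, False) for c in TANEJA_CRITERIA)

# Example: a platform that checks every box except the cloud gateway fails.
candidate = {c: True for c in TANEJA_CRITERIA}
candidate["cloud_gateway"] = False
print(is_hyperconverged(candidate))  # False
```

The point of the sketch is the `all()`: convergence claims are graded pass/fail here, not on a curve.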

Neither Taneja’s list nor Townsend’s assessment is definitive, of course, so companies like SimpliVity are free to tout systems like the OmniCube as the front line of the new hyperconvergence movement. The way the company describes it, initial convergence blended key legacy components like servers, storage and WAN optimization, but did little to ease capital and operational costs. This was followed by the addition of a virtual layer that at least provided resource pooling and more flexible load management. The latest wave (V3.0) integrates all components and provides highly flexible I/O functionality to incorporate remote sites and global data distribution. In that vein, the newest OmniCube iterations range from smaller boxes designed for branch offices to high-performance units sporting up to 24 CPUs and 30TB of storage.

Platform providers like Dell, meanwhile, are teaming up with hyperconvergence start-ups in the hopes of out-gunning rivals like HP in the converged infrastructure space. Dell recently unveiled the XC Series of appliances powered by the Nutanix operating system, which incorporates key functions like DAS clustering and abstracted provisioning in support of advanced software-defined architectures on a scaled-out, converged infrastructure. In this way, Dell gains a full hyperconverged portfolio without the time and expense of developing its own software environment, while Nutanix gains brand recognition and access to a global distribution channel.

Regardless of the level of integration or the degree to which it scales, converged infrastructure suffers from the same drawback: the need to scale up all resources in tandem regardless of what is actually needed. If, say, storage is a little lacking but compute and networking are fine, too bad. For a hyperscale environment like Google or Facebook, this isn’t much of a problem because the volumes are so massive to begin with. But for a typical enterprise, the choice is between provisioning resources you don’t need or adding a dedicated compute or storage layer that meets your immediate needs but ultimately undermines the simplicity that converged, modular infrastructure is supposed to provide.
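The scale-in-tandem penalty is easy to quantify. A rough sketch with wholly hypothetical node specs and a storage-heavy workload (none of these figures come from any vendor's spec sheet):

```python
import math

# Illustrative only: a made-up converged node that bundles compute and storage.
NODE_CPU_CORES = 24
NODE_STORAGE_TB = 30

def nodes_needed(cpu_cores: int, storage_tb: int) -> int:
    """Converged nodes ship compute and storage together, so whichever
    resource demands more nodes dictates how many boxes you buy."""
    return max(math.ceil(cpu_cores / NODE_CPU_CORES),
               math.ceil(storage_tb / NODE_STORAGE_TB))

# A storage-heavy workload: 48 cores of compute, 300 TB of capacity.
n = nodes_needed(cpu_cores=48, storage_tb=300)
idle_cores = n * NODE_CPU_CORES - 48
print(n, "nodes,", idle_cores, "idle cores")  # 10 nodes, 192 idle cores
```

Two nodes would cover the compute, but the storage requirement forces ten, and the other eight nodes' cores sit idle. That stranded capacity is exactly what storage-only expansion nodes or a disaggregated tier are meant to avoid.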

Hyperconvergence, then, is a novel approach to encroaching infrastructure challenges, but it is not a catch-all solution for the entire enterprise industry.

Jul 10, 2014 10:38 AM — Dave Demlow says:
Not true with HC3 from Scale Computing... we offer a variety of node types with different levels of compute and storage resources, all the way down to storage-only expansion nodes that just add storage capacity and I/O performance without running virtual machines, and at a lower cost. Beyond the hardware savings, with other systems, if you add a compute node just to address storage capacity, you most often have to purchase additional Microsoft OS and possibly application licensing, which is often "per box."

