
    The Where, When and Why of Hyperconvergence


    Hyperconverged infrastructure is smaller, more efficient and less operationally complex than today’s collection of cobbled-together data center hardware. So it seems like a done deal that virtually all infrastructure will be hyperconverged in the very near future.

    To a large extent, this is true, but there will be pockets of resistance throughout the distributed data ecosystem in which the old way will be the better way.

    Still, hyperconvergence is the wave of the future. According to Technology Business Research, the market is poised to grow by 50 percent per year through 2020, producing a total market worth on the order of $1.6 billion. Even so, only about a third of the overall data infrastructure market will be hyperconverged by then (it is a huge market, after all), so the transition will not be anywhere near complete until 2025 or later. The changes to infrastructure will coincide with Big Data, the IoT and myriad other developments related to the digital transition of the business model, says TBR analyst Christian Perry, as organizations seek to extend their data-handling capabilities without increasing hardware footprints or blowing their IT budgets.
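
    As a quick sanity check on those numbers, a 50 percent annual growth rate compounds to roughly a fivefold expansion over four years. The sketch below works the arithmetic backward from TBR's $1.6 billion figure; the four-year horizon is an assumption for illustration, not a baseline TBR has published.

        # Back-of-the-envelope check on a 50 percent compound annual
        # growth rate reaching $1.6B by 2020. The four-year horizon is
        # an assumed figure for illustration, not TBR's stated baseline.
        target_2020 = 1.6e9   # projected market size in USD (from the article)
        cagr = 0.50           # 50 percent year-over-year growth
        years = 4             # assumed runway to 2020

        implied_base = target_2020 / (1 + cagr) ** years
        print(f"Implied starting market: ${implied_base / 1e6:.0f}M")
        # 1.5^4 is about 5.06, so the implied base is roughly $316M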

    But as infrastructure becomes more distributed and functions like processing and storage are pushed to the edge, don’t expect hyperconvergence to take root everywhere. Windows IT Pro’s Orin Thomas notes that converged architectures are optimal for critical workloads running on scale-out, centralized infrastructure, but they produce a lower ROI when applied to smaller setups like the branch office file server or the random DHCP server running quietly in a corner somewhere. In most cases, these systems exhibit fairly static, low-level performance and can be scaled more easily with another hard drive or network card than an entirely new, modularized configuration.

    It also isn’t wise to deploy hyperconverged infrastructure simply to keep up with technological trends, says IT consultant Joel Snyder. By itself, it cannot process average workloads any better or any faster than conventional architectures, but it can scale out faster and provide the kind of parallel processing that emerging web-scale Big Data and IoT applications require. Therefore, it makes sense to identify the business need first and then work backward from there to devise the proper IT support configuration. As Gartner notes, the mid-market sweet spot for hyperconvergence is a data environment comprising 80 to 120 servers, 30 to 50 TB of storage and a virtualization rate approaching 90 percent, but even then, factors such as workload requirements and criticality, management capabilities and your appetite for single-vendor solutions should play a role in the decision-making process.
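
    To make that sweet spot concrete, here is a minimal checklist sketch built from the Gartner figures cited above; the function name and structure are hypothetical illustrations, not any vendor's sizing tool, and a match on these numbers alone should not settle the decision.

        # Illustrative checklist built from the Gartner sweet-spot figures
        # cited above. The function name and structure are hypothetical;
        # this is not a real sizing tool, just the cited thresholds.
        def in_hci_sweet_spot(servers: int, storage_tb: float, virt_rate: float) -> bool:
            """True if an environment falls inside the cited mid-market range."""
            return (80 <= servers <= 120
                    and 30 <= storage_tb <= 50
                    and virt_rate >= 0.90)

        # Example: 100 servers, 40 TB of storage, 92 percent virtualized
        print(in_hci_sweet_spot(servers=100, storage_tb=40, virt_rate=0.92))  # True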

    It is also important to distinguish between hyperconvergence, plain vanilla convergence and simple workload management across virtual architectures, says SimpliVity’s Jesse St. Laurent. True hyperconvergence incorporates features like data acceleration, global unified management and built-in data protection to provide a fully self-contained data stack capable of natively supporting the most demanding workloads ever to hit the enterprise. It’s the difference between layering additional components on top of legacy architectures to treat the various symptoms of emerging data complexity and building an integrated solution that addresses the underlying problem on a fundamental level.

    This harmony between hyperscale infrastructure and emerging scale-out workloads will work in the enterprise’s favor, of course. Traditional workloads will continue to function perfectly well on traditional infrastructure while the new stuff runs in sync with the new modular hardware. And if all goes as planned, it won’t be long before hyperconverged architectures start delivering greater value in the form of new processes, new business models and perhaps entirely new market opportunities and revenue streams.

    But the transition will be gradual and measured, rather than frantic and disruptive. And in the end, it should make everyone’s workday a little easier and a whole lot more productive.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

