    Hyperscale, Hyperconverged Infrastructure: Ready When the Enterprise Is

    It seems that the prevailing wisdom in data center circles these days is that Big Data will simply be too big for the enterprise. When faced with the enormous volumes of sensor-driven and machine-to-machine (M2M) feedback, the enterprise will have no choice but to push the vast majority of the workload onto the cloud.

    To be sure, the cloud offers a compelling value proposition when it comes to Big Data, but that does not mean that even small organizations won’t be able to build their own analytics infrastructure for the most crucial data.

    The mistake that many executives make when contemplating Big Data is applying those volumes to infrastructure as it exists today. In reality, the infrastructure of tomorrow will be more compact, more scalable and more attuned to these emerging workloads than the legacy systems currently occupying the data center.

    And when I say tomorrow, it literally can be tomorrow. Companies like Infinidat are already remaking data infrastructure as we know it by taking the lessons of the hyperscale leaders and applying them to commercial products. The company offers a storage environment that packs upwards of 2PB into a single 42U rack, delivers 750,000 IOPS at 12 GBps of throughput, and adds seven-nines availability for good measure. The system is built around a three-controller, active-active-active architecture featuring DRAM, flash and 480 SAS hard disks.
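    To put those figures in perspective, here is a quick back-of-the-envelope sketch. The rack capacity, drive count and availability target are taken from the vendor claims above; the arithmetic and the Python framing are mine, not Infinidat's.

```python
# What seven-nines availability allows in downtime, and the average capacity
# each of the 480 drives must contribute to reach roughly 2PB in one rack.
# Figures are the quoted claims; the math is a simple sanity check.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60

availability = 0.9999999                    # "seven nines"
downtime_seconds = SECONDS_PER_YEAR * (1 - availability)

capacity_pb = 2.0                           # ~2PB in a single 42U rack
drive_count = 480                           # SAS hard disks behind DRAM/flash
tb_per_drive = capacity_pb * 1000 / drive_count

print(f"Allowed downtime per year: {downtime_seconds:.1f} seconds")  # ~3.2 s
print(f"Average capacity per drive: {tb_per_drive:.1f} TB")          # ~4.2 TB
```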

    The thing to keep in mind is that hyperscale must go hand-in-hand with hyperconvergence in order to make an effective enterprise solution, says Compuverde CEO Stefan Bernbo. A massive scale-out operation will be of minimal use if it requires an entire city block, so any architecture worthy of Big Data will have to be incredibly dense. This will undoubtedly involve combining compute, storage and networking into modular building blocks and then leveraging software-defined storage and networking to enable the dynamic scalability needed for emerging workloads. At the same time, hyperconvergence will keep the power envelope under control and provide for a streamlined physical layer that can be managed, updated and expanded much more easily than today’s disaggregated infrastructure.
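    As a rough illustration of that building-block approach, the sketch below models a cluster that scales out by adding identical nodes, each bundling compute, storage and networking, until the scarcest resource is covered. The node sizes and workload requirements are hypothetical, not figures from Compuverde or any other vendor.

```python
# A minimal sketch of scale-out with modular hyperconverged building blocks.
# All numbers here are illustrative assumptions.
from dataclasses import dataclass
from math import ceil

@dataclass
class Node:
    cores: int            # compute per building block
    storage_tb: float     # storage per building block
    network_gbps: float   # network bandwidth per building block

def nodes_needed(node: Node, need_cores: int, need_tb: float, need_gbps: float) -> int:
    """Scale out along whichever resource dimension is the bottleneck."""
    return max(
        ceil(need_cores / node.cores),
        ceil(need_tb / node.storage_tb),
        ceil(need_gbps / node.network_gbps),
    )

block = Node(cores=64, storage_tb=100, network_gbps=50)
# Storage-bound in this example: 4,000 TB / 100 TB per node = 40 nodes.
print(nodes_needed(block, need_cores=1000, need_tb=4000, need_gbps=400))
```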

    Data management within hyperscale and exascale architectures will also play a key role in their efficacy, says The Platform’s Timothy Prickett Morgan. This is why companies like Scality are turning toward object storage for their commercial-grade HPC configurations. The company has devised a commodity-based storage system called a RING cluster that marries object storage with native and distributed file system interfaces to eliminate bottlenecks at the gateway. At the same time, the company says it can deliver a 30 to 50 percent price advantage over traditional disk-based NAS. Scality claims it has already fielded a RING cluster holding 60 billion objects and dozens of petabytes, with some customers operating multiple clusters.
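    For readers unfamiliar with the object model, the sketch below shows how an application might talk to an S3-compatible object store of the kind a RING cluster can be fronted with. The endpoint, bucket name and credentials are placeholders, not Scality defaults, and boto3 is used here simply as a convenient generic S3 client.

```python
# A minimal sketch of object storage access through an S3-compatible API.
# Endpoint, bucket and credentials below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # hypothetical gateway
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Objects are addressed by bucket and key rather than by file path, which is
# what lets a flat namespace scale to billions of entries.
s3.put_object(Bucket="sensor-data", Key="2015/06/device-042.json", Body=b'{"temp": 21.4}')
obj = s3.get_object(Bucket="sensor-data", Key="2015/06/device-042.json")
print(obj["Body"].read())
```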

    The downside to this (and there is always a downside) is that hyperscale, hyperconverged systems can start to falter if capacity planning is not handled with care, says Quocirca analyst Clive Longbottom. In an interview with IT World, he notes that adding new hardware can get expensive once physical capacity is maxed out, and if the expansion is not done through the original vendor, it can lead to integration and performance issues. It is also worth noting that most hyperscale/hyperconvergence vendors are start-ups producing proprietary platforms that may or may not exist in their present forms over the long term.
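    Capacity planning of the kind Longbottom describes often comes down to simple growth projection: knowing roughly when the installed system will hit its physical ceiling leaves time to expand through the original vendor rather than bolt on third-party hardware at the last minute. The figures in the sketch below are illustrative assumptions only.

```python
# Project when a cluster will reach its physical capacity ceiling under
# compound growth. All inputs are hypothetical.
from math import log, ceil

used_tb = 600.0          # current consumption
ceiling_tb = 2000.0      # physical capacity of the installed system
monthly_growth = 0.05    # assumed 5% compound growth per month

months_to_full = log(ceiling_tb / used_tb) / log(1 + monthly_growth)
print(f"Capacity exhausted in ~{ceil(months_to_full)} months")  # ~25 months
```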

    Whether or not an on-premises hyperscale or hyperconverged data infrastructure is suitable for a given enterprise is a question best left for those who own the data and who have to sign the check for the hardware. Undoubtedly, the cheaper solution at the outset will be to port it all over to the cloud, but as time goes by and scale increases, there is likely to be an inflection point at which owning becomes cheaper than leasing.
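    That inflection point can be estimated with equally simple arithmetic: owning wins once cumulative cloud spend overtakes the up-front purchase plus ongoing on-premises operating costs. The prices in the sketch below are illustrative assumptions, not quotes from any provider.

```python
# Lease-versus-own crossover for a petabyte of capacity, using made-up prices.
capex_per_pb = 300_000.0        # up-front cost of on-prem capacity, per PB
opex_per_pb_month = 5_000.0     # power, space and admin per PB per month
cloud_per_pb_month = 21_000.0   # cloud storage lease per PB per month

# Owning becomes cheaper once cloud * m > capex + opex * m,
# i.e. after m > capex / (cloud - opex) months.
months_to_crossover = capex_per_pb / (cloud_per_pb_month - opex_per_pb_month)
print(f"Owning becomes cheaper after ~{months_to_crossover:.0f} months per PB")  # ~19 months
```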

    One thing is certain, though: You won’t be able to build Big Data capabilities into your data center simply by adding to legacy infrastructure.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.

