Scaling Infrastructure to the Extreme

    The enterprise has been scaling up resources for quite a while now, but as the old saying goes, “You ain’t seen nothin’ yet.”

    Scale is certainly being driven by Big Data infrastructure, but it is also ramping up for traditional applications like ecommerce and CRM. As the pace of business increases, so does the volume of data, and today's infrastructure is proving neither large enough nor fast enough as organizations strive to remain viable in the emerging digital economy.

    According to Jason Waxman, GM of Intel’s Cloud Platforms Group, upwards of 80 percent of compute, storage and networking infrastructure will fall under the classification “scale computing” by 2025. This is going to affect development of everything right down to the processor, he tells Data Center Knowledge’s Yevgeniy Sverdlik, as infrastructure gravitates away from raw power to increased connectivity, programmability and more integrated solutions.

    But simply achieving scale is not the only challenge – it also has to be done quickly, with minimal disruption to existing processes and at a reasonable cost. Finding a platform that satisfies all three requirements is the goal of DataStax and Mesosphere, which recently linked the former’s Cassandra-based database software with the Mesosphere datacenter operating system (DC/OS) in a bid to put Big-Data-class scalability within reach of the average enterprise. The aim, as Martin Van Ryswyk, EVP of engineering at DataStax, explains to TechRepublic, is not only to produce scale, but to scale across distributed architectures. In this way, organizations can build giant, masterless clusters that equalize performance across all datasets while effectively eliminating downtime even when large portions of the infrastructure fail.
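    The fault tolerance Van Ryswyk describes rests on Cassandra's masterless replication: every node can serve any request, and an operation succeeds as long as a quorum (majority) of a key's replicas responds. A minimal sketch of that quorum arithmetic – illustrative only, not DataStax's implementation:

```python
# Illustrative sketch of quorum-based availability in a masterless
# cluster: with replication factor N, a read or write succeeds as
# long as a majority (quorum) of the N replicas is reachable.

def quorum(replication_factor: int) -> int:
    """Smallest majority of replicas that must respond."""
    return replication_factor // 2 + 1

def operation_succeeds(replication_factor: int, replicas_down: int) -> bool:
    """True if enough replicas remain up to satisfy a quorum."""
    replicas_up = replication_factor - replicas_down
    return replicas_up >= quorum(replication_factor)

if __name__ == "__main__":
    # With a replication factor of 3, the cluster tolerates one
    # failed replica per key...
    print(operation_succeeds(3, replicas_down=1))  # True
    # ...but not two.
    print(operation_succeeds(3, replicas_down=2))  # False
```

    Because no node is a master, any surviving replica can coordinate the request – which is why losing whole portions of the infrastructure need not translate into downtime.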

    Clearly, this cannot be accomplished under traditional hardware constructs. A more modular, interchangeable hardware approach is necessary, which is why more systems developers are pursuing the “composable infrastructure” strategy championed by HPE. Last month, a company called DriveScale came out of stealth with a new design that pulls Hadoop infrastructure out of the rack into a more scale-friendly environment that also fosters flexible pooling of resources and enhanced asset discovery and maintenance. A key capability in composable designs is the ability to scale compute and storage infrastructure separately, avoiding the tendency to over-provision one or the other as in traditional modular infrastructure.
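    The over-provisioning problem is easy to see with a little arithmetic. In a coupled design, compute and storage arrive together in fixed-ratio nodes, so whichever resource runs out first forces the other to be over-bought; a composable design scales each independently. A back-of-the-envelope sketch (the node sizes are made-up assumptions, not any vendor's spec):

```python
import math

# Hypothetical fixed-ratio node: 32 cores and 48 TB always come together.
NODE_CORES = 32
NODE_STORAGE_TB = 48

def coupled_nodes(cores_needed: int, storage_needed_tb: int) -> int:
    """Nodes required when compute and storage must scale in lockstep."""
    return max(math.ceil(cores_needed / NODE_CORES),
               math.ceil(storage_needed_tb / NODE_STORAGE_TB))

def overprovisioned_cores(cores_needed: int, storage_needed_tb: int) -> int:
    """Cores bought but unused when storage demand drives the node count."""
    return coupled_nodes(cores_needed, storage_needed_tb) * NODE_CORES - cores_needed

if __name__ == "__main__":
    # A storage-heavy workload: 100 cores of compute but 960 TB of data.
    # Storage forces 960/48 = 20 nodes, i.e. 640 cores for a 100-core need.
    print(overprovisioned_cores(100, 960))  # 540 idle cores
```

    Disaggregating the two pools, as composable designs do, lets the operator buy 100 cores and 960 TB rather than 20 full nodes.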

    By its nature, scale-out infrastructure is also highly dense, so as to fit within a reasonable physical footprint. This has the unfortunate side effect of concentrating heat – enough to overpower most conventional air-cooling systems as the hardware count ratchets up. This is why Dell and others are making a push into liquid cooling, which provides a more effective heat sink than air and can be more easily integrated into tight, modular systems. The company’s Triton system, developed in conjunction with Intel, allows processors to run at higher frequencies, says eWeek’s Jeffrey Burt, delivering nearly a 60 percent boost in performance when outfitted with the right chip, like the 200W Xeon E5. The system is a proof of concept at the moment, although it seems likely to become a common facet of hyperscale environments developed by the company’s Extreme Scale Infrastructure (ESI) unit.

    At the moment, the enterprise is just beginning to enter the transition from legacy infrastructure to scalable, modular systems. And if truth be told, many organizations will find that it is much easier to push many of their resource-hungry applications onto the cloud where they will likely wind up on someone else’s hyperscale infrastructure.

    And with automated, orchestrated workload management systems on the rise, data will soon find its optimal means of support largely on its own. In most cases, that will be scale-out infrastructure.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
