    IBM Spectrum Computing Increases Cluster Utilization Rates

    Utilization rates on clusters running Big Data applications leave a lot to be desired. While a massive amount of data clearly needs to be processed, not all of it is active at any given time. As a result, utilization of clusters built from bare-metal servers tends to be abysmal.

    IBM today unveiled IBM Spectrum Computing, a suite of software designed to make it simpler to run multiple types of applications on a cluster. Bernard Spang, vice president of Software-Defined Infrastructure for IBM, says the ultimate goal is to eliminate “cluster creep” that results in only one Big Data application being deployed per cluster.

    IBM Spectrum Computing software consists of IBM Spectrum Conductor, tools that enable multiple types of applications to share cluster resources; IBM Spectrum Conductor Spark, which bundles the open source Apache Spark analytics framework with the IBM Spectrum Conductor resource management software; and IBM Spectrum LSF, IBM’s workload management software.

    Collectively, Spang says, these tools allow IT organizations to increase utilization rates on clusters in much the same way a hypervisor enables multiple virtual machines to share a physical server today.
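
    To make the analogy concrete, the sketch below shows how a generic Apache Spark application declares its resource needs so that a cluster-level scheduler can pack several workloads onto the same machines. This is a minimal sketch assuming a stock Apache Spark deployment; the configuration keys are standard Spark settings, the values are illustrative, and the Spectrum Conductor integration itself is not shown.

        # A minimal, generic Apache Spark job. The executor settings cap the
        # application's footprint so a cluster-level resource manager can run
        # several such applications side by side on the same hardware. The
        # values below are illustrative assumptions, not IBM-documented defaults.
        from pyspark.sql import SparkSession

        spark = (
            SparkSession.builder
            .appName("shared-cluster-demo")
            .config("spark.executor.instances", "4")   # executors requested from the scheduler
            .config("spark.executor.cores", "2")       # CPU cores per executor
            .config("spark.executor.memory", "4g")     # memory per executor
            .getOrCreate()
        )

        # A trivial workload just to exercise the executors.
        row = spark.range(1_000_000).selectExpr("sum(id) AS total").collect()[0]
        print(row["total"])

        spark.stop()

    Run against a shared cluster, the scheduler, rather than the application, decides where those executors actually land; that placement role is the one IBM positions Spectrum Conductor to play.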

    Spang says IBM Spectrum Computing is designed to be the server equivalent of IBM Spectrum Storage, a storage initiative IBM launched last year. Together, these offerings are now the foundational layer on which IBM’s approach to software-defined infrastructure will be built.

    Spang adds that IBM is taking this opportunity to rebrand the x86 server-based Platform Computing high-performance computing (HPC) systems it acquired in 2011 as IBM Spectrum Computing Systems.

    According to Spang, IBM is ultimately trying to enable a new era of multi-scale computing that will let Big Data applications not only share cluster resources, but also employ policies to manage them at scale. The degree to which IT organizations make that shift will vary. What is clear, however, is that deploying one Big Data application per cluster is going to be economically untenable for all but the largest organizations.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
