    IBM Crafting New Data-Centric Compute Architecture

    In the long history of IT, many innovations that originally began life as supercomputer projects over time wound up being more broadly applied. A new class of supercomputers that IBM is building in collaboration with NVIDIA and Mellanox for the U.S. Department of Energy is likely to be just such an innovation.

    Under the terms of a $325 million contract with the U.S. Department of Energy, IBM is building supercomputers around a new “data-centric” architecture designed to deliver 100 petaflops of performance backed by five petabytes of dynamic and flash memory. Based on IBM OpenPOWER processors, each system will be capable of moving data to the processor, when necessary, at more than 17 petabytes per second.

    Dave Turek, vice president of technical computing for OpenPOWER at IBM, says what makes these systems unique is that IBM is designing them not only to process and visualize data in parallel, but also to allow processing to be distributed across storage and networking elements.

    From a practical perspective, Turek says it’s no longer feasible to drive massive gains in processing performance using a homogeneous processor architecture. What is required now is a compute fabric that allows the unique attributes of different classes of processors to be programmatically invoked by an application. The IBM supercomputer enables that to occur, for example, by providing high-speed interconnects between OpenPOWER and NVIDIA graphics processors.
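
    As a rough illustration of that idea (not IBM's actual programming model), the sketch below shows a single application routing the same calculation to different processor classes at run time, using NumPy for the host CPU and the CuPy library for an attached NVIDIA GPU; the size threshold and function names are illustrative assumptions.

    ```python
    # Illustrative sketch only: one application programmatically choosing which
    # processor class executes a workload. NumPy runs on the host CPU; CuPy runs
    # on an attached NVIDIA GPU. The size threshold stands in for a real policy.
    import numpy as np

    try:
        import cupy as cp          # optional GPU path
        GPU_AVAILABLE = True
    except ImportError:
        GPU_AVAILABLE = False

    def covariance(samples: np.ndarray) -> np.ndarray:
        """Compute a covariance matrix on whichever processor suits the job size."""
        if GPU_AVAILABLE and samples.size > 1_000_000:
            on_device = cp.asarray(samples)            # ship data over the interconnect
            result = cp.cov(on_device, rowvar=False)   # compute on the GPU
            return cp.asnumpy(result)                  # return only the small result
        return np.cov(samples, rowvar=False)           # small jobs stay on the CPU

    if __name__ == "__main__":
        data = np.random.rand(20_000, 100)             # 2,000,000 elements -> GPU path
        print(covariance(data).shape)                  # (100, 100)
    ```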

    From an enterprise IT perspective, Turek says organizations should take note of this development, because IBM is building this architecture in a way that enables it to scale down to meet the needs of a broad range of applications. The intention, says Turek, is for IBM to work with members of the OpenPOWER alliance to bring those systems to market in the years ahead.

    In effect, Turek says we’ve collectively come to the end of the road in terms of the performance gains that can be derived from making faster general-purpose processors. The future of enterprise IT, says Turek, will be based on a more federated model that brings compute processing to wherever data happens to be located, rather than always requiring that data be moved to some massive compute engine in the cloud.
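
    A minimal sketch of that federated pattern, with hypothetical site names and helpers: each location computes a compact summary of its own records where they reside, and only those summaries travel back for aggregation, rather than the raw data being shipped to a central compute engine.

    ```python
    # Illustrative sketch only: compute travels to the data rather than the data
    # traveling to a central engine. Sites, records, and helpers are hypothetical.

    # Stand-in for data sets that live at three separate locations.
    SITES = {
        "plant-a": [4.1, 3.9, 4.4, 4.0],
        "plant-b": [5.2, 5.0, 4.8],
        "branch-c": [3.7, 3.8, 4.1, 3.9, 4.0],
    }

    def run_at_site(site_name, task):
        """Pretend to dispatch `task` to the site holding the data; only its
        small return value crosses the network."""
        return task(SITES[site_name])

    def local_summary(records):
        # Executes where the records live and returns a compact summary.
        return {"count": len(records), "total": sum(records)}

    if __name__ == "__main__":
        summaries = [run_at_site(name, local_summary) for name in SITES]
        grand_total = sum(s["total"] for s in summaries)
        grand_count = sum(s["count"] for s in summaries)
        print(f"global mean from {len(summaries)} site summaries: "
              f"{grand_total / grand_count:.3f}")
    ```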

    Naturally, it will be a while before IT organizations see the fruits of these research and development efforts in the enterprise. Nevertheless, the fact that these types of systems are now being built may be a fundamental demarcation point in terms of how IT systems are likely to be designed and constructed going forward.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
