Supercomputing for the Common Enterprise

    How fast is fast? In the world of supercomputers, pretty darn fast. But the real question is how long will it be before that speed and other attributes trickle down to the enterprise level?

    First, some numbers for you. The Department of Energy has just fired up the new Titan cluster at its Oak Ridge National Laboratory in Tennessee. The system is rated at 20 petaflops, or 20 quadrillion floating-point calculations per second, a level matched only by the IBM Sequoia machine at the Lawrence Livermore lab in California. And if that weren't impressive enough, Titan hits that mark while drawing a mere 9 MW of power. That is roughly the draw of a small town, to be sure, but it is only about 30 percent more than what the previous Oak Ridge supercomputer, Jaguar, consumed while maxing out at just 2.3 petaflops.
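To see why the power figure is the real story, a quick back-of-the-envelope calculation helps. Jaguar's draw is not stated directly above; the sketch below infers it from the claim that Titan's 9 MW is roughly a 30 percent increase, so treat the result as an estimate, not a benchmark.

```python
# Efficiency comparison between Titan and its predecessor Jaguar,
# using only the figures quoted in the article. Jaguar's power draw
# is inferred (assumption) from the "30 percent increase" claim.

TITAN_FLOPS = 20e15      # 20 petaflops
TITAN_WATTS = 9e6        # 9 MW
JAGUAR_FLOPS = 2.3e15    # 2.3 petaflops
JAGUAR_WATTS = TITAN_WATTS / 1.30   # inferred: roughly 6.9 MW

titan_eff = TITAN_FLOPS / TITAN_WATTS     # flops per watt
jaguar_eff = JAGUAR_FLOPS / JAGUAR_WATTS

print(f"Titan:  {titan_eff / 1e9:.2f} GF/W")
print(f"Jaguar: {jaguar_eff / 1e9:.2f} GF/W")
print(f"Efficiency gain: {titan_eff / jaguar_eff:.1f}x")
```

In other words, Titan squeezes nearly seven times the useful work out of each watt, which matters more to operators than the raw petaflop count.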

    Built by Cray, Titan consists of more than 18,000 nodes containing Nvidia’s Tesla K20 GPUs and AMD’s 16-core Opteron 6274 devices tied to more than 700 TB of memory. This marks the first time that commodity processors have been harnessed into such a high-level computing design. Its primary function will be running simulations for things like materials development, combustion, climatology and nuclear studies.

    The fact that this kind of performance can be built from the same basic technologies available to run-of-the-mill enterprises suggests that higher-power computing is not far from the data center at all, even though few organizations have need of a full 20 PF — at least not right away. Already, though, commercial platforms are under development that harness commodity systems in highly scalable, parallel architectures. For instance, Dell has mounted a 10 PF system at the University of Texas using its Intel-based PowerEdge C8000 machine. The design uses a standard 8-blade C8220 chassis, with each blade containing two 8-core Xeon E5-2600s, and full system memory of 272 TB. And Dell recently ported the system over to the C8220X chassis, which ups the memory and allows the use of graphics processors as well.

    But even that does not seem to be the floor when it comes to supercomputing power. A company called Adapteva is working on a system it has named Parallella that could come in at a target price of only $99, using open system components and the company's own multicore Epiphany chips. The system is said to churn out 32 gigaflops within a 2-watt power envelope. Initial configurations utilize 16-core Epiphany-III devices, along with a dual-core Zynq-7010 A9 CPU and 1 GB of RAM, all on a single circuit board. Within a few years, however, the company is looking to build systems with more than 1,000 cores.
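The Parallella claim is easier to appreciate in performance-per-watt terms. A rough comparison, using only the figures quoted above (20 petaflops at 9 MW for Titan, 32 gigaflops at 2 W for Parallella), might look like this:

```python
# Back-of-the-envelope flops-per-watt comparison using only the
# figures quoted in the article. Real efficiency depends heavily on
# workload, so these are rough ratios, not benchmark results.

titan_gf_per_watt = (20e15 / 9e6) / 1e9   # ~2.2 GF/W at cluster scale
parallella_gf_per_watt = 32 / 2           # 16 GF/W claimed for the board

ratio = parallella_gf_per_watt / titan_gf_per_watt

print(f"Titan:      {titan_gf_per_watt:.1f} GF/W")
print(f"Parallella: {parallella_gf_per_watt:.1f} GF/W")
print(f"Parallella claims roughly {ratio:.0f}x the flops per watt")
```

Of course, a $99 board and a national-lab cluster are not interchangeable, but the numbers illustrate how far down-market efficient parallel silicon is reaching.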

    Some may argue that as this massively parallel processing power finds its way down market, it ceases to be “super.” This is just a question of semantics, however.

    The fact of the matter is that it won’t be long before the same kind of technology that drives the most data-intensive applications in science and advanced research will be available at reasonable costs to help the enterprise confront its own Big Data challenges.

    And if you couple that kind of computing power with the scalability and flexibility of the cloud, very soon there won’t be much that the enterprise can’t handle.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.