
    HPC Intent on Tipping the Scale

    Predicting the future is always risky. Even if you get it right, there are usually those who, unable to face uncomfortable truths, will seek to prevent the inevitable, or at least bend it to suit narrow interests.

    In the enterprise arena, fortunately, the future is already here. Developments taking place in high-performance computing (HPC) invariably trickle down to workaday data environments, offering a rough path of sorts through the technology forest.

    These days, HPC is almost single-mindedly focused on scale, or exascale to be more precise. But while great strides have already been made in this direction, most researchers estimate that actual exascale systems are still a decade or more away and will require not only new technologies but new applications, data classifications, processes and procedures: in short, an entirely new way of looking at data environments.

    To be sure, though, technology is the foundation on which all the rest will sit, so it’s worthwhile to take a look at what’s in development now to gauge what the data world will look like in the near future. One recent breakthrough is a new silicon-based polymer developed by IBM and Dow Corning that enables printed circuit boards to use optical waveguides rather than electrical signals. The goal is not only to make supercomputers process data faster, but to do so with less energy. So far, the material has proven to be not only fast but reliable as well, with lab results exceeding 2,000 hours of sustained performance in hot, cold and humid environments.

    At the same time, Intel has announced a breakthrough in parallel co-processing that it says will improve application acceleration and migration in HPC settings. Dubbed the Xeon Phi, the system grew out of the Knights Ferry research project, which produced a Many Integrated Core (MIC) architecture by loading 62 modified Xeon cores onto a 22 nm process. As Forrester analyst Richard Fichera describes it, the development marks Intel’s response to the growing use of GPUs from Nvidia and AMD in HPC circles, with the added advantage of being x86-compatible.
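
    One practical upshot of that x86 compatibility, at least in principle, is that garden-variety multithreaded code can be recompiled for the Phi’s many cores rather than rewritten in a GPU-specific language. The sketch below is a generic OpenMP loop in C (my own illustration, not Intel sample code) of the kind such a coprocessor is meant to accelerate; it builds with any OpenMP-capable compiler, e.g. gcc -fopenmp.

    /* Generic OpenMP example: fill an array and sum it in parallel. */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;

        /* Split the loop across all available cores; the reduction clause
           safely combines each thread's partial sum at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++) {
            a[i] = 0.5 * (double)i;
            sum += a[i];
        }

        printf("threads available: %d, sum = %f\n", omp_get_max_threads(), sum);
        return 0;
    }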

    Processing isn’t the only area to be shored up on the way to exascale architectures. Micron is working on a new “hybrid memory cube” that the company says will break down both speed and capacity barriers in memory subsystems. The device combines a high-speed logic layer with DRAM dies stacked using through-silicon vias (TSVs), producing a 15-fold performance gain and a 70 percent energy reduction over DDR3. It is also 90 percent smaller than current RDIMMs and provides application-level scalability. Micron has teamed up with Samsung to form the Hybrid Memory Cube Consortium, which seeks to build interoperability with leading CPUs, GPUs and FPGAs.

    This is only the tip of the iceberg, of course. There are untold numbers of development programs already under way that are likely to push the limits of scale even further, although only a few are likely to make it to commercial production.

    Once we’ve achieved exascale nirvana, though, what then? Well, zettascale (10^21) is only three orders of magnitude beyond exascale (10^18), followed by yottascale (10^24).
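
    For anyone keeping score, those prefixes are plain SI powers of ten, each step a factor of 1,000. The toy C snippet below simply prints the ladder to make the jumps concrete.

    /* Print the performance-scale ladder; each rung is 1,000x the last. */
    #include <stdio.h>

    int main(void) {
        const char *prefixes[] = { "peta", "exa", "zetta", "yotta" };
        const int exponents[]  = { 15, 18, 21, 24 };

        for (int i = 0; i < 4; i++)
            printf("%sscale = 10^%d FLOPS\n", prefixes[i], exponents[i]);

        /* Googolscale, for the record, would be 10^100. */
        return 0;
    }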

    I wonder if anyone is even thinking about googolscale yet. Bet so.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
