The growth curve for high-performance computing (HPC) has been heading up for some time. Long the purview of large government and research organizations, HPC technology saw a massive influx into top industries like bio-tech and architecture/design over the past decade.
Now, it seems the technology is about to head down market as even garden-variety enterprises find they need the extra horsepower to accommodate increasingly complex data environments and rapidly expanding workloads.
Tabor Research, for one, predicts that the number of HPC installations around the world will double over the next three years or so, even as the total market value remains flat at about $23 billion due to falling hardware prices. That means more computing power will find its way to a greater number of organizations, many of which are only just starting to comprehend the value that HPC could bring to their data operations.
One of the primary applications for HPC technology is archiving, according to Henry Newman, CEO and CTO of Instrumental, Inc. With many archives approaching the 10 PB level, and the notion of 100 PB systems no longer considered outlandish, tasks like checksum validation prove uniquely suited to HPC's parallel architecture. When you think about it, such a job is not altogether unlike genetic pattern matching or many of the other integer-intensive tasks that HPC systems handle regularly.
It's also telling that some high-end computing systems are starting to warm up to more mundane operating environments. A case in point is SGI's decision to load Windows Server 2008 R2 onto its Altix UV 1000 platform. Powered by the Xeon 7500 and the company's NUMAlink 5 interconnect, the system offers the potential for a 256-socket cluster with more than 2,000 cores. Adding a Windows stack to the line, which would include the Hyper-V hypervisor as well as SQL Server and HPC Server extensions, opens up new possibilities for data warehousing and high-end application support, including massively parallel Excel processing.
It's also possible that many enterprises could wind up on HPC platforms without even trying. As CTO Edge's Mike Vizard pointed out recently, applications of all stripes are becoming increasingly graphics-heavy, which is why many cloud services are deploying GPUs and HPC platforms to handle the data increase. All that's needed is greater long-haul bandwidth and perhaps tighter integration between GPUs and advanced ARM technology, as Nvidia recently demonstrated with its Project Denver program, and even mid-level organizations will have little trouble accommodating new rich-media BI, CRM and other applications through the cloud.
As usual, integration poses the greatest challenge for HPC in the enterprise. Maintaining application performance across different classes of processing and networking capabilities will require some pretty intricate fine-tuning on the part of either in-house staff or system integrators.
But with trends in data loads and data management on such a steady course, the extra punch of an HPC platform might be a necessity before too long.