The cloud has long been viewed as a convenient way to offload data from traditional IT infrastructure. Increasingly, however, enterprises are looking to tap its vast resources to build the high-performance computing (HPC) infrastructure they’ll need for Big Data and the Internet of Things (IoT).
HPC has been around for decades but was traditionally reserved for scientific research, statistical modeling and other functions that require multiple teraflops of performance. But now that data-generating digital services are starting to displace the traditional business model, organizations of all sizes are finding that they need HPC as well, though few have the means to build that infrastructure in-house.
For cloud providers, this is emerging as a key market because it allows them to more fully utilize the massive data infrastructure they’ve been provisioning over the past decade. Microsoft recently upped its game in the HPC space by acquiring Cycle Computing, which has developed a system that allows users to run massive workloads across multiple clouds. The company already counts Novartis and NASA as clients; according to ZDNet, these existing customers will still be allowed to spread workloads across non-Azure clouds, while future customers will likely be confined to Azure.
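To make the idea of spreading workloads across multiple clouds concrete, here is a minimal sketch of a capacity-aware job dispatcher. Everything in it is hypothetical: the provider names, core counts and the greedy placement policy are illustrative assumptions, not Cycle Computing's actual system or any real cloud API.

```python
# Hypothetical sketch: distributing a batch of HPC jobs across several
# cloud targets based on available capacity. Names and numbers are
# illustrative only.

from dataclasses import dataclass, field

@dataclass
class CloudTarget:
    name: str
    free_cores: int                      # cores currently available
    jobs: list = field(default_factory=list)

def dispatch(jobs, targets):
    """Greedily place each job on the target with the most free cores."""
    placements = {}
    for job_name, cores_needed in jobs:
        best = max(targets, key=lambda t: t.free_cores)
        if best.free_cores < cores_needed:
            raise RuntimeError(f"no capacity for {job_name}")
        best.free_cores -= cores_needed
        best.jobs.append(job_name)
        placements[job_name] = best.name
    return placements

targets = [CloudTarget("azure", 10_000), CloudTarget("other-cloud", 4_000)]
jobs = [("mc-sim", 6_000), ("cfd-run", 3_000), ("post-proc", 2_000)]
print(dispatch(jobs, targets))
# → {'mc-sim': 'azure', 'cfd-run': 'azure', 'post-proc': 'other-cloud'}
```

A real multi-cloud scheduler would also weigh per-provider pricing, data locality and licensing constraints, but the core decision is the same: match each workload to wherever capacity is available.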
Meanwhile, Equinix is looking to garner its share of the HPC market by teaming up with Rescale, developer of the ScaleX HyperLink system that enables rapid migration of large workloads to the cloud. The platform will be made available on the Equinix Cloud Exchange, giving users single-port access to both Equinix’s network of data centers and those of third-party providers. Under the ScaleX platform, users can also leverage other high-speed connectivity solutions, such as AWS Direct Connect, Azure ExpressRoute and IBM Direct Link.
Over in Europe, organizations will soon have access to HPC infrastructure from Huawei through the Open Telekom Cloud service that the company has established with Deutsche Telekom. The service will leverage InfiniBand fabric technology from Mellanox that provides up to 100 Gbps Enhanced Data Rate (EDR) service. The HPC Cloud 2.0 service will also provide high-performance local storage using a parallel file system, plus instant data erase for improved security. It is also compatible with OpenStack and other open source solutions.
Other vendors are implementing HPC cloud infrastructure using local appliances that leverage public and private resources. Altair’s PBScloud.io solution utilizes an appliance model, which the company says allows users to deploy third-party applications under multiple licensing arrangements while at the same time accessing multiple public and/or private resource sets. In addition, the setup provides for customized security policies, end-to-end lifecycle management and rapid infrastructure deployment and provisioning.
The cloud has already proven adept at lowering the cost of the infrastructure organizations need to support standard workloads, and there is no reason to think it can’t do the same for HPC. In fact, it is probably the only practical way to democratize high-performance resources, giving small and medium-sized businesses access to the same technology previously reserved for the Fortune 500.
What will be most interesting, however, is not the technology side of cloud-based HPC, but the new ideas and new ways of creating value it will enable, ones that would otherwise have been lost for lack of resources.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.