
    HPC in the Enterprise: Not as Daunting as It Seems


    It seems that the enterprise is both intrigued and yet intimidated by the thought of incorporating high-performance computing (HPC) into the data center.

    On the one hand, who doesn’t want a powerful, scalable and highly flexible data infrastructure at their disposal? On the other, the financial, technical and logistical challenges to making it work properly are undoubtedly daunting.

    Or are they? Most people view HPC in terms of the home-grown, scale-out infrastructure that populates the data centers of Google, Facebook and other Web giants. But as the technology matures, it is being packaged in increasingly modular form factors that can be incorporated into the standard enterprise data center relatively easily.

    Indeed, as Enterprise Tech’s Alison Diana found out from top executives at Cray, Psychsoftpc and other HPC specialists, enterprise deployments are seen as the next big market opportunity, which is why many of the leading platforms are being retooled with advanced power and cooling systems for deployment into critical data infrastructure. The HPC industry, in fact, is working to overcome the persistent myths that advanced computing requires specialized support, or even entirely new data centers, before it can play an active role in emerging data processes like analytics and high-speed transactions.

    Many channel providers are already releasing HPC solutions in appliance form, designed to slot into existing infrastructure like any other appliance. Cloudian’s recently released HyperStore FL3000 array, for instance, scales up to 3.8 petabytes within a single rack and features hot-swappable components for service continuity, plus self-service policy support that allows the enterprise to create a variety of storage environments for diverse application sets. The system also features built-in S3 support, backed by object streaming and dynamic auto-tiering, to allow workloads to be shifted on- and off-premises quickly and easily.
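    Because the array exposes the S3 API, applications written against cloud object storage can generally target an on-premises system with little more than a change of endpoint. The snippet below is a minimal sketch of that idea using Python’s boto3 client; the endpoint URL, bucket name and credentials are placeholders for illustration, not details from Cloudian’s documentation.

        # Minimal sketch: writing to an S3-compatible endpoint with boto3.
        # The endpoint, bucket and credentials below are hypothetical.
        import boto3

        s3 = boto3.client(
            "s3",
            endpoint_url="https://objectstore.example.internal",  # assumed on-prem endpoint
            aws_access_key_id="ACCESS_KEY",
            aws_secret_access_key="SECRET_KEY",
        )

        # Create a bucket and store an object exactly as an application would
        # against AWS S3 -- the point of S3 compatibility is that this code
        # does not change when the storage target moves on- or off-premises.
        s3.create_bucket(Bucket="analytics-archive")
        s3.put_object(
            Bucket="analytics-archive",
            Key="2015/q3/report.csv",
            Body=b"col_a,col_b\n1,2\n",
        )
        print(s3.list_objects_v2(Bucket="analytics-archive")["KeyCount"])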

    Meanwhile, VMware is taking steps to ensure that newly deployed HPC infrastructure meshes well with legacy virtual environments to enable highly scalable software-defined architectures. The EVO SDDC platform, which replaces the EVO: RACK system, is designed to manage thousands of nodes as a single system and then extend that management structure across rack-scale deployments. At the same time, it can reach beyond physical and virtual servers to incorporate software constructs, switches and other elements within the data center, providing an integrated management environment across traditional, HPC and virtually any other type of infrastructure that finds its way into the enterprise.

    One of the key aspects of enterprise-class HPC infrastructure is visibility. As The Platform notes, the initial web-scale users quickly found out that highly integrated environments can be thrown out of whack by even small failures or simple discrepancies between components. This is why they routinely drill down into everything from memory sticks to power substations to ensure tight performance across the ecosystem. And since the standard enterprise cannot afford such a burden, companies like Vapor IO are leveraging new software and open hardware designs to make it easier. The company’s Vapor Chamber, in fact, is designed specifically to match the performance that Amazon and others achieve with their scale-out architectures, allowing organizations to duplicate those cloud capabilities in-house, or to compete with them.
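    As a rough illustration of what that component-level visibility involves, the sketch below polls a handful of node metrics and flags anything drifting past a threshold. It is a generic example rather than Vapor IO’s tooling, and the metrics and thresholds are assumptions chosen for clarity.

        # Generic sketch of node-level health polling (not Vapor IO's software).
        # Thresholds are hypothetical; a real fleet monitor would track far more,
        # from DIMM error counts to power-feed telemetry.
        import psutil

        THRESHOLDS = {"cpu_percent": 90.0, "memory_percent": 85.0, "disk_percent": 80.0}

        def node_health():
            """Collect a few node-level metrics a fleet monitor might watch."""
            return {
                "cpu_percent": psutil.cpu_percent(interval=1),
                "memory_percent": psutil.virtual_memory().percent,
                "disk_percent": psutil.disk_usage("/").percent,
            }

        def flag_discrepancies(metrics):
            """Return only the metrics that exceed their thresholds."""
            return {name: value for name, value in metrics.items() if value > THRESHOLDS[name]}

        if __name__ == "__main__":
            alerts = flag_discrepancies(node_health())
            print(alerts or "node within normal operating range")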

    Probably the biggest misconception regarding hyperscale, however, is that it is only useful to organizations that experience the heavy data loads of the web-scale giants or the Fortune 500 multinationals. But the fact is that every organization that has a computer will experience heavier data loads in the years to come – at least, if it wants to stay in business – and the infrastructure needed to deal with those loads will have to be smarter, more efficient and smaller in footprint than what is deployed today.

    Rather than thinking in terms of hyperscale, then, it would be wiser to think of it as hyper-productive.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

