
    HPC Processors Becoming Part of Everyday Enterprise Setting


The popular trend in server technology and design these days is low-power processing. New generations of ARM and Atom chips are aimed not only at reducing energy consumption in the data center, but also at providing a more flexible, scalable environment that is better attuned to the needs of cloud and mobile computing.

But there is still plenty of action at the high end. While power envelopes and efficient data handling remain top concerns across all processor architectures, there is a strong push to bring high-performance computing (HPC) into the realm of workaday enterprise infrastructure.

    A case in point is IBM’s latest tie-up with Nvidia aimed at matching the parallel capabilities of the Tesla GPU with the Power architecture in pursuit of advanced business intelligence and predictive analytics applications. The first iteration, due to ship early next year, pairs the 12-core Power8 device with the Tesla K40, ushering in what the companies describe as Watson-level supercomputing capabilities for the broader enterprise market.
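The draw of such a pairing is the GPU's massive data parallelism for the aggregate operations that dominate analytics workloads. As a rough, hypothetical illustration (not IBM's or Nvidia's actual software stack), the sketch below expresses a large reduction with C++17 standard parallelism; offload-capable toolchains such as Nvidia's nvc++ with -stdpar can map the same source onto a GPU unchanged:

```cpp
// Illustrative sketch only: a generic parallel reduction of the kind a
// Tesla-class GPU accelerates. The data and workload are hypothetical.
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Stand-in for, say, millions of per-transaction risk scores.
    std::vector<double> scores(10'000'000, 0.5);

    // par_unseq lets the runtime parallelize and vectorize the reduction;
    // with an offload-capable compiler the same line can run on the GPU.
    double total = std::reduce(std::execution::par_unseq,
                               scores.begin(), scores.end(), 0.0);

    std::cout << "aggregate score: " << total << '\n';
}
```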

Meanwhile, Intel is close to releasing the new Xeon Phi processor, the so-called Knights Landing device, which will function as a host processor in standard rack configurations and provide enough performance to enable full native-application support without the need for a co-processor. The intent is to improve throughput and lower latency by eliminating the need to shuttle data across memory, interconnect and networking architectures. Best of all, the platform uses standard programming models, so developers won't have to write code for individual machines.
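That "standard programming models" claim is worth unpacking: because Knights Landing boots as a host processor, ordinary threaded code is meant to run on it natively. A minimal sketch, assuming nothing beyond stock OpenMP (the workload here is hypothetical):

```cpp
// Minimal sketch: plain OpenMP worksharing. On a self-booting part like
// Knights Landing, the same source targets a standard Xeon or the Phi --
// a recompile rather than a rewrite for a co-processor offload model.
#include <cstdio>
#include <omp.h>
#include <vector>

int main() {
    const int n = 1'000'000;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // The compiler spreads iterations across however many cores and SIMD
    // lanes the host exposes; no device-specific code paths are needed.
    #pragma omp parallel for simd
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];

    std::printf("c[0] = %.1f using up to %d threads\n",
                c[0], omp_get_max_threads());
}
```

The design point is that the parallelism lives in a portable pragma rather than in device-management code, which is exactly what makes a host-processor Phi different from an offload accelerator.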

    Yet another new HPC-class device is coming from Micron Technology. The Automata Processor (AP) serves as a data accelerator by leveraging parallel memory architectures to conduct high-speed search and analysis of unstructured data. The company describes the device as a “fabric” of perhaps millions of processing elements that communicate through a task-specific processing engine that can churn through data at a record pace to find hidden patterns and relationships. The company is targeting the device at high-end applications like bioinformatics, video/image analytics and network security.
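In software terms, what the AP parallelizes is automata evaluation: every pattern machine sees every input symbol at once. The toy C++ sketch below (patterns and matching logic are simplified and hypothetical) steps a few literal-pattern matchers over a stream in lockstep; the hardware does the equivalent across a whole fabric of elements each clock cycle:

```cpp
// Illustrative caricature of the Automata Processor's model: many tiny
// pattern automata advance over the same input stream in lockstep.
#include <iostream>
#include <string>
#include <vector>

struct Matcher {
    std::string pattern;  // literal this tiny automaton recognizes
    size_t state = 0;     // number of characters matched so far
};

int main() {
    std::vector<Matcher> matchers = {{"GATTACA"}, {"attack"}, {"root"}};
    std::string stream = "log: GATTACA seen; root login; attack vector";

    for (size_t pos = 0; pos < stream.size(); ++pos) {
        char c = stream[pos];
        // Every automaton sees each symbol once; in the AP this fan-out
        // happens across the fabric in a single cycle.
        for (auto& m : matchers) {
            // Simplified restart on mismatch; real matchers use proper
            // failure transitions (e.g., Aho-Corasick).
            m.state = (c == m.pattern[m.state]) ? m.state + 1
                                                : (c == m.pattern[0] ? 1 : 0);
            if (m.state == m.pattern.size()) {
                std::cout << "matched \"" << m.pattern
                          << "\" ending at offset " << pos << '\n';
                m.state = 0;
            }
        }
    }
}
```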

These developments are coming at a time when the line between standard enterprise environments and HPC is blurring. According to Research and Markets, nearly three-quarters of the HPC market runs on the x86 platform, with more than half of deployments using quad-core or higher architectures. As in the enterprise market, Intel owns the vast majority of deployments, holding roughly 80 percent market share versus rival AMD.

These developments are further proof that underlying hardware is still relevant in the “software-defined” age. Certain workloads simply require the extra oomph that high-performance platforms provide, and increasingly, those workloads will find their way into standard enterprise settings.

    Low-power architectures and the cloud in general will undoubtedly take on the vast majority of the load going forward, but enterprise managers would still be wise to keep higher-end architectures at the ready for tasks that require more muscle.

Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
