New Servers Target Specialized Workloads

    It’s somewhat ironic that in an age when software-defined architectures are fueling demand for generic commodity hardware, specialized hardware optimized for key workloads is hitting the channel as well.

    The movement seems to be based on the assumption that the enterprise will spend less on hardware going forward, but when it does spend, it will be for highly critical functions like Big Data analytics, advanced data modeling or key vertical applications. At the same time, cloud providers will continue to vie for these workloads by offering ready-made virtual infrastructure built on optimized hardware and software.

    PSSC Labs recently released the newest version of its CloudOOP line, the 24000, which is targeted at Hortonworks, Cloudera, MapR and other Big Data platforms. The system doubles the storage capacity of the existing 12000 servers to 240 TB without expanding the 2U footprint of the original design. In addition to a pair of E5 Series Xeons with up to 22 cores and 512 GB of memory, the device sports 24 SSD, SATA III or SAS drives, each with independent access to the motherboard to remove bottlenecks. Standard dual GigE connectivity can be supplemented with 10/40/100 GbE adapters from Intel, Mellanox, Solarflare and others.

    Meanwhile, Penguin Computing is out with the new Tundra Extreme Scale server built around the ARMv8-based ThunderX2 processor from Cavium. The platform can be built to order for specialized applications in finance, government, bioinformatics and other industries, and the company claims it can deliver HPC performance at a lower TCO than rival platforms. The system provides dual-socket coherent connectivity, high memory bandwidth and capacity, and x16 PCIe Gen3 ports for integrated scale-out capability. The Tundra ES Valkre, built on the OCP form factor, is shipping now, while a standard 19-inch rack-mount model will be available later this year.

    Supermicro is targeting key functions like oil and gas modeling, fluid dynamics and AI with the X11 system. In addition to high-IOPS performance from an all-flash NVMe architecture, the family pairs Skylake-generation Xeon processors with multiple Nvidia GPUs. The line consists of the 2U, four-node BigTwin chassis, as well as several blade designs ranging from 3U to 8U configurations, plus Super and Ultra servers providing a mix of high-scale and high-performance designs.

    Specialized hardware configurations are also drawing the interest of vendors struggling to maintain market share in the commodity era. HPE recently announced a new series of Apollo and SGI machines aimed at HPC and artificial intelligence applications. The line consists of the Apollo 6000 Gen 10 and Series 10 machines, as well as the SGI 8600, all of which are optimized for high-speed algorithmic processing, rapid scalability and high utilization. The SGI 8600 features liquid cooling and integrated switching for petascale operation past 10,000 nodes, while the Apollo machines offer either high scalability (more than 300 teraflops per rack for the 6000) or low cost and easy deployment (a 1U dual-socket form factor with support for four GPU cards on the Series 10).

    The enterprise is not likely to give up its internal infrastructure any time soon, even in the face of relentless pricing pressure from the public cloud. But it will continue to gravitate toward hardware models that provide simplified management and low cost of operations, as well as the flexibility to reconfigure resources at a moment’s notice to adapt to changing data requirements. At the same time, this infrastructure will be tasked with supporting workloads of a highly critical nature, rather than the generic back-office stuff that is migrating to the cloud.

    This is a tough challenge for server manufacturers, who must support highly targeted yet highly customizable data environments at price points that satisfy increasingly stingy IT budgets.

    This isn’t likely to be a huge market compared to the data center’s glory days, but it should prove lucrative enough for a committed manufacturer to make a go of it.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.