Although the fully software-defined data center remains more theory than reality, enterprises of all stripes are keenly interested in lifting key applications off dedicated hardware and into the more dynamic world of virtual infrastructure.
To date, these efforts have focused mostly on back-office functions, such as CRM and business intelligence, that deal mainly with database management and number crunching. Graphics-intensive applications like CAD and visualization are another matter: the general-purpose CPUs that populate virtualized servers tend to bog down under those workloads.
Lately, though, chip designers have been churning out a steady stream of GPU-equipped hardware aimed at lending greater support to virtualized, data-intensive workloads. At VMworld this week, AMD brought out its Multiuser GPU, which is said to support up to 15 users with full ISV-certified workstation performance for graphics-heavy and highly accelerated applications. The device is virtualized through the open SR-IOV standard and supports OpenCL, OpenGL, DirectX and other rich media environments, as well as VMware's vSphere and ESXi.
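SR-IOV, the standard AMD's device builds on, lets a single physical PCIe device present multiple "virtual functions," each of which the hypervisor can pass through to a separate VM. As a rough illustration of the mechanics, here is a minimal sketch of how virtual functions are enabled on a Linux host through the kernel's standard sysfs interface; the PCI address and VF count are placeholders, and the exact workflow varies by GPU vendor and hypervisor:

```shell
# Hypothetical PCI address of an SR-IOV-capable device; find yours with lspci.
GPU_ADDR="0000:03:00.0"

# Ask the device to expose 4 virtual functions (VFs). Each VF then
# appears as its own PCIe device that can be assigned to a different VM.
echo 4 > /sys/bus/pci/devices/${GPU_ADDR}/sriov_numvfs

# List the newly created VFs.
ls -l /sys/bus/pci/devices/${GPU_ADDR}/virtfn*
```

Because the partitioning happens in hardware, each VM gets direct access to its slice of the GPU without the hypervisor mediating every call.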
Meanwhile, NVIDIA is out with the beta version of its GRID 2.0 platform, which is already drawing support from leading hardware providers like Cisco, Dell, HP and Lenovo. The system is intended to deliver rich-media applications across a multitude of devices, giving graphics workloads the same virtualization support that standard social media, collaboration and other sharing-based applications already enjoy. The new version doubles user density over the existing platform to 128 users per server and utilizes the new Maxwell GPU to double performance as well. NVIDIA has also added blade server support, allowing the platform to function within highly dense server configurations.
Companies like Pivot3 and Amulet Hotkey are already looking to leverage GRID 2.0 for hyperconverged infrastructure. The two have teamed up to deploy Pivot3's HCI platform on Dell PowerEdge M1000e and FX2 servers for key Amulet clients in the enterprise, defense and engineering fields. The solution incorporates GRID 2.0 into Amulet's CoreStation PCoIP platform to enable hyperconverged deployment of heavy graphics workloads, including VDI, while maintaining centralized security and management.
At the same time, Supermicro is employing a range of GPUs throughout its hyperconverged portfolio in a bid to gain a larger share of emerging advanced computing markets. The company has placed GPUs within its 2U Ultra Hyper-Speed SuperServer to support data-intensive workloads, backing them up with DDR4 memory, PCIe 3.0 connectivity and 12Gbps SAS support. In addition, the MBI-6118D MicroBlade pairs CPU and GPU performance with shared L3 cache and 128MB of embedded graphics cache, all while driving greater power efficiency through a 14nm architecture. And the GPU Blade, part of the SuperBlade portfolio, now supports up to 120 GPUs/CPUs per 42U rack.
Server designers have been utilizing GPUs for higher performance and lower power consumption for some time, even for non-graphical workloads. But with CPUs poised to start feeling the strain from Big Data, the Internet of Things and other data-intensive applications, careful deployment of GPUs is likely to become the norm in the data center rather than the exception.
If integrated properly, a GPU can not only provide offload support for CPUs but also help bring more of the enterprise workload off bare-metal hardware and onto the more flexible virtual layer.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.