The idea of “composable infrastructure” is gaining steam throughout the IT industry, but is this really a new thing or is it simply another way to market the same modular and software-defined technologies that have already entered the channel?
In all likelihood, it’s a little of both.
Hewlett Packard Enterprise spelled out its vision of a composable future, dubbed “Project Synergy,” which naturally features a healthy dose of HPE hardware and software, all tied together with a unified API that covers functions like firmware and driver updating, BIOS configuration and network/storage provisioning. The aim is to disaggregate infrastructure to the point at which applications can quickly compose and reconfigure IT resources to accomplish tasks with minimal resource consumption and contention.
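To make the idea concrete, here is a minimal sketch of what driving such a unified API could look like from application code: one authenticated session, one profile template that describes firmware level, BIOS settings, storage and networking together. The endpoint paths, payload fields and header names below are invented for illustration; they are not HPE’s published interface.

```python
# Illustrative sketch only: the endpoints, field names and auth flow are
# assumptions for demonstration, not HPE's actual API.
import requests

BASE = "https://composer.example.com/rest"  # hypothetical management appliance

# Authenticate once; later calls reuse the session token.
session = requests.post(
    f"{BASE}/login-sessions",
    json={"userName": "admin", "password": "secret"},
    verify=False,  # self-signed appliance cert, sketch only
).json()
headers = {"Auth": session["sessionID"]}

# A single profile describes compute, firmware, storage and network
# together, so an application can compose a node in one call.
profile = {
    "name": "web-tier-node",
    "firmwareBaseline": "2024.09",            # firmware/driver level to enforce
    "biosSettings": {"hyperthreading": "enabled"},
    "sanStorage": [{"volumeSizeGiB": 500, "raidLevel": "RAID5"}],
    "networks": [{"name": "prod-vlan", "bandwidthGbps": 10}],
}
resp = requests.post(f"{BASE}/server-profiles", json=profile, headers=headers,
                     verify=False)
print(resp.status_code, resp.json().get("uri"))
```

The point of the single template is the one Miller makes below: provisioning compute without also provisioning its firmware, storage and network connectivity just moves the manual work elsewhere.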
The problem with current infrastructure is that it is too static for an increasingly fluid data ecosystem, says HPE’s Paul Miller. Many organizations have bought into the concept of the “bi-modal” enterprise, in which traditional and emerging applications are accommodated by dual sets of infrastructure. Not only is this wasteful, but it fails to deliver the tailored experience users demand if, say, your servers offer disaggregated memory pools but still require manual connections to SAN storage. This is why composable infrastructure must be implemented from top to bottom, requiring not just software-defined architectures but entirely new forms of intelligence and application-level coding.
There will undoubtedly be a lot of composable infrastructure-washing as the market progresses, says Cisco’s James Leach, but there are ways to tell true purpose-built systems from rebranded ones. First, there will be new hardware designed specifically for disaggregation, such as the UCS M-Series and C300 servers. Then there will be an underlying framework to recompose these disparate resources into working environments. Cue the System Link ASIC, which supports functions like subsystem disaggregation and extension of the control plane into hardware. This ensures that resources can be decoupled from their immediate counterparts and then recoupled with other systems elsewhere on the virtual architecture, even if they are miles apart. All of this, by the way, must be overseen by a unified management framework that enables consistent, policy-based control across the entire architecture.
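Leach’s description boils down to a pool-and-policy model: resources sit in shared pools, and a management layer carves them into logical systems on demand, then returns them for reuse. The toy Python below sketches that model; the class names and policy fields are assumptions for illustration and do not correspond to Cisco’s actual System Link or UCS management interfaces.

```python
# A toy model of policy-based composition over disaggregated pools.
# All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    cpus: int
    memory_gb: int
    disks: list

@dataclass
class Policy:
    name: str
    cpus: int
    memory_gb: int
    disks: int

def compose(pool, policy):
    """Carve a logical server out of the shared pool per policy."""
    if (pool.cpus < policy.cpus or pool.memory_gb < policy.memory_gb
            or len(pool.disks) < policy.disks):
        raise RuntimeError(f"pool cannot satisfy policy {policy.name}")
    pool.cpus -= policy.cpus
    pool.memory_gb -= policy.memory_gb
    granted = [pool.disks.pop() for _ in range(policy.disks)]
    return {"policy": policy.name, "cpus": policy.cpus,
            "memory_gb": policy.memory_gb, "disks": granted}

def decompose(pool, node):
    """Return a logical server's resources to the pool for reuse."""
    pool.cpus += node["cpus"]
    pool.memory_gb += node["memory_gb"]
    pool.disks.extend(node["disks"])

pool = ResourcePool(cpus=64, memory_gb=1024,
                    disks=[f"d{i}" for i in range(16)])
web = compose(pool, Policy("web-tier", cpus=8, memory_gb=64, disks=2))
decompose(pool, web)  # decoupled resources are recoupled elsewhere later
```

The decouple/recouple cycle is the whole trick: the same physical disk that backed the web tier this morning can back an analytics node this afternoon, with policy, not cabling, deciding where it lands.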
The key element in composable infrastructure is a new, more flexible interconnect between CPU, RAM, disk and networking components, which allows these commodities to be connected on the fly, says IT architect Rob Hirschfeld. Like most emerging technologies, however, these setups work well in the lab but tend to break down when scaled into production environments. For one thing, physical resources cannot be oversubscribed, so each workload placement tends to strand idle capacity in fragments too small to reuse. And for most normal enterprise operations, virtual machines are already highly composable, and portable as well.
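A back-of-the-envelope sketch (with made-up numbers) shows the fragmentation problem Hirschfeld describes: when physical resources cannot be oversubscribed, a first-fit scheduler leaves fragments too small to host the next request, even when aggregate idle capacity would cover it.

```python
# Illustration of fragmentation without oversubscription.
# Numbers are invented for demonstration.
nodes = [{"id": i, "free_cores": 16} for i in range(4)]  # 64 physical cores

workloads = [10, 10, 10, 10, 5, 5, 8]  # per-workload core demands

placed, stranded = [], []
for demand in workloads:
    # First-fit: take the first node with enough *physical* headroom.
    node = next((n for n in nodes if n["free_cores"] >= demand), None)
    if node:
        node["free_cores"] -= demand
        placed.append(demand)
    else:
        stranded.append(demand)

idle = sum(n["free_cores"] for n in nodes)
print(f"placed {sum(placed)} cores, {idle} cores idle in fragments,"
      f" {sum(stranded)} cores of demand unplaceable")
```

Run it and 50 of the 64 cores get placed, 14 sit idle in fragments of 1 and 6 cores, and the final 8-core request fails even though twice that much capacity is nominally free. A hypervisor would simply oversubscribe; physical composition cannot.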
So is composable infrastructure real, or just a pipe dream? The vendors promising this level of functionality say it’s real, and they have the demos to prove it. But the road from the trade show floor to the working enterprise environment is a long one, and it’s a good bet that once all the bugs are worked out and the compromises are made, composable infrastructure won’t be quite as ideal as advertised.
But it will likely be better than what we have now.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.