The mainframe is back in business in the enterprise, a development that comes as a shock to those who predicted that the cloud would have taken over by now.
In reality, the mainframe never left the enterprise, at least not the largest organizations, which need substantial computing power. But now that scale and modularity are in high demand, many organizations are looking at the mainframe as a base on which to build Big Data infrastructure.
This is good news for IBM, of course, which has steadfastly supported the mainframe through the decades when distributed blade architectures were all the rage. The company recently launched two new mainframe models under the LinuxONE banner, the Emperor and the entry-level Rockhopper, which run Linux distributions including Canonical’s Ubuntu. The combo is targeted toward the rising cadre of Big Data tools, such as Apache Spark, MongoDB and PostgreSQL, and will likely become the focus of IBM’s contribution to the new Open Mainframe Project, which looks to do for the mainframe what Facebook’s Open Compute Project is doing for scale-out commodity infrastructure.
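Part of what makes tools like Spark attractive on this platform is that they are largely architecture-neutral. As a rough illustration only, the sketch below is a minimal PySpark word-count job; because Spark runs on the JVM, the same script would in principle run unchanged on an x86 cluster or on Linux on a mainframe. The application name and file path are placeholders, not anything tied to IBM's systems.

```python
# Minimal PySpark word-count sketch. Spark is JVM-based, so the same
# code is portable across architectures (x86, s390x, etc.) as long as
# a Spark runtime is available on the host Linux system.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-demo").getOrCreate()

# "logs.txt" is a stand-in path; point it at any text data set.
lines = spark.read.text("logs.txt").rdd.map(lambda row: row[0])

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# Print a small sample of the results.
for word, count in counts.take(10):
    print(word, count)

spark.stop()
```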
One of the mainframe’s key advantages in the Big Data era, says Forbes contributor Jason Bloomberg, is how difficult it is to migrate large workloads off of legacy systems and onto the cloud. In many cases, these projects are launched without a clear understanding of the challenges involved, only to bog down in complexity when it is too late to turn back but exorbitantly expensive to push forward. An open source mainframe could prove to be a lifesaver because it offers the flexibility of a distributed architecture with the familiarity of a legacy system.
This is part of the reason IBM has had so much success with the mainframe over the years even while it was out of fashion, according to Compuware CEO Chris O’Malley. Sales of the z13, the machine on which the Emperor and Rockhopper lines are based, rose 9 percent in the most recent quarter, on top of an equally strong quarter in 2014. Meanwhile, recent surveys indicate that legacy mainframes are being repurposed for emerging workloads at a rapid pace and, in fact, are seen as key elements in the drive toward greater innovation and new business models.
And believe it or not, the mainframe can also be the key to a more energy-efficient enterprise. As Rocket Software’s Bryan Smith noted recently at Network World, a mainframe with 10 TB of memory and 141 configurable processors delivers output equivalent to a large cluster of x86 machines, but with a smaller footprint and lower energy consumption. The tight hardware/software integration of the mainframe can also leverage gains in memory, I/O and processing to drive greater efficiency than non-native approaches.
Clearly, the mainframe is not for everyone. But it is fair to say the platform is likely to become more mainstream in enterprise circles as even mid-sized organizations begin to encounter increasingly heavy loads from collaborative, file-sharing apps and the sensor-driven, machine-to-machine (M2M) feeds that characterize the Internet of Things.
For these organizations, rapid deployment of physical-layer infrastructure will be a top priority, and the easiest way to do that is through an integrated computing platform with open software to accommodate future expansion.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.