If past is truly prologue, then it shouldn’t come as a surprise to anyone who has studied the history of data infrastructure that virtualization, advanced cloud architectures and open, distributed computing models are starting to look a lot like the mainframe of old—albeit on a larger scale.
Everywhere you look, in fact, people are talking about pooled resources, higher utilization rates, integrated systems and a host of other mainframe-like features intended to help the enterprise cope with the rising tide of digital information. Put another way: If the network is the new PC, then the data center is the new “mainframe.”
Of course, this new mainframe data center will differ from the old in a number of ways, most notably in the skill sets and development environments needed to run it. At the recent OCP Summit, for instance, there was no shortage of speakers highlighting the need for organizations to ramp up their knowledge of next-generation virtual and cloud technologies that will shift workaday infrastructure management tasks from the physical layer to more flexible software-defined constructs. It’s worth noting, however, that the virtualization and resource utilization techniques that ushered in the cloud were not created out of whole cloth during the client-server period, but were in fact carried over from earlier mainframe environments.
This fact isn’t lost on many of the leading cloud developers, such as Equinix’s Raouf Abdel, who recently stated flatly to Forbes: “The future is infrastructure. We are almost migrating back to the mainframe model.” Almost, but not quite. The key difference is that back in the day, the mainframe was a singular computing entity feeding any number of local terminals. These days, disparate data infrastructure is shared over great distances, sometimes half a world away, and is often owned and managed by multiple organizations. But the end result is largely the same: a monolithic infrastructure that can be reconfigured and repurposed mostly in software, with little or no alteration on the hardware side, save the occasional upgrade to new, more powerful equipment.
This must generate a great deal of satisfaction among mainframe-using CIOs who have had to endure the scorn of colleagues for sticking with “old technology” during the 20-year client-server run. But the question remains: Is traditional big iron capable of supporting modern scale-out data environments? Most certainly, says CA Technologies’ Denise Dubie. In fact, why bother ripping out long-serving mainframes when they can provide all the power, security and reliability needed for Big Data loads and burgeoning transactional environments? It might not make sense to base greenfield deployments on the mainframe, but this is one area in which legacy infrastructure can give large organizations looking to go big on the cloud a major head start.
To do that, though, you’ll need a new management regime that can bring the mainframe into the growing API economy. SOA Software, for instance, offers a new Lifecycle Manager suite that essentially creates a “RESTful mainframe,” providing REST-based API governance for z/OS-based Web services. This should provide a ready solution for mobile applications in particular as they hunt for rapid, flexible access to backend enterprise systems, while at the same time giving data center executives advanced service discovery and impact analysis across mainframe, distributed and third-party infrastructure.
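To make the idea concrete, the sketch below shows roughly what consuming such a service might look like from the application side: a mobile or web client calling a REST endpoint that an API gateway exposes in front of a z/OS-hosted service. The host name, path and token are hypothetical placeholders, not SOA Software’s actual interface; any real gateway product would publish its own endpoints and authentication scheme.

```python
# Minimal sketch, assuming a hypothetical API gateway that fronts a
# z/OS-hosted transaction with a plain REST endpoint. Names below are
# placeholders for illustration only.
import requests

GATEWAY_URL = "https://api.example.com/mainframe/v1"  # hypothetical gateway in front of z/OS
API_TOKEN = "replace-with-real-token"                 # placeholder credential


def get_account(account_id: str) -> dict:
    """Fetch an account record that ultimately resolves to a mainframe-backed service."""
    response = requests.get(
        f"{GATEWAY_URL}/accounts/{account_id}",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Accept": "application/json",
        },
        timeout=10,  # mobile clients should fail fast rather than hang on a slow backend
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(get_account("12345"))
```

The point of the pattern is that the client neither knows nor cares that the data behind the endpoint lives on big iron; the gateway handles governance, discovery and protocol translation, which is precisely the kind of abstraction that pulls the mainframe into the API economy.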
History, then, does in fact repeat itself, although never in exactly the same way. Today’s movement toward cloud computing and infrastructure convergence is happening at a scale that early mainframe developers could never have conceived of. But if we follow this trend to its ultimate conclusion, we could find ourselves accessing an open, federated data environment that literally stretches around the globe.
If that is the case, is it too early to start talking about the world as the new mainframe?