I spent much of this week as the guest of IBM at Watson Research. As an ex-IBM employee, it brought back memories of the way IBM was and allowed me to think about what IBM was becoming. As I sat through the presentations and compared my perception of IBM now with my perception of IBM when I worked there in the early '90s, I saw a lot of similarities, both good and bad, and a lot of important differences.
The Systems Technology Group (STG) was the old heart of IBM, which started out as a hardware company and was built on the concept of the mainframe. The mainframe concept dominated computing for nearly three decades and, while renamed, is pertinent again with the emergence of cloud computing, a concept that has a lot in common with the mainframe model.
The Technology Cycle
We often talk about how things move in cycles; trends rise, fall off, then resurface. Economies of scale are applied to everything from manufacturing to farming. This concept made the mainframe the success it was. Networking limitations forced the market into a more distributed model, but with massive improvements in network capacity and an equally massive reduction in network latency -- which often turned out to be a bigger problem than capacity -- we are again coming back to the idea that we can centralize and reduce the complexity of an environment.
This returns the market to one that could favor customized servers with massive I/O capability. We might call the result servers, and that is what they are, but they are also much closer to mainframes than to the UNIX servers that first defined the term. Mainframes had relatively low processing capacity, but they could run multiple jobs at once. When we moved to servers and migrated away from batch computing toward real-time, multitasking workloads, reliability concerns led to applications often being isolated on their own dedicated servers.
Virtualization -- a hypervisor running multiple jobs or applications on a single server -- is increasingly common, but it calls for hardware that blends the concepts of the mainframe with those of the server. That is the unique IBM System Z series, which has turned out to be one of IBM's most successful products. IBM demonstrated how one large refrigerator-size Z Series server could replace a truckload of rack-mounted traditional servers.
Adapting to an Intel/Windows/Linux World
When I was at IBM, the company was all about its own platforms. It has a well-earned reputation for supporting these platforms long past their "use by" date. In the '90s, IBM was even trying to bring out its own proprietary alternative to Windows (OS/2) and had the most proprietary version of UNIX (AIX). In addition, it was all about proprietary hardware and even had its own versions of Intel processors. To say IBM was anti-standard would be an understatement; my own executive briefing at IBM back then showcased a massively anti-standard policy. Boy, what a difference a decade and a half makes!
IBM now has one of the largest service groups actively supporting Windows Server. It is considered the leading driver of Linux and is working to be the best at Intel-standard servers. We'll get to the Intel servers in a moment, but at this event IBM launched its first Linux-standard Z Series product. This was a big deal because the Z Series was the last truly proprietary computer product from IBM.
Today's IBM, while still leading the world (according to its presentation) in patents issued, is no longer constantly trying to reinvent the wheel. It is trying to be the best value-add vendor on top of the standards the industry has selected. Judging by its strong financials, it does that well -- a dance between its own technology and the economy-of-scale advantages of common technology platforms, creating products that balance cost-effectiveness with positive differentiation.