IBM's Parallel Vision of Cloud Computing

Michael Vizard

In cloud computing, one of the most important technologies in the IBM arsenal might turn out to be a body of code that is more than a decade old.

The IBM General Parallel File System (GPFS) is a shared-disk file system currently used mostly in large clusters. But according to Jai Menon, what most people fail to appreciate about GPFS is the parallelism built into the file system itself.
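To make the idea concrete, here is a minimal sketch, not GPFS code, of the kind of parallelism a shared-disk file system exploits: a file's blocks are striped round-robin across several disks, so a single read can fan out to all of them at once. The stripe size and the in-memory "disks" are illustrative assumptions.

```python
# Sketch only: stripe a file across simulated disks, then read all
# stripes concurrently and reassemble them in order.
from concurrent.futures import ThreadPoolExecutor

STRIPE_SIZE = 4  # bytes per stripe (tiny, for illustration)

def stripe(data: bytes, n_disks: int) -> list[list[bytes]]:
    """Round-robin the file's stripes across n_disks simulated disks."""
    disks = [[] for _ in range(n_disks)]
    for i in range(0, len(data), STRIPE_SIZE):
        disks[(i // STRIPE_SIZE) % n_disks].append(data[i:i + STRIPE_SIZE])
    return disks

def parallel_read(disks: list[list[bytes]]) -> bytes:
    """Issue one read per disk concurrently, then reassemble stripes."""
    with ThreadPoolExecutor(max_workers=len(disks)) as pool:
        # The identity lambda stands in for real per-disk I/O.
        chunks = list(pool.map(lambda d: d, disks))
    out, i = [], 0
    while any(i < len(d) for d in chunks):
        for d in chunks:  # take round i from each disk, in disk order
            if i < len(d):
                out.append(d[i])
        i += 1
    return b"".join(out)
```

With real disks, each of those reads proceeds independently, which is why aggregate bandwidth grows with the number of disks the file is striped across.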

Menon says IBM envisions a future of cloud computing in which different data centers are dedicated to performing one type of task or running certain classes of application workloads. To make that vision a reality, IBM sees GPFS, along with other orchestration technologies that take advantage of parallelism, playing the defining role.

In the shorter term, we'll probably see parallelism play out on processors first. For example, multicore processors make it much more feasible to execute program code in parallel. That concept will then scale up to the data center and eventually out across the cloud, said Menon.

Menon says IBM is already moving down this path with its evolving Flex architecture, which is designed to run application workloads on the most efficient architecture available within a data center environment made up of mainframes, RISC and Intel-class servers.
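The placement idea can be reduced to a very small sketch. This is hypothetical and bears no relation to actual Flex code; the workload classes and platform names are invented for illustration.

```python
# Hypothetical sketch: route each workload class to the platform
# assumed to run it most efficiently, defaulting to commodity x86.
PLACEMENT = {
    "batch-transaction": "mainframe",
    "database": "risc",
    "web": "x86",
}

def place(workload_class: str) -> str:
    """Return the preferred platform, falling back to x86."""
    return PLACEMENT.get(workload_class, "x86")
```

A real scheduler would weigh live utilization, licensing, and data locality rather than a static table, but the matching of workload class to architecture is the core of the concept.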

IBM is in the early stages of redefining enterprise computing using a more holistic approach to managing all the systems and applications involved. Whether it can steal a march on competitors in this regard remains to be seen.

Oct 29, 2010 8:10 PM, Luke Vorster says:
"most efficient architecture available within a data center environment"... (really?) This can only be accepted if proven - 'redefining enterprise computing' is a tall phrase that doesn't mean much without a radical paradigm shift that changes the way people think (like, since the last 20 years)... I doubt this is even at the stage of a twinkle in IBM's eye... no free lunch theorem (or almost no free lunch theorem) is what I believe the major challenge to be... In my humble (but adamant) opinion, I can't see it happening on any particular platform... what is need is a new regime that abstracts operating systems, hardware computation platforms, and one-fits-all products like Flex, so far away that the 'machine' becomes a mathematical model... Algebraic relationships between various computation devices could sidestep so many of the 'weak' heuristics, and empirical methods, that are used to choose the factors of scalable, robust, HPC software solutions... Maybe IBM should think further into the future... like, centuries rather than the next financial quater. An algebra that spans across a number of topological spaces would be a starting point - then, map the platforms to the spaces, and use the algebra to decide how an algorithm would perform on one of the computation platforms... This whole cloud industry is about to be reclaimed by the custodians of cyberspace - and they're not all human ;) Reply
