Amid all the talk about virtualization and the cloud, there is the larger concept of the "dynamic infrastructure."
It's here that most of the benefits of the latest technology advancements come to fruition: on-demand, global-scale load balancing; flexible resource allocation; near-perfect system utilization; and greatly simplified management and execution. This is where the old way of thinking about the data center as a collection of discrete hardware and software components gives way to a new approach built on shared, dynamically allocated resource pools.
But how exactly is all this supposed to work? Surely, somewhere in all of this, there are still boxes processing data and running the applications needed to get the work done, right?
Of course -- but the thrust of the dynamic infrastructure is not the type of hardware in place, but rather the way in which humans will interact with technology.
One of the leading voices for this concept is Lori MacVittie, technical marketing manager for application services at F5. Her take is that the dynamic infrastructure improves both technical and process efficiency through advanced automation and improved application networking capabilities. Using standard APIs that provide access to a given infrastructure's control plane, processes can be automated and integrated into existing management systems, providing a level of consistency that can't be matched by manual operation. It also goes a long way toward overcoming the burgeoning management responsibilities that come with rapidly increasing numbers of virtual machines.
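As a rough illustration of the kind of control-plane automation MacVittie describes, the sketch below builds a request for registering a new virtual machine with a load-balancing pool through a hypothetical REST API. The base URL, endpoint path and field names here are assumptions for illustration only, not any vendor's actual interface -- but the pattern is the point: a management system generates these calls automatically for every VM an orchestrator spins up, rather than an operator configuring each one by hand.

```python
import json

def make_pool_member_request(base_url, pool, host, port):
    """Build the URL and JSON body for adding a new VM to a
    load-balancing pool via a hypothetical control-plane REST API.
    (Endpoint path and field names are illustrative, not a real
    vendor's API.)"""
    url = f"{base_url}/pools/{pool}/members"
    body = {"address": host, "port": port, "state": "enabled"}
    return url, json.dumps(body)

# An orchestration system would call this once per new VM,
# then POST the result to the infrastructure's control plane.
url, body = make_pool_member_request(
    "https://lb.example.com/api/v1", "web-pool", "10.0.0.12", 8080)
```

Because the request is generated programmatically, every pool member is registered the same way every time -- the consistency MacVittie argues manual operation can't match.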
You would think that massive pools of dynamically allocated processing capability would naturally imply large blocks of commodity servers, but MacVittie says no. In fact, as the cloud becomes more prevalent, the server becomes more irrelevant, at least from an application-delivery standpoint. It's much better to view the network as a collection of resources and nodes, because that places the emphasis not on provisioning and maintaining the compute power needed to run the application (remember, those resources should always be available somewhere), but rather on your ability to deliver those applications to the people who need them at the appropriate service levels.
But even with the server diminished in this way, there are still some key factors to consider when deciding which server architecture to adopt for this brave new world. Research and Markets contends that enterprises that have been scrapping their mainframes in favor of distributed systems could be in for a world of hurt. Only mainframes, the firm argues, have the memory, cache and reallocation capabilities to prevent the kinds of outages that can bring a cloud service down. And on a per-computing-unit basis, mainframes are also a bargain compared to distributed systems.
Dynamic infrastructure is about a lot more than just servers, however. To be effective, it will have to knit storage and networking into a cohesive environment capable of supporting vast numbers of applications and users.
That kind of integration isn't going to happen overnight, nor is it simply the by-product of virtualization and cloud computing.
The goal is certainly worthwhile, but I expect the journey will be rather difficult.