Slowly but surely, the relationship between applications and processors is being redefined. For the most part, IT organizations have deployed general-purpose processors optimized to drive transaction processing workloads. But over the years, the types and classes of application workloads have diversified with the addition, for example, of analytics applications and frameworks for managing data such as Hadoop.
In recognition of that reality, server vendors have been talking up the importance of thinking more granularly in terms of managing application workloads rather than just applications running on a server. The latest example of that thinking is Project Gemini, a new initiative launched by Hewlett-Packard today. Rather than relying on classic x86 server processors, Project Gemini is a server architecture that allows HP to federate workloads across a massive number of low-energy processors.
The first processor chosen by HP for use in Project Gemini is a next-generation 64-bit Intel Atom processor, code-named Centerton. Paul Santeler, vice president and general manager for the Hyperscale Business Unit in HP’s Industry Standard Servers and Software group, says HP intends to mix and match processors across a Project Gemini server as part of an effort to make it easy to match specific processors with the application workloads they are best suited to run.
Obviously, a whole lot of IT automation is going to be required to make this brave new world of server architecture a reality. But HP claims it will have Project Gemini servers in production by the end of the year, largely because they are the first commercial instance of the Project Moonshot initiative that HP launched last year.
HP is not the only server vendor talking about the growing strategic importance of application workloads. IBM has been promoting a similar concept under the banner of Smarter Computing. But HP is taking that notion to the next logical extreme in that Project Gemini is a scale-out architecture that allows IT organizations to simply add capacity by plugging in another low-energy processor cartridge that is optimized for a particular application workload.
In short, servers are being reinvented. There may still be a need for servers as we know them today, but increasingly it’s starting to look like the servers of tomorrow will be much more modular systems that give IT organizations unprecedented control over exactly which application workloads run where at any given time.