Core Wars: The Battle to Change the Virtual World


This week the core wars got started, with AMD launching a new Opteron part with a whopping 12 cores that, amazingly enough, fits into the same thermal envelope as (in other words, runs as cool as) its prior four-core offering. This led Arthur Cole to ask "Are We Ready for a Multicore Universe?" Intel responded the following day with an eight-core part with hyperthreading (16 virtual cores) that likewise fits in the same thermal envelope as its old four-core offering. Intel's part, notably, targeted Itanium-class loads and was clearly designed to take out SPARC.


The products are well differentiated: for dedicated loads, 12 real cores trump 16 virtual ones, but where massive threading is required, virtual cores do just fine. Much will depend on how the OEMs position their solutions. Both products will find unique homes in the high-performance computing space, and things will get even more interesting when AMD rolls out Fusion next year, blurring the line on what we even call a "core" as CPU and GPU computing collide and core counts go vertical.


Let's talk about cores and what is coming.


Core Promise and Problem

The problem with MHz is that returns diminish the faster you make a chip go, because the energy has to be consumed in an ever-smaller space to get the speed up. Gordon Moore's famous law actually describes transistor density rather than clock speed, though the two were linked in practice for years. The expectation was that if clock speeds kept climbing, power density would eventually approach something hotter than the surface of the sun, requiring the kind of shielding normally reserved for nuclear power plants. Not particularly practical, and I, for one, wouldn't want that on my desk or in my building.
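To make those diminishing returns concrete, here is a back-of-the-envelope sketch using the classic CMOS dynamic-power relation, P = C x V^2 x f. The capacitance and voltage figures below are illustrative assumptions, not numbers from either vendor:

```python
# Illustrative sketch (not from the article): classic CMOS dynamic power,
# P = C * V^2 * f. Raising clock frequency typically also requires raising
# voltage, so power grows much faster than the clock does.

def dynamic_power(capacitance_f, voltage_v, freq_hz):
    """Dynamic switching power in watts."""
    return capacitance_f * voltage_v**2 * freq_hz

# Hypothetical baseline: 1 nF effective switched capacitance, 1.0 V, 2 GHz.
base = dynamic_power(1e-9, 1.0, 2e9)      # 2.0 W

# Doubling the clock while doubling voltage to sustain it:
doubled = dynamic_power(1e-9, 2.0, 4e9)   # 16.0 W (8x the power for 2x the speed)

print(base, doubled)
```

That eight-fold power jump for a doubled clock, all dissipated in the same tiny die area, is why the industry turned to more cores instead of more megahertz.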


So the industry decided that instead of constantly going faster, it would divide the load across multiple processors: first by putting more than one in a server or PC, then by putting multiple cores into one processor, in effect turning a single processor into two. Core counts then jumped to four, and AMD found a way to do three.


The problem, and the reason a three-core processor even made sense, is that applications tended to be single-threaded; they could only do one thing at a time. That meant only one core took the load, and because each core in a multicore part was clocked slower than a single-core part would have been, that new PC, workstation or server suddenly underperformed the old one, or was barely faster. Loads could be divided between the cores, but not easily or well, which meant that often only a fraction of the available power was being used. On a PC you rarely needed more than two cores, which made a three-core part adequate.


Applications would need to be rethought, but developers lacked the skills, and schools lacked the curriculum, to teach multi-threaded programming. The industry moved to fix that, but rewriting applications would take years.
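To show the shape of the rewrite developers faced, here is a minimal, hypothetical Python sketch of the same job expressed single-threaded and then split across worker threads; the function and data are invented for illustration:

```python
# A minimal sketch of the single-threaded-to-multi-threaded rewrite.
# The workload here is illustrative, not from any real product.
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    """Stand-in for a CPU task: sum a slice of data."""
    return sum(chunk)

data = list(range(1_000_000))

# Single-threaded: one core does everything while the others sit idle.
single = work(data)

# Multi-threaded: decompose the data into four independent chunks,
# one per worker, so multiple cores can share the load.
chunks = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    multi = sum(pool.map(work, chunks))

assert single == multi  # same answer, but the load can now spread out
```

A caveat: in CPython the global interpreter lock limits true parallelism for pure-Python arithmetic, so real CPU-bound work usually goes through multiprocessing or native libraries. The structure of the rewrite is the same either way: the work has to be decomposed into independent chunks before extra cores help at all.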


Virtualization: The Big Change

VMware entered the market, followed by Microsoft's Hyper-V, and these platforms could create virtual servers and PCs on multicore machines, allowing one multicore product to take the place of several servers. Suddenly cores were important: the more you had in a machine, the more servers you could consolidate into one, with benefits that included reclaimed floor space, lower maintenance costs and higher utilization of the hardware you'd purchased.


There has been a resurgence of thin client activity as well, because companies like VMware and Microsoft are demonstrating that large numbers of PC loads can now run on servers with much less impact on performance.


On workstations, cores are being put to heavy use in movie creation. DreamWorks recently demonstrated a project in which high-definition video could be created in real time on a new HP/Intel multicore platform; once in production, that capability will take months and millions of dollars out of movie creation. At the AMD launch event, HP boasted that one of its new AMD-based servers could consolidate more than 20 aging servers and deliver full cost recovery in 60 days, a payback rate previously thought impossible.


Core Wars

This sets the stage for what will likely be a battle royale among the chip companies and the OEMs to see who can bring the most cores and the best-tuned solutions to market. And it won't end with those companies: the likes of VMware and Microsoft will fight industry-wide to see who can make the best use of this new equipment.


The benefits are more than just monetary, though. These products are more energy efficient and are vastly easier to manage than those they replace. The keen competition will keep focus on prices as well. Not a bad trend, all in all.


Wrapping Up: Looking Forward

The big coming inflection point will be AMD's Fusion effort, which will play across a number of platforms from desktops to servers and blend GPU and CPU loads. This could be particularly interesting for thin client deployments and scientific endeavors, which have already started to embrace GPU computing. That will be an even more disruptive change than moving from MHz to cores, and it should start next year.


One big change is that these processors can now take on high-end computing loads and move into space that was uniquely owned by RISC parts and Itanium. The remaining high-end systems that were easy to move have already been moved; what's left will take decades. So even though we can now say Itanium will die, that death could be several decades out, because the savings pale against the cost and risk of migrating these remaining loads.


Finally, the kinds of things that will result from a move to massive multicore and blended GPU and CPU computing range from massive desktop appliance computing to more realistic virtual worlds (gaming and simulation) to cloud computing at levels we can't yet imagine. A new world is coming; you have to wonder whether we are ready for it.


For some reason I have the "Star Wars" theme playing in the back of my head...