The Next Step in Multicore

Arthur Cole

New multicore designs continue to roll off the drawing board even as the call for more parallel-friendly applications grows louder.

 

Technology prognosticators are almost unanimous in the opinion that the single-core processor will very shortly be a thing of the past, save perhaps for the most specialized operations. Multicore will be the ubiquitous choice for everything from PCs and servers to game systems and mobile devices.

 

At the International Solid-State Circuits Conference this week, Sun will unveil the first design details of its 16-core "Rock" processor, an architecture that runs two simultaneous threads per core, allowing it to process 32 threads and 32 scout threads at once. Although each core is slower than a typical x86 core, Sun says the design scales linearly, and its shared cache architecture lowers energy consumption even as more cores are added to the die.

 

At the same conference, Intel is set to debut the four-core "Tukwila" Itanium processor, which packs more than 2 billion transistors onto a roughly 700 mm² die. The 2 GHz device holds more than 30 MB of cache and sports the QuickPath processor interconnect that is expected to show up on Xeon chips later this year. On the downside, power consumption on Tukwila comes in at 170 W.

 

Intel is also opening up a conversation on programmable multicore architectures with a series of white papers describing the company's work on simplified parallel programming models likely to emerge in future platforms. One such approach is the "data center on a chip" model, which would pack 100 or more processors onto a single chip using a 32-core tera-scale processor design plus techniques like simultaneous multithreading (SMT) to manage four threads per core. Other approaches include on-die integration of multiple cores, memory controllers, bridges and graphics engines, as well as on-die application acceleration.


 

Still, Intel continues to warn application developers that they will have to start designing with parallel processing in mind if they want better performance in the future, or even to match the performance of today's single-core systems. Because heat and power consumption keep clock speeds from climbing much further, multicore chips run each core at a lower clock rate on the premise that several slower cores can outperform one fast one. But that only works if the software can spread its work across cores; an application confined to a single core may actually run slower.
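
To make that concrete, here is a minimal sketch of what "designing with parallel processing in mind" can look like in Java: a summation split across a thread pool sized to the number of available cores. The class name ParallelSum and the array-summing workload are my own illustration, not anything published by Intel or Sun.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        // Hypothetical CPU-bound workload: sum a large array of doubles.
        final double[] data = new double[16_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = Math.random();
        }

        // Size the thread pool to the cores the JVM reports; on a
        // single-core machine this degenerates to one worker.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Split the array into one chunk per core and sum each chunk
        // on its own thread.
        int chunk = data.length / cores;
        List<Future<Double>> partials = new ArrayList<>();
        for (int c = 0; c < cores; c++) {
            final int start = c * chunk;
            final int end = (c == cores - 1) ? data.length : start + chunk;
            partials.add(pool.submit(() -> {
                double sum = 0.0;
                for (int i = start; i < end; i++) {
                    sum += data[i];
                }
                return sum;
            }));
        }

        // Combine the per-thread partial sums.
        double total = 0.0;
        for (Future<Double> f : partials) {
            total += f.get();
        }
        pool.shutdown();
        System.out.println("sum = " + total);
    }
}

The same loop written as a single-threaded pass would leave all but one core idle, which is exactly the scenario the chip makers are warning about.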

 

At this point, it's hard to imagine that any software developer is not aware of the multicore challenge. But it's fair to say that few have openly publicized new "multicore friendly" versions of their applications. If they don't start to show up soon, there could be a lot of angry customers out there.



Feb 5, 2008 7:06 AM Jim Falgout says:
Good comment about threads. Even in Java, which has very good support for threads, writing multi-threaded applications is not easy. I'm working on a framework called DataRush that allows developers to build data-oriented applications without ever having to worry about threads, locks or deadlock contention. It's in beta and free for download. Check it out and let me know what you think: www.pervasive.datarush.com
Feb 5, 2008 12:26 PM Louis Savain says:
Great article. We should all keep an eye on what's happening in multicore technology because things are bound to change drastically in the near future. In my opinion, companies like Intel, Sun, AMD, IBM and the others are trying to fit a square peg into a round hole. Threads are not the answer to parallel processing and programming, and they know it. Threads are the second worst invention in computing. They are hard to understand, non-deterministic and prone to errors and security flaws. There is a better way to do parallel processing but, unfortunately, the CPU manufacturers are not listening. Programmers have been successfully simulating parallel processes without threads for decades. We do it in neural networks, cellular automata, simulations, video games, spreadsheets, etc. It's not rocket science. What is needed is CPU support at the instruction level. Anyone who is interested in the future of multicore and parallel computers should read this article: "Parallel Programming, Math and the Curse of the Algorithm" http://rebelscience.blogspot.com/2007/10/parallel-programming-math-and-curse-of.html Threads should be declared illegal. :-)
