You know a technology has finally arrived when it gets its own trade show.
That was the case last week at the Multicore Expo in San Jose, Calif., where experts proclaimed that the new chips represent a fundamental shift in computing technology that will shake the industry to its architectural core. But while many predictions of this sort are a lot of hot air, it just so happens that, in this case, the experts are probably right.
Not only is multicore likely to shift the way hardware is designed and utilized in networked settings, but software designers will likely have to embrace a number of new (or make that old) development concepts, such as parallelism and multithreading.
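To make the idea concrete, here is a minimal sketch (in Python, purely illustrative — the function names and chunking scheme are our own) of the kind of parallelism multicore chips reward: splitting independent work across worker processes so each piece can run on its own core.

```python
# Illustrative only: divide independent work across processes so a
# multicore chip can run the pieces simultaneously.
from concurrent.futures import ProcessPoolExecutor


def sum_of_squares(chunk):
    # Each worker handles one independent slice of the data.
    return sum(n * n for n in chunk)


def parallel_sum_of_squares(data, workers=4):
    # Deal the input out round-robin, one chunk per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # map() farms each chunk out to a separate process;
        # on a quad-core, all four can execute at once.
        return sum(pool.map(sum_of_squares, chunks))


if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))
```

The hard part, as the tool vendors will tell you, isn't writing code like this — it's that most real workloads don't split into neatly independent chunks, which is exactly why parallel programming tools matter.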
And designers show no signs of letting up. Indeed, with power consumption now a major concern, simply increasing the clock speed on each new generation of chips doesn't cut it anymore. The majority of system-on-chip (SoC) designers are now working on multicore designs, and many are already leaving dual cores in the dust in favor of quads or higher.
The manufacturers don't seem to be slowing down either. Intel is looking to bring back hyperthreading and integrate system components like memory controllers and direct core-to-core links on the Nehalem chip due out next year. Another goal is to allow designers to choose from a variety of designs and materials within the same chip family, providing whole new layers of scalability.
Still, all is not wine and roses in the land of multicore. Critical technologies needed to deliver the chips' full benefits are still lacking, according to this EE Times piece. Chief among them are parallel programming tools and debugging systems.
The bright side to all this is that the biggest hurdles in multicore development come at the beginning. Once you've designed a tool that can handle two cores, it's not a big jump to four -- which is fortunate, because the experts are already talking about systems with 1,000 or more cores in the near future.