A few weeks ago I wrote about the growing urgency for new parallel programming models that will allow applications to truly take advantage of multicore processing, and the slow pace of progress toward developing them.
Well, the latest research on that subject isn't likely to allow either chip manufacturers or software developers to sleep any better. At last week's Multicore Expo, a show largely devoted to the embedded market, Venture Development Corp. reported that more than half of embedded system developers will be using multicores by year-end, but only 6 percent of software vendors are targeting parallel processing, with an expected rise to only 40 percent by 2011. The PC market is already in dire straits, with multicores making up about 40 percent of all Intel processors today, a figure that will rise to 95 percent by 2011.
One of the main problems for multicore development is the fact that languages like C and C++ don't lend themselves to parallel processing very well. But since an entirely new programming language isn't likely to appear in the next few months (or maybe years), the search is on for some kind of workaround.
National Instruments says it may have the answer in its LabVIEW graphical programming language, said to ease multicore programming hassles through automatic mapping onto multiple threads. LabVIEW features an execution system that dynamically scales the number of threads based on the number of processors available, whether on a desktop or an embedded system. The graphical model can also incorporate legacy C and C++ code. The company is sponsoring, along with Intel, a series of workshops on the system across the U.S. and Canada.
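LabVIEW's dataflow model is graphical, so there's no text equivalent to show, but the core idea of sizing a worker pool to the processors actually present can be sketched in a few lines of Python. This is a loose illustration of the concept, not anything from LabVIEW itself:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Size the pool to the number of processors available at run time,
# roughly mirroring how LabVIEW's execution system scales its threads.
num_workers = os.cpu_count() or 1

def square(x):
    return x * x

# Independent work items get farmed out across the pool in parallel.
with ThreadPoolExecutor(max_workers=num_workers) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The point is that the application code never hard-codes a thread count; the same program uses one core or sixteen without modification.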
But probably the most significant initiative to take on this problem is the Universal Parallel Computing Research Center, split between the University of California, Berkeley, and the University of Illinois, Urbana-Champaign, funded in large part by Intel and Microsoft. TG Daily has a good overview of the project by Rajesh Karmani, a UI grad student, who argues that one hopeful area of study is the "actor model of programming" that uses asynchronous messaging technology to communicate between concurrently executing objects.
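To make the actor idea concrete: each actor owns its state privately and receives asynchronous messages through a mailbox, so no locks are needed on the state itself. Here's a toy sketch in Python (threads plus a queue; not the center's actual design):

```python
import queue
import threading

class Actor:
    """Minimal actor: private state plus a mailbox, handled one message at a time."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.total = 0
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):
        # Asynchronous send: the caller never blocks on the actor's work.
        self.mailbox.put(message)

    def stop(self):
        self.mailbox.put(None)   # "poison pill" shuts the actor down
        self._thread.join()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:
                break
            self.total += msg    # only this thread ever touches the state

adder = Actor()
for n in (1, 2, 3):
    adder.send(n)
adder.stop()
print(adder.total)  # 6
```

Because state is confined to one actor and all interaction goes through messages, the model sidesteps the shared-memory races that make C and C++ parallelism so treacherous.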
While messaging technology has served parallel programming needs in high-performance computing in the past, it might not work out very well for standard multicore environments, says Caltech Ph.D. Wayne Pfeiffer. Messaging may work well outside the shared memory unit of a multicore, but within that memory, a more transactional approach will be necessary. The trick will be uniting the internal transactional system and the external messaging system under a single parallel model.
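The "transactional" idea Pfeiffer alludes to is optimistic: a thread snapshots shared state, computes speculatively, and commits only if nothing changed in the meantime, retrying otherwise. The following is a toy Python sketch of that commit-or-retry loop, not a real software transactional memory:

```python
import threading

class TransactionalCell:
    """Toy optimistic transaction on one shared value: snapshot, compute,
    then commit only if no other thread committed in between."""
    def __init__(self, value=0):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()

    def atomically(self, fn):
        while True:
            with self._lock:
                snapshot_value, snapshot_version = self.value, self.version
            new_value = fn(snapshot_value)        # work happens outside the lock
            with self._lock:
                if self.version == snapshot_version:  # nobody interfered: commit
                    self.value = new_value
                    self.version += 1
                    return new_value
                # stale snapshot: another thread committed first, so retry

cell = TransactionalCell()

def worker():
    for _ in range(1000):
        cell.atomically(lambda v: v + 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.value)  # 4000
```

A plain unsynchronized counter could lose updates under this load; the version check guarantees every increment lands, which is the property a transactional layer inside the shared-memory unit would provide.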
It's a shame that the chip industry and the programming industry took so long to address this problem and are now having to scramble for a solution. But the good news is that some of the best minds in both businesses recognize the seriousness of the issue.