Parallel Processing Takes Center Stage

Arthur Cole

Virtualization, cloud computing and other headline-grabbing developments are set to vastly remake the enterprise in the next few years. But there is one technology that trumps them all, although it has gotten relatively little attention to date.

Parallel processing has been slowly dawning on the IT consciousness since the advent of multicore silicon. But only lately has the realization sunk in that the computer industry is about to undergo a radical shift in the very nature of CPU-based processing itself.
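
To make the shift concrete, here is a minimal sketch of the same summation written sequentially and then fanned out across cores with Java's standard executor framework. The class name and the fixed chunking strategy are illustrative, not drawn from any vendor's toolkit.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {

    // The familiar sequential loop: one core does all the work.
    static long sequentialSum(long[] data) {
        long total = 0;
        for (long v : data) {
            total += v;
        }
        return total;
    }

    // The data-parallel version: split the array into one chunk per core,
    // sum the chunks concurrently, then combine the partial results.
    static long parallelSum(final long[] data) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        int chunk = (data.length + cores - 1) / cores;
        List<Future<Long>> partials = new ArrayList<Future<Long>>();
        for (int i = 0; i < data.length; i += chunk) {
            final int from = i;
            final int to = Math.min(i + chunk, data.length);
            partials.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (int j = from; j < to; j++) {
                        sum += data[j];
                    }
                    return sum;
                }
            }));
        }
        long total = 0;
        for (Future<Long> f : partials) {
            total += f.get(); // blocks until that chunk is done
        }
        pool.shutdown();
        return total;
    }
}
```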

HPCWire's Michael Feldman lays out a good description of the likely permutations here. Once you accept the notion that the future will be dominated by general-purpose multicores and data-parallel engines, all the recent moves by the chip companies start to make sense: AMD's purchase of ATI, Intel's interest in the Cell processor and high-end visual systems -- even Nvidia's new focus on HPC technology.

Sun Microsystems is also making a play, coming up with a new transactional memory scheme for its Rock processor. The Hybrid Transactional Memory (HTM) system is said to provide an easier development environment than the multi-threading approaches being investigated by Microsoft and Intel.
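
The hardware details of Sun's mechanism aren't spelled out here, but the development-model difference is easy to illustrate. The sketch below mimics transactional semantics in plain Java with an optimistic read-compute-commit-retry loop; real HTM would let the hardware detect the conflict and roll back automatically, sparing the programmer even this much ceremony.

```java
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticCounter {
    private final AtomicLong balance = new AtomicLong(0);

    // Lock-based style would be: synchronized (lock) { balance += amount; }
    // Transactional style: read a snapshot, compute the new value, and
    // commit only if no other thread changed the state in the meantime;
    // on a conflict, simply retry.
    void deposit(long amount) {
        while (true) {
            long snapshot = balance.get();
            long updated = snapshot + amount;
            if (balance.compareAndSet(snapshot, updated)) {
                return; // commit succeeded
            }
            // another thread committed first; loop and retry on fresh state
        }
    }

    long balance() {
        return balance.get();
    }
}
```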

And speaking of Microsoft, could it be that parallel processing holds the key to the company's forays into the cloud? ZDNet's Mary Jo Foley certainly thinks so, having read up on the forthcoming SCOPE (Structured Computations Optimized for Parallel Execution) language. SCOPE is expected to play a major role in Cosmos, Microsoft's back-end cloud storage system. And there is also talk of a parallel approach to the Midori project, the rumored successor to Windows.
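
Microsoft has released few SCOPE specifics, so the sketch below only illustrates the underlying pattern a declarative script of that kind would compile into: the same group-and-count query run independently on each data partition, with the partial results merged at the end. All class and method names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionedCount {

    // Phase 1: each worker counts keys within its own partition.
    static Map<String, Integer> countPartition(List<String> partition) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String key : partition) {
            Integer c = counts.get(key);
            counts.put(key, c == null ? 1 : c + 1);
        }
        return counts;
    }

    // Phase 2: run phase 1 on every partition concurrently, then merge
    // the partial counts -- the scatter/gather shape a parallel query
    // engine applies across cores or cluster nodes.
    static Map<String, Integer> countAll(List<List<String>> partitions)
            throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(Math.max(1, partitions.size()));
        List<Future<Map<String, Integer>>> partials =
                new ArrayList<Future<Map<String, Integer>>>();
        for (final List<String> p : partitions) {
            partials.add(pool.submit(new Callable<Map<String, Integer>>() {
                public Map<String, Integer> call() {
                    return countPartition(p);
                }
            }));
        }
        Map<String, Integer> merged = new HashMap<String, Integer>();
        for (Future<Map<String, Integer>> f : partials) {
            for (Map.Entry<String, Integer> e : f.get().entrySet()) {
                Integer c = merged.get(e.getKey());
                merged.put(e.getKey(),
                        c == null ? e.getValue() : c + e.getValue());
            }
        }
        pool.shutdown();
        return merged;
    }
}
```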

The parallel angle also puts a new spin on Microsoft's recent purchase of data warehouse company DATAllegro. DATAllegro's platform offers massively parallel processing (MPP) capability, matched only by market leader Teradata and far outclassing the offerings of rivals such as Oracle. What's more, DATAllegro's brand of MPP runs on standard Intel and AMD multicore hardware.
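
Neither vendor publishes its internals, but the core idea behind MPP on commodity multicore boxes is straightforward: rows are spread across nodes by a distribution key so that every node can scan and aggregate its own share in parallel. A minimal, entirely hypothetical routing sketch:

```java
public class HashDistribution {

    // Route a row to a node by hashing its distribution key. Masking the
    // sign bit keeps the modulus non-negative for any hash value.
    static int nodeFor(String distributionKey, int nodeCount) {
        return (distributionKey.hashCode() & 0x7fffffff) % nodeCount;
    }

    public static void main(String[] args) {
        // e.g., an 8-node grid: the same key always lands on the same
        // node, so joins and aggregations on that key stay node-local
        System.out.println(nodeFor("customer-1042", 8));
    }
}
```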

How much impact will this have on the enterprise? Well, that depends. The move to parallel architectures is likely to be costly and time-consuming, requiring skills that are difficult to master and expensive to hire. On the other hand, if parallelism is going to be part and parcel of the cloud anyway, it may make more sense to offload your workloads there and let your provider worry about it.

Comments

Aug 5, 2008 12:48 PM Darryl McDonald says:
It's nice to see a flurry of new interest in parallel processing, something Teradata has built into the database since its inception. Thanks for recognizing Teradata's leadership, but really, DATAllegro and others don't come close to our architecture. Some others try adding a parallel layer on top of a transactional database. And Teradata's "brand of MPP" has been standard Intel technology since our inception as well, now Intel multicore and running on Linux or Windows. The difference is that we can leverage the same parallelism inside a multicore Intel server as well as across Intel servers in an MPP architecture. My new blog talks about the importance of substantiating vendor claims -- I hope you'll take a look at the link below.

Darryl McDonald
Chief Marketing Officer, Teradata Corporation
http://www.teradata.com/darryl/

Aug 18, 2008 5:43 AM Jeff Wells says:
From my perspective, the computing industry is at a massive inflection point as it comes to terms with parallel processing. The problem is that the industry has benefited from 20 years of terrific CPU development and common standards. Current code and coding methods are highly developed but entirely sequential. Taking advantage of parallelism therefore requires a paradigm shift in development. Coders must be able to carry out many functions in parallel. They must break tasks down into finer grains and closely study how to implement algorithms in this entirely new parallel style.

At Exegy, we're using FPGAs to process market data from stock and commodities markets much faster than is possible on standard general-purpose platforms. But it is very clear that the secret sauce is in the relationship between software and hardware, as opposed to a robotic utilization of hardware. We've also managed to get extraordinarily high throughput numbers on the Exegy machines, and this is proving compelling to customers who are concerned about phenomenal growth rates in market data (100% per year).

Since the technology is changing so fast, we've also come up with a different business model. Our hardware and software are not for sale. Instead, we're leasing servers to customers, complete with hardware upgrades and remote management. This is proving appealing to the big Wall Street firms. Forward-thinking IT staff have figured hardware acceleration is the way to go, and they are applying the technology to a pain point, namely market data. I'm sure the hardware acceleration/parallelism approach will be extended to many more domains, but the standard business model for many firms will likely have to change to effectively use this class of solutions. It's just the beginning of the beginning.
