All eyes are turning toward the Intel Developer Forum in San Francisco this week, so what better time for AMD to introduce a major upgrade to the HyperTransport interconnect?
It is, after all, the one area in which AMD can claim bragging rights, having beaten Intel to market with a technology that in many respects is more elegant and efficient than the eventual QuickPath system.
This week's news is actually from the HyperTransport Technology Consortium, a group that counts Apple, Cisco, Broadcom, Sun and others as members dedicated to furthering HyperTransport development. To that end, the group has advanced the technology in two key areas.
First is the new 3.1 specification, which increases the maximum clock speed to 3.2 GHz from the old 2.6 GHz. When combined with the system's double data rate (DDR) capabilities, this yields a transfer rate of 6.4 gigatransfers per second (GTps) and an aggregate throughput of 51.2 GBps. That should significantly enhance the performance of a wide range of chip-to-chip, board-to-board and chassis-to-chassis communications systems.
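The arithmetic behind those headline numbers can be sketched in a few lines. This assumes a full-width 32-bit HyperTransport link running in both directions; the link-width and direction parameters are illustrative assumptions, not figures from the announcement itself.

```python
# Back-of-envelope check of the HyperTransport 3.1 figures quoted above.
CLOCK_GHZ = 3.2          # HT 3.1 maximum link clock
DDR_FACTOR = 2           # data sampled on both clock edges
LINK_WIDTH_BITS = 32     # full-width HT link (assumed for illustration)
DIRECTIONS = 2           # HT links carry traffic both ways

transfers_gtps = CLOCK_GHZ * DDR_FACTOR               # gigatransfers/second
bytes_per_transfer = LINK_WIDTH_BITS / 8              # per direction
aggregate_gbps = transfers_gtps * bytes_per_transfer * DIRECTIONS

print(f"{transfers_gtps:.1f} GTps")          # 6.4 GTps
print(f"{aggregate_gbps:.1f} GBps aggregate")  # 51.2 GBps aggregate
```

Narrower links scale down proportionally, which is why per-link figures quoted by vendors can vary so widely for the same clock speed.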
The other development is to the HTX expansion connector aimed at tying add-in card subsystems to Extended ATX (EATX) motherboards. The new HTX3 spec triples bandwidth to 5.2 GTps and supports link splitting, which allows a single x16 link to run as two x8 links over a single connector. Look to hardware vendors to use the new connection in new clustering designs and remote access systems.
To counter all this, Intel is slated to pull the covers off the Nehalem architecture this week, soon to be rebranded as the Core i7 line. This device will likely be outfitted with a QuickPath component capable of hitting 6.4 GTps, which works out to roughly 25.6 GBps per link, though it's unclear whether that figure represents sustained or peak throughput.
This isn't just a two-way struggle, however, as both interconnects, and particularly the HTX3 connector, are vying to supplant the PCI bus architecture as the premier means of shuttling data from system to system. The PCI Special Interest Group's PCI Express 2.0 standard provides up to 5 GTps and 16 GBps, and the group is already preparing the PCIe 3.0 standard that should bump that to 8 GTps and 32 GBps. Look for that some time in late 2009.
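The PCIe figures above reflect line-coding overhead as well as raw transfer rate, and the same quick arithmetic shows how they fit together. The 8b/10b (PCIe 2.0) and 128b/130b (PCIe 3.0) encoding schemes are from the published specs; the x16 lane count and bidirectional aggregation are assumptions chosen to match the quoted totals.

```python
# Rough PCIe x16 throughput arithmetic for the figures quoted above.
def pcie_x16_aggregate_gbps(gt_per_sec, payload_bits, coded_bits, lanes=16):
    """Aggregate (both directions) GBps for an x16 link, net of encoding."""
    per_lane_gbps = gt_per_sec * payload_bits / coded_bits / 8  # one direction
    return per_lane_gbps * lanes * 2

# PCIe 2.0: 5 GTps with 8b/10b encoding
print(f"PCIe 2.0 x16: {pcie_x16_aggregate_gbps(5, 8, 10):.0f} GBps")    # 16 GBps
# PCIe 3.0: 8 GTps with 128b/130b encoding
print(f"PCIe 3.0 x16: {pcie_x16_aggregate_gbps(8, 128, 130):.0f} GBps")  # 32 GBps
```

Note that PCIe 3.0 doubles usable bandwidth without doubling the transfer rate: moving from 8b/10b to the far leaner 128b/130b encoding recovers most of the difference.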
From multicores to server clusters, the days of relying on a single device to handle increasing computational requirements are quickly drawing to a close. Going forward, all the action will center on getting multiple devices to work in tandem. And that means the most efficient and effective means of shuttling data across components -- whether on the chip level, board level or higher -- will be the winner.