Cloud-Ready Systems Rely on Fast Interconnects

Arthur Cole

If I'm starting to sound like a broken record, I apologize. But I can't help noticing that as new cloud-optimized data center systems hit the channel, the focus of development is shifting away from capacity and raw processing power and toward interconnect speed.


This shouldn't come as a surprise, though, because once you're in the cloud, capacity and processing become moot points: they are no longer limited by what's physically available, only by your ability to pay for it. But to maintain that level of flexibility, the physical machines still have to communicate with one another, which requires wide data pipes in the interconnect and the larger network environment.
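
To put some rough numbers behind that, here's a quick Python sketch (the working-set size and efficiency factor are illustrative figures of my own, not vendor benchmarks) of how long it takes to ship 100 GiB between machines at different link speeds:

```python
# Toy model: how long it takes to move a working set between two machines
# at different link speeds. All figures are illustrative assumptions,
# not vendor benchmarks.

GIB = 2**30  # bytes in a gibibyte

def transfer_seconds(working_set_gib, link_gbps, efficiency=0.8):
    """Seconds to push a working set over a link.

    efficiency is an assumed fraction of the raw signaling rate left
    for payload after encoding and protocol overhead.
    """
    payload_bits = working_set_gib * GIB * 8
    effective_bps = link_gbps * 1e9 * efficiency
    return payload_bits / effective_bps

for label, gbps in [("1 GbE", 1), ("10 GbE", 10), ("QDR InfiniBand", 40)]:
    print(f"{label:>15}: {transfer_seconds(100, gbps):8.1f} s to move 100 GiB")
```

The point isn't the exact numbers; it's that a fortyfold jump in link speed is the difference between a data move you have to schedule and one you barely notice.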


This week's two major hardware announcements bear this out. The first is a series of Sun Fire and Sun Blade servers that Sun rolled out under its Open Network Systems initiative, outfitted with the new Xeon 5500s. The Sun Fire X4170, for example, features integrated SSDs, kicking throughput through the roof compared to traditional hard drives, while the Sun Blade 6270 and other models sport the company's Virtual NEM (Network Express Module) system that provides QDR (quad data rate) InfiniBand at up to 40 Gbps.
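
A side note on that 40 Gbps figure: it's the signaling rate. A 4X QDR link runs four lanes at 10 Gbps each, and 8b/10b line coding spends 10 line bits for every 8 data bits, so the effective data rate works out to roughly 32 Gbps. Here's the arithmetic:

```python
# Why "40 Gbps" QDR InfiniBand delivers roughly 32 Gbps of data: a 4X
# link is four lanes at 10 Gbps signaling each, and 8b/10b line coding
# spends 10 line bits per 8 data bits.

LANES = 4
SIGNALING_GBPS_PER_LANE = 10   # QDR line rate per lane
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b line coding

raw_gbps = LANES * SIGNALING_GBPS_PER_LANE
data_gbps = raw_gbps * ENCODING_EFFICIENCY
print(f"Raw signaling rate : {raw_gbps} Gbps")
print(f"Effective data rate: {data_gbps:.0f} Gbps (~{data_gbps / 8:.0f} GB/s per direction)")
```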


Sun turned to Mellanox for the InfiniBand support on the Sun Blade models, with Mellanox supplying its ConnectX adapters and InfiniScale IV switches for the platform. Users can dedicate two hot-swappable ConnectX 40G InfiniBand PCI Express ExpressModules to each server module, replaceable without having to open the chassis. Both InfiniBand ports also supply power, supporting active as well as passive cables for longer runs.


The other major announcement this week comes from EMC, which has released a new high-end Symmetrix architecture called the V-Max, designed to federate numerous controllers for server clusters reaching into the thousands. A storage system that scales into the multi-petabyte range certainly is impressive. But it's important to note that to gain that kind of prowess, the company turned to the RapidIO fabric, an interconnect format that came out of the embedded systems community but is now set to encroach on InfiniBand's turf. The basic architecture consists of a V-Max engine built on Xeon-based controllers, I/O ports, a mirrored memory system and the Enginuity OS. The RapidIO interconnect links these modules at speeds ranging from 1 to 60 Gbps.
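
To get a feel for what federation buys you, here's a rough sketch of how aggregate fabric bandwidth grows with the number of directly linked engines. The engine counts and the full-mesh topology are my own illustrative assumptions, not EMC's published design:

```python
# Back-of-the-envelope look at a federated fabric. Engine counts and the
# full-mesh topology are illustrative assumptions, not EMC's design.

def mesh_links(engines):
    """Point-to-point links needed to fully mesh a set of engines."""
    return engines * (engines - 1) // 2

def aggregate_gbps(engines, link_gbps):
    """Total fabric bandwidth if every link runs at link_gbps."""
    return mesh_links(engines) * link_gbps

for engines in (2, 4, 8):
    for link in (10, 60):  # within the 1-60 Gbps range cited above
        print(f"{engines} engines at {link:>2} Gbps/link: "
              f"{mesh_links(engines):2d} links, {aggregate_gbps(engines, link):4d} Gbps aggregate")
```

Even at the low end of the cited range, fabric bandwidth multiplies quickly as engines are added, which is the whole point of federating controllers rather than scaling up a single one.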


As I said earlier, capacity and processing power are still vital aspects of data center hardware. But once you get into the cloud, there's really no limit to how high these can scale -- provided, of course, that the physical machines behind them can act in unison.


