A Server Backplane Worthy of the Cloud?

Arthur Cole

Scalability and flexibility are the twin goals of every data center architectural development these days. Virtualization, the cloud, and even old-fashioned upgrades to physical infrastructure all must meet strict new requirements aimed at handling heavier data loads and boosting productivity while bringing costs down.


One largely overlooked aspect in all this, however, is the server backplane. That seems a little odd, considering it is the ability of servers to exchange data with one another and with disparate elements both near and far that determines how far enterprises can scale their virtual and physical resources.


And it seems the need for high-performance interconnects will only increase as scale-out architectures like Supermicro's new high-density MicroCloud platform come increasingly into vogue. Current MicroCloud configurations provide up to eight hot-pluggable nodes in a 3U chassis, each supporting a single Xeon E3-1200 processor, a PCIe slot and twin GbE ports. Future plans call for expansion up to 32 nodes for highly dense clusters of either low-power or high-performance machines.
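To put those density figures in perspective, here is a quick back-of-the-envelope calculation. Only the per-chassis node counts and the twin GbE ports come from the configurations described above; the 42U rack and the arithmetic itself are illustrative assumptions, not Supermicro specifications.

    # Rough density math for a MicroCloud-style chassis, using the figures
    # quoted above (8 nodes per 3U today, 32 planned, two GbE ports per node).
    # The 42U rack height is an assumed standard, not a vendor specification.

    NODES_PER_3U_CURRENT = 8
    NODES_PER_3U_PLANNED = 32
    GBE_PORTS_PER_NODE = 2
    RACK_UNITS = 42

    def per_rack(nodes_per_3u):
        chassis = RACK_UNITS // 3              # 3U chassis that fit in one rack
        nodes = chassis * nodes_per_3u
        return nodes, nodes * GBE_PORTS_PER_NODE

    for label, n in (("current", NODES_PER_3U_CURRENT), ("planned", NODES_PER_3U_PLANNED)):
        nodes, ports = per_rack(n)
        print(f"{label}: {nodes} nodes per rack, {ports} GbE ports to interconnect")

Even at today's eight nodes per chassis, that works out to more than a hundred nodes and a couple of hundred Gigabit ports per rack for the backplane and top-of-rack fabric to tie together, and roughly four times that if the 32-node plans materialize.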


A high-performance interconnect across a single cluster is one thing, but what about across remote, cloud-based resources? There has long been talk of extending the PCIe protocol beyond the server rack to act as a de facto network fabric, but not much has come of it. One key exception may be Virtensys, which earlier this year released the VMX5000 switch that allows servers and storage devices to share I/O adapters through their PCIe connections, essentially creating a PCIe-based storage network. The company claims some dramatic cost reductions for management, power and cooling, but it is unclear whether the approach could be extended into advanced cloud architectures.
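For readers curious what shared adapters look like from the host side, the short Linux sketch below simply walks sysfs and lists every PCIe function the server can see. It is a generic inventory script and assumes nothing about the VMX5000 in particular; the point is that I/O presented over a PCIe fabric would appear to the operating system much like locally installed hardware.

    # Minimal sketch: list the PCIe functions visible to a Linux host via sysfs.
    # Generic Linux only -- nothing here is specific to Virtensys or the VMX5000.

    import os

    SYSFS_PCI = "/sys/bus/pci/devices"

    def read_attr(dev, attr):
        try:
            with open(os.path.join(SYSFS_PCI, dev, attr)) as f:
                return f.read().strip()
        except OSError:
            return "?"

    for dev in sorted(os.listdir(SYSFS_PCI)):
        vendor = read_attr(dev, "vendor")   # PCI vendor ID, e.g. 0x8086 (Intel)
        device = read_attr(dev, "device")   # PCI device ID
        pclass = read_attr(dev, "class")    # class code, e.g. 0x020000 = Ethernet
        print(f"{dev}  vendor={vendor}  device={device}  class={pclass}")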


Perhaps, then, PCIe isn't the right protocol for the cloud. Critics point out that PCIe does not scale well in large, multiprocessor peer-to-peer environments, something that comes much more naturally to embedded solutions like the RapidIO protocol. What would be needed is a way to seamlessly transfer data from legacy PCIe infrastructure to RapidIO and back again, which is exactly the function of Integrated Device Technology's new Tsi721 bridge device. The company says the unit serves twin goals: it lets enterprise OEMs tap RapidIO as a cloud interconnect, while giving embedded manufacturers a means to leverage high-performance processors from Intel and AMD. The device provides eight DMA channels and four messaging channels capable of line speeds up to 16 Gbps, outclassing even 10 GbE in terms of throughput and latency.
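As a rough sanity check on that 16 Gbps figure, the snippet below assumes (the article does not say) a x4 Serial RapidIO link running at 5 Gbaud per lane with 8b/10b encoding, and compares the resulting data rate against 10 GbE's nominal 10 Gbps.

    # Back-of-envelope comparison of the quoted RapidIO line speed with 10 GbE.
    # The x4 / 5 Gbaud / 8b/10b assumptions are mine, chosen because they
    # reproduce the 16 Gbps figure cited above; they are not from the article.

    LANES = 4
    GBAUD_PER_LANE = 5.0
    ENCODING_EFFICIENCY = 8 / 10      # 8b/10b line code

    srio_gbps = LANES * GBAUD_PER_LANE * ENCODING_EFFICIENCY
    ten_gbe_gbps = 10.0               # nominal data rate, before protocol overhead

    print(f"Serial RapidIO x4 @ 5 Gbaud: {srio_gbps:.0f} Gbps")
    print(f"10 GbE:                      {ten_gbe_gbps:.0f} Gbps")
    print(f"Raw headroom:                {srio_gbps / ten_gbe_gbps:.1f}x")

On paper that is a 1.6x advantage in raw data rate; latency, of course, depends on far more than the line code.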


No matter how things shake out, however, the interconnect is emerging as a key piece of advanced data architectures. Rather than simply plugging a server into a rack and expecting it to talk to all its peers, IT techs should consider the real mechanics of their backplane and work on ways to improve its performance.



Jun 4, 2011 10:00 AM   Stephen Spellicy says:

Arthur, you are correct: PCIe is really considered an "intra" data center technology and is much less about extending for remote connectivity. What we do at Virtensys is reduce the physical-layer connectivity (within a rack or across racks) using PCIe extension and virtualize traditional network/SAN controller peripherals.

For instance, our VIO-4001 I/O Virtualization Appliances enable server administrators to reduce the physical-layer complexity of deploying 10 GbE and 8 Gbps FC connectivity to their standard rack-mount servers. We reduce physical cabling 4:1, make it easy to deploy to servers and simplify ongoing I/O management.

Furthermore, we simplify the logical configuration of NICs/HBAs and RAID controllers, so adding these types of I/O adapters can be done in a few clicks. We also give admins powerful controls to ensure QoS (for instance on 10 GbE), allowing them to guarantee bandwidth resources for their critical application servers and hypervisor hosts.

As I view "cloud enabling" technologies like Virtensys, these tools/platforms speed the time to deploy services, make the job easier and essentially remove the traditional complexity of managing physical data center resources.

If you believe that "cloud" services shouldn't be dependent upon the underlying physical resources that host them (and that the cloud can leverage standard commodity components like x86 servers and various storage platforms), then technology such as Virtensys is key to the architecture: it abstracts the underlying server from the type of I/O resource you provide to it, much as server virtualization removes the dependency on physical server hardware (make/model). Transparency is key, of course.

From a standardization perspective, every enterprise-class server made today has PCIe on the motherboard, and some server OEMs like Super Micro have gone so far as to extend it to the back of the server chassis. For the foreseeable future, network and SAN controller vendors will keep making I/O peripherals that leverage PCIe; Virtensys makes these I/O resources easier to use and reduces the cost and complexity of deploying them in the data center. IMHO it's a win for admins and a bigger win for the bottom line of managing a data center.

Jun 4, 2011 10:01 AM   Tom Cox says:

The secret is out: this well-established, open, standard, international interconnect is used in wireless base stations and in medical and military applications, and it will now be widely deployed in the high-performance servers of the cloud. The competitive pressure for improved server performance demands low latency and true peer-to-peer communications. RapidIO powers the highest-performance electronics in the industry today, and it is a natural extension of PCI and PCIe.
