The Server Interconnect Moves to the Fore

Arthur Cole

Scalability and flexibility are the twin goals of virtually every data center architectural effort these days. Virtualization, the cloud, and even old-fashioned upgrades to physical infrastructure must all meet strict new requirements: handle growing data loads, boost productivity, and bring costs down.


One largely overlooked aspect in all of this, however, is the server backplane. That seems a little odd, considering that it is the ability of servers to exchange data with each other, and with disparate elements both near and far, that determines how far enterprises can scale their virtual and physical resources.


And the need for high-performance interconnects will only increase as scale-out architectures like Supermicro's new high-density MicroCloud platform come into vogue. Current MicroCloud configurations provide up to eight hot-pluggable nodes in a 3U chassis, each supporting a single Xeon E3-1200 processor, a PCIe slot, and twin GbE ports. Future plans call for expansion to as many as 32 nodes, enabling highly dense clusters of either low-power or high-performance machines.
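To put that density in perspective, here is a quick back-of-the-envelope calculation. It is a sketch under my own assumption of a standard rack with 42 usable units, not anything Supermicro has published:

```python
# Back-of-the-envelope rack density for a 3U multi-node chassis.
# Assumption (mine, not Supermicro's): 42 usable rack units per rack.
RACK_UNITS = 42
CHASSIS_UNITS = 3  # MicroCloud chassis height, per the article

for nodes_per_chassis in (8, 32):  # current and planned node counts
    chassis_per_rack = RACK_UNITS // CHASSIS_UNITS
    nodes_per_rack = chassis_per_rack * nodes_per_chassis
    print(f"{nodes_per_chassis:>2} nodes/chassis -> {nodes_per_rack} nodes per rack")

# Output:  8 nodes/chassis -> 112 nodes per rack
#         32 nodes/chassis -> 448 nodes per rack
```

At hundreds of nodes per rack, how those nodes talk to one another matters at least as much as how fast each one computes.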


A high-performance interconnect across a single cluster is one thing, but what about across remote, cloud-based resources? There has long been talk of extending the PCIe protocol beyond the server rack to act as a de facto network fabric, but not much has come of it. One key exception may be Virtensys, which earlier this year released the VMX5000, a switch that allows servers and storage devices to share I/O adapters through their PCIe connections, essentially creating a PCIe-based storage network. The company claims dramatic cost reductions in areas like management, power, and cooling, but it is unclear whether the approach could be extended into advanced cloud architectures.
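Part of the appeal of PCIe-based I/O sharing is transparency: a virtualized adapter presents itself to each server as an ordinary local PCIe function. The sketch below, which assumes a Linux host and makes no Virtensys-specific assumptions, simply walks sysfs to enumerate PCIe devices; an adapter shared through a switch like the VMX5000 would appear in this same list, right alongside genuinely local hardware:

```python
# List PCIe devices on a Linux host by walking sysfs.
# A PCIe-virtualized I/O adapter shows up here just like local hardware,
# which is what lets multiple servers share one adapter transparently.
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

for dev in sorted(PCI_DEVICES.iterdir()):
    vendor = (dev / "vendor").read_text().strip()  # e.g. "0x8086" for Intel
    device = (dev / "device").read_text().strip()
    pclass = (dev / "class").read_text().strip()   # 24-bit PCI class code
    print(f"{dev.name}  vendor={vendor}  device={device}  class={pclass}")
```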


Perhaps, then, PCIe isn't the right protocol for the cloud. Critics point out that PCIe does not scale well in large, multiprocessor, peer-to-peer environments, something that comes much more naturally to embedded interconnects like the RapidIO protocol. What is needed is a means of seamlessly moving data from legacy PCIe infrastructure to RapidIO and back again, which is exactly the function of Integrated Device Technology's new Tsi721 bridge. The company says the device serves twin goals: it lets enterprise OEMs tap RapidIO as a cloud interconnect, while giving embedded manufacturers a means of leveraging high-performance processors from Intel and AMD. The Tsi721 provides eight DMA channels and four messaging channels at line speeds of up to 16 Gbps, outclassing even 10 GbE in both throughput and latency.
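It is worth unpacking where that 16 Gbps figure comes from and how it stacks up against 10 GbE. The arithmetic below is a sketch under standard encoding assumptions, a x4 RapidIO Gen2 link at 5 Gbaud per lane with 8b/10b line coding versus 10GBASE-R Ethernet with 64b/66b coding; real application throughput would also depend on packet overhead, which is not modeled here:

```python
# Effective data rate = raw signaling rate x line-code efficiency.
# Assumptions: x4 RapidIO Gen2 at 5 Gbaud/lane (8b/10b encoding) vs.
# 10GBASE-R Ethernet at 10.3125 Gbaud (64b/66b encoding).

def data_rate_gbps(baud_gbaud: float, data_bits: int, coded_bits: int) -> float:
    """Scale the raw baud rate by the line code's data/coded bit ratio."""
    return baud_gbaud * data_bits / coded_bits

rapidio = data_rate_gbps(4 * 5.0, 8, 10)    # -> 16.0 Gbps
ten_gbe = data_rate_gbps(10.3125, 64, 66)   # -> 10.0 Gbps

print(f"RapidIO Gen2 x4: {rapidio:.1f} Gbps")
print(f"10 GbE:          {ten_gbe:.1f} Gbps")
```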


No matter how things shake out, however, the interconnect is emerging as a key piece of advanced data architectures. Rather than simply plugging a server into a rack and expecting it to talk to all of its peers, IT staff should examine the real mechanics of the backplane and work on ways to improve its performance.


