Hiding the I/O Bottleneck

Michael Vizard

The history of computing in the data center could be defined as easily by the number of times we have shifted the I/O bottleneck among processors, memory, networks and storage as by anything else. Right now, with the advent of virtual machines, that bottleneck appears to sit somewhere between memory and our storage systems.

In fact, a recent survey conducted by Xsigo, a maker of 10G Ethernet I/O virtualization technology, found that half of the 85 IT managers surveyed, most of whom have already deployed virtual machine software, had experienced I/O issues in the last 12 months.

Specifically, 44 percent said their servers were having I/O issues, while 41 percent said they had experienced outages related to I/O problems. Xsigo's data suggest that the typical IT organization deploying virtual machine software needs roughly seven to 16 I/O connections per server to support the growing number of virtual servers on each physical machine. That number varies: on older systems, memory bottlenecks tend to limit how many virtual machines can effectively be deployed on a single server, while on newer systems IT organizations can consolidate as many as 20 virtual servers on each physical server, which naturally puts far more pressure on storage and network I/O.

Obviously, Xsigo and its OEM partners want customers to upgrade their storage systems to accommodate the I/O needs of all these virtual servers. But some people are asking whether it's time to rethink the entire system design. As an industry, we move forward in fits and starts, so an advance on the processor side is rarely accompanied by a corresponding advance in storage and networking technologies.

But right now the stars appear to be aligning, with simultaneous advances in processor, storage and networking technology, so that for the first time in memory IT organizations could realistically take on the challenge of deploying truly balanced systems. That may not be economically feasible for everybody. But IT organizations that take the time to start with a fresh piece of paper will have a significant advantage over rivals that increasingly spend their days chasing performance anomalies created by intermittent I/O issues that are never the same twice, yet never really go away.


