Good Virtualization Citizenship

Michael Vizard

Not too long ago, during the early days of client/server computing when application performance was so dependent on the network, we used to talk about whether an application was a good network citizen. If too much code ran on one side of the network or the other, chances were that the resulting imbalance would put too much pressure on the network. Applications that didn't take that into account were referred to as bad network citizens.

With virtualization today, we hear a lot about poor application performance. What nobody is really 100 percent sure about yet is whether this is because of the I/O overhead generated by the virtual machines, or whether the applications themselves are just bad virtualization citizens.

Barry Zane, CTO of ParAccel, a provider of a massively parallel processing database, believes that poor virtual machine performance has a lot more to do with the way the application is built than with the virtual machines themselves. To back up that perspective, ParAccel today announced that it has achieved the fastest 1 TB TPC-H benchmark result ever, running its massively parallel database on top of VMware vSphere 4. That new benchmark record, the company said, delivers database load times 8.7 times faster than the previous record holder while using 37 percent fewer servers. Specifically, an 80-node cluster of ParAccel's PADB version 2.5, consolidated onto 40 DL380 servers from Hewlett-Packard running VMware vSphere 4, achieved 1,316,882 Composite Queries per Hour (QphH) at 1,000 GB, at a price/performance of US $0.70/QphH.
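For readers who want to sanity-check the economics, TPC-style price/performance is the total system price divided by the performance figure, so the quoted $0.70/QphH implies a total system cost somewhere around $900,000. A quick back-of-the-envelope sketch in Python, using only the numbers quoted above:

qphh = 1_316_882            # Composite Queries per Hour at 1,000 GB, as quoted
price_per_qphh = 0.70       # US dollars per QphH, as quoted
implied_system_price = qphh * price_per_qphh
print(f"Implied total system price: ${implied_system_price:,.0f}")   # roughly $921,800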

Now the value of a benchmark is in the eye of the beholder. But Zane notes that his company's database runs faster on top of virtual machine software than it does on some physical servers running some forms of Linux. He attributes that to the fact that VMware, for example, does a much better job of managing I/O throughput than some of the work that has gone into Linux itself.

Whatever the reason, Zane says that when it comes to virtualization, the real name of the game is to avoid swapping memory. Virtual machines are obviously hungry for memory, but if properly managed they don't present a performance issue. The key phrase there is "properly managed," because what virtual machines will do is expose a poorly designed application pretty quickly.
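To make that advice concrete, here is a minimal monitoring sketch, not ParAccel's or VMware's tooling, just an illustration that assumes the Python psutil package, which watches swap-in and swap-out rates inside a guest. Sustained non-zero rates are exactly the symptom Zane is warning about.

import time
import psutil   # assumed available; exposes cumulative swap-in/out byte counters on Linux

def watch_swap(interval=5):
    # Sample swap counters every few seconds and report the rate of change.
    prev = psutil.swap_memory()
    while True:
        time.sleep(interval)
        cur = psutil.swap_memory()
        sin_rate = (cur.sin - prev.sin) / interval     # bytes swapped in per second
        sout_rate = (cur.sout - prev.sout) / interval  # bytes swapped out per second
        print(f"swap-in {sin_rate / 1024:.1f} KB/s, swap-out {sout_rate / 1024:.1f} KB/s")
        prev = cur

if __name__ == "__main__":
    watch_swap()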



Oct 29, 2010 11:10 PM Luke Vorster says:
In my humble opinion, we still haven't overcome the software crisis from the tail end of the previous millennium... It doesn't matter what machine you use - it's how you use it. There are many types of machine, and many types of language (and programming paradigms), because there is simply no silver bullet.

The complexity of internet apps, HPC apps, and 'infinitely scalable' cloud-based apps is the real problem. We have the vehicle, but no idea (really) how it differs from its predecessors. We don't even know if it is, in fact, any different! I am a fan of the latter notion: as far as I can tell, interconnects and data busses (I/O) are always behind processing power, and the degradation in performance from mapping an app to the wrong machine can be drastic. I am not so sure that sustaining 'continuous growth' is something we can only obtain via clouds and grids within our organisations. I believe it is the interoperability of all computation platforms, and case-by-case selection as the environment changes.

"Applications will most likely run faster on modern virtual machine infrastructure than they do natively on legacy operating systems"... Maybe, but we are surely not talking about systems that have HPC requirements. Virtual machines offer a good general solution for most codes, so a large set of apps will run faster even if they have been badly thought out. There is so much work still to be done before virtual technology can 'proxy' the characteristics and techniques of all high-performance technology... I mean, take the Connection Machine, or the GPGPU, or the Cell processor, or the FPGA, for example. I will change my mind iff a virtual machine system, or cloud platform, actually makes it to the Top500 in a competitive sense.

I foresee a trend of accelerating virtual technologies with 'peripheral' HPC systems... i.e. the cloud is a machine, so we should treat it like one - optimise where we have to and can afford to. There is no way anyone can afford to optimise every aspect of a computation machine for all cases. The result is much the same as the time-sharing systems from decades ago - they end up not being so universally sharable by all. So the backlash is to own our own resources... and round and round we go as per usual.

Mission critical is not a generally solvable space - the bar rises as we can do more - and it will always be out of scope of general solutions, virtualisation included.
