How Many VMs Is Too Many?

Arthur Cole

Server consolidation is usually job one after a virtualization layer has been installed. Most vendors will spend a great deal of time explaining the management and flexibility benefits of their platform, but let's face it, cutting down the hardware budget is likely to remain the primary driver for some time.


But now that more and more enterprises are getting virtual experience under their belts, some are starting to question how much consolidation is enough.


While there is no rule of thumb as to how many virtual machines (VMs) can be housed on a typical server, it doesn't appear as if many of you feel you are pushing the envelope just yet. According to a recent survey from Forrester, about two-thirds of IT professionals say they want to increase the number of VMs per server, despite the fear that those VMs might not be adequately isolated from one another, causing application issues on one to affect its neighbors.


Cross-contamination of VMs might be the least of your worries if densities get too high, however. The primary concern is throughput. The virtual I/O industry has made quite a mark for itself with the ability to subdivide physical I/O resources into virtual ones, eliminating the bottleneck that occurs when too many VMs vie for a single HBA. And high-speed networking technologies like 10 Gigabit Ethernet are helping virtualization platforms live up to their promises of 20 or more VMs per server.


The question, though, is whether that number is too high.


Greater efficiency and consolidation are fine under normal operating conditions, but as Burton Group analyst Chris Wolf points out to Virtualization Review, what happens when the physical server goes down? Now, instead of two or three applications out of commission, you have 20. Not only are those users out of luck, but your overall VM availability is diminished as well.


There is also a misconception that consolidation is a quick and easy process. As Vizioncore's George Pradel explains to Computerworld, shunting workloads from physical to virtual environments requires a lot more coordination than meets the eye. And more often than not, there is a substantial amount of downtime involved.


It's quite possible that, ultimately, managing virtual environments will be more of an art than a science. That's good for IT because it will place a high value on experience and judgment, rather than technical skills on one platform or another.


I'd be interested to hear from readers, though. How many VMs are you currently running per server, and how many do you think is appropriate?



Comments
Aug 3, 2009 11:53 AM Jeff Malczewski says:

Currently I'm running 12 VMs per box.  The last job I was at ran ~18 per box.

Aug 3, 2009 11:58 AM Arthur Cole says in response to Jeff Malczewski:

Did you run into any performance issues? How did you handle I/O?

Aug 3, 2009 12:08 PM Jeff Malczewski says in response to Arthur Cole:

No, no performance issues. The boxes still had room for growth. They were IBM 3850s with four dual-core procs each, 32GB RAM, 10 NICs (4 + 4 + 2), and four 2Gb FC SAN connections (2 + 2). The SAN volumes were RAID 10 arrays, and the servers booted from local U320 SCSI disk.

Aug 4, 2009 2:09 AM Arthur Cole says in response to Jeff Malczewski:

Interesting. Given that, what would you say is the upper limit? I hear some vendors talking about 50 or more VMs per box.

Aug 4, 2009 2:15 AM Jeff Malczewski says in response to Arthur Cole:

It depends on your hardware: how many processors you have per box, how much memory you have per box, what your shared storage is, and what the VMs are doing. If you have CPU-intensive applications, your bottleneck will be CPU time. If you have I/O-intensive applications, your bottleneck will be there. IMHO, there is no "hard number" for that upper limit; it will be different for each and every environment you step into. The key is having the knowledge and experience to recognize possible performance issues before they impact production, and having the resources that enable you to take proactive measures to prevent them.
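To make that "it depends" arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. All of the host and per-VM figures are hypothetical placeholders, not numbers from any commenter's environment; the point is simply that the VM ceiling is set by whichever resource runs out first.

```python
# Back-of-the-envelope VM density estimate (all figures hypothetical).
# The ceiling is set by whichever resource is exhausted first.

host = {"cpu_cores": 8, "ram_gb": 32, "iops": 6000}      # assumed host capacity
per_vm = {"cpu_cores": 0.5, "ram_gb": 2.0, "iops": 300}  # assumed average per-VM demand
headroom = 0.80  # keep ~20% in reserve for spikes and hypervisor overhead

limits = {res: int(host[res] * headroom / per_vm[res]) for res in host}
bottleneck = min(limits, key=limits.get)

print(limits)  # {'cpu_cores': 12, 'ram_gb': 12, 'iops': 16}
print(f"max VMs: {limits[bottleneck]} (limited by {bottleneck})")
```

Swap in real measurements from your own monitoring and the answer changes completely, which is exactly the commenter's point.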

Aug 6, 2009 11:20 AM Carter says in response to Jeff Malczewski:

Agreed. We have a Dell blade center with 6 blades, each running 20-30 VMs. We also have a Dell R900 (128GB RAM, 4 six-core procs) that currently has 64 VMs running smoothly.

Aug 7, 2009 11:39 AM DJ-MAN says in response to Carter:

We are currently running an average of 30 VMs per host. We are adding memory to get to 48GB, as this appears to be the sweet spot for our I/O loads, and we anticipate up to 40-50 VMs per host. As for what the right number of VMs is, that is a moving target. My environment does not have a high I/O load; therefore, we can have more VMs per host. Also, we are running an N+1 model for our environment, which allows for a full host failure. Additionally, our storage infrastructure is a bit of a challenge for I/O balance. We target 75-85% CPU usage on each box, with peaks of no more than 90%. We added memory to hit this sweet spot since we were not CPU constrained. As you can see, this is an art, not a science, as was stated previously. Our virtual servers are just like our physical servers: they are like balloons. Crimp one end and the bulge happens on the other.
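The N+1 point is worth a quick worked example. If the cluster has to absorb a full host failure, each host has to run below its standalone target so the survivors can pick up the slack. A minimal sketch, again with made-up numbers rather than DJ-MAN's actual figures:

```python
# N+1 headroom sketch (hypothetical cluster, not DJ-MAN's actual setup).
# Each host runs below its standalone ceiling so that the remaining
# hosts can absorb one full host failure without blowing past the CPU target.

hosts = 4                # hosts in the cluster
vms_at_target = 40       # VMs one host could run alone at the 75-85% CPU target
usable = (hosts - 1) / hosts                 # fraction of capacity safe to use: 0.75

vms_per_host = int(vms_at_target * usable)   # 30 in normal operation
cluster_total = vms_per_host * hosts         # 120 VMs across the cluster

# After one host fails, its 30 VMs land on the 3 survivors:
after_failure = cluster_total / (hosts - 1)  # 40 per host, i.e. right at the target
print(vms_per_host, cluster_total, after_failure)
```

The same arithmetic cuts the other way: with only two hosts, N+1 means leaving half of each box idle, which is part of why larger clusters consolidate more comfortably.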

Aug 7, 2009 11:44 AM DJ-MAN says in response to DJ-MAN:

We are running DL380 G5s at 3.0GHz, with 48GB RAM per host, over NFS to NetApp.

Aug 19, 2009 2:38 AM fish6288 says in response to DF:

That server itself should be able to run a good many VMs on it, based on the processor specs and the memory specs. The biggest thing to keep an eye on, which a lot of people forget, is the I/O your backend will handle. Many servers nowadays can handle the CPU and memory load of a ton of VMs running on them... but can your backend data store handle the high I/O of all those VMs and keep things running smoothly?

Each environment will be different, as each environment has different servers with different duties. These duties can range from very low-usage VMs to very high-usage VMs. Your 3850 could probably handle 50 low-end VMs with no problem, but if you throw in some high-end servers then that number will go down. There is no magic number; it will differ from environment to environment. In my opinion, the most important thing is not the power of your front-end host servers but the power and speed of the backend disk having to handle the I/O. It is very easy to purchase very powerful front-end host boxes and end up with too much power for your disk arrays to handle for smooth operations.

Also, as mentioned in this article, if you load 25 VMs on a single host machine and do not have another host or two set up for high availability, then when that ESX host goes down you have just lost 25 servers at one time. vCenter is a great investment for high availability if your budget will allow. It's probably better to buy 2 good front-end servers to split the load, so you still have one to vMotion machines to in case the other host fails. Two servers are better than one... I learned that the hard way a long time ago.

What kind of backend storage will you be using? What kind of RAID layout, and how many disks are in that RAID layout? SATA or SCSI disks?

Aug 19, 2009 8:47 AM DF says in response to DJ-MAN:

All

Just a comment: I am purchasing an IBM 3850 M2, quad proc, 6-core, with 128GB RAM. I am being told to use 25 VMs per 3850, which I believe is seriously underrated. This is my first time using VMs, but would you have a "guesstimate" on what would be comfortable?

Thanks - DF

Aug 20, 2009 3:37 AM DF says in response to fish6288:

Hi Fish6288

Thanks for the response. I am getting 5 of the mentioned 3850s (3 at the main site and 2 at disaster recovery... these will be used for more than just DR) and 2 x DS5100 SANs with 15 x 750GB FC drives attached to each SAN. So I am hoping I've spec'd a decent balance and the SAN will cope with the I/O.

I will definitely investigate vCenter; I want to ensure I can comfortably move VMs between hosts when patching, etc.

Aug 20, 2009 7:53 AM fish6288 says in response to DF:

That sounds good. Do check out vCenter. It is a great tool. You will need it for vMotion and Storage vMotion. I have two data centers that I can move VMs back and forth to, using this for host updates and host failures. vCenter is a must-have if you want high availability for your VM environment... that is, if you are using VMware vSphere.

Aug 20, 2009 9:12 AM Jeff Malczewski says in response to DF:

Ahh, DF, you might want to rethink that SAN, for several reasons.

The 5100 is overkill for sharing between a few ESX hosts.  Something from the 4xxx line or even a 3300 would likely be fine.

The other reason to rethink it is that I highly doubt you are getting FC drives. I just looked around real fast, and I can't find an IBM FC drive that size; however, I CAN find a 750GB SATA drive, and the 5100 will accept it. Huge difference in performance, and you will not be at all happy with the difference, trust me. My last job had a pair of DS4300s with FC drives, and this place is LeftHand iSCSI with SATA drives, and it's night and day. SATA = BAD.

Aug 21, 2009 1:09 AM SeanK says:

The question is more detailed than "how many," and there is not one standard answer. The answer depends on application performance and workload throughout the infrastructure -- virtual server, physical server, SAN and storage. Maybe it is only 5 to 1 with heavy workloads, or up to 75 to 1 for file and print servers. Users are really just guessing unless they can see the actual performance of the infrastructure resources. We use Akorri BalancePoint to truly optimize our virtual as well as our physical infrastructure. Without it, we would never maximize the ROI on our virtualization investment.

Aug 21, 2009 3:28 AM DF says in response to Jeff Malczewski:

Thanks Fish and User

And User, you are spot on (I was given misleading info!!). It turns out we are getting 450GB FC drives for the SAN and then 1TB SATAs for a different project, so the SAN will be used for more than just VMs. The reason for the 5100 is that we got it for the same price as the 4800s we just purchased :-/

It's hopefully coming in the week after next, so it should be an interesting couple of weeks. Thanks for your feedback.
