
VM Density: A Question of Balance


Written by Arthur Cole
May 30, 2017

Server infrastructure is shrinking in the enterprise. This makes a lot of sense because as more workloads are pushed to the cloud, the need for local resources is diminishing.

But it is also a facet of the increased utilization that organizations strive to maintain in the data center, achieved largely by increasing the density of virtual machines per box. This has long been a tricky game, however, since upping the activity on the compute side of the house can degrade performance, even for critical workloads that are not virtualized. So as with any form of abstracted computing, the enterprise needs to tread carefully when consolidating workloads onto limited resources.
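One simple guardrail for that kind of caution is to watch the vCPU-to-physical-core overcommit ratio on each host before packing in more VMs. The following is a minimal Python sketch of the idea; the host names, core counts and the 4:1 ceiling are illustrative assumptions, not figures from this article.

    # Minimal sketch: flag hosts whose vCPU overcommit ratio exceeds a
    # chosen ceiling. The inventory data and the 4:1 ceiling below are
    # hypothetical, for illustration only.

    HOSTS = {
        # host name: (physical cores, total vCPUs assigned to VMs)
        "esx-01": (32, 96),
        "esx-02": (32, 144),
        "esx-03": (48, 120),
    }

    OVERCOMMIT_CEILING = 4.0  # vCPU:pCPU ratio beyond which risk rises

    for host, (pcpus, vcpus) in HOSTS.items():
        ratio = vcpus / pcpus
        status = "OK" if ratio <= OVERCOMMIT_CEILING else "REVIEW"
        print(f"{host}: {vcpus} vCPU / {pcpus} pCPU = {ratio:.1f}:1 [{status}]")

The right ceiling varies with workload mix; the point is simply that density decisions should rest on a measured ratio rather than on whether the host still boots.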

According to the Uptime Institute, server footprints in the data center continue to shrink even as cloud usage remains flat. As it stands, the local data center carries about two-thirds of the enterprise workload, roughly equal to what it was in 2014. Reports from the field indicate that enterprises are deriving greater performance from servers obtained during recent refresh cycles, and virtualization management stacks are becoming more adept at managing multiple workloads on individual machines. When a server hits its maximum load or reaches the end of its useful life, the cloud often provides a more convenient offload than provisioning a new machine.

But as Dade County, Fla., network admin Jesus Vigo explained to BizTech Magazine, IT managers need to take more into account than just the server’s capabilities when supporting multiple VMs. There is a tendency to provision servers for peak loads rather than actual utilization trends, which often leaves CPU, RAM and other resources sitting idle. At the same time, a lack of proactive monitoring limits how dynamically workloads can be rebalanced, which causes problems even on overprovisioned servers, and it prevents resources from re-entering the availability pool after a VM has been decommissioned.
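A lightweight way to surface that gap is to compare peak-based allocations against observed utilization. The Python sketch below is purely illustrative; the sample figures and the 95th-percentile sizing rule are assumptions, not data from Vigo or the article.

    import statistics

    # Hypothetical CPU utilization samples (%) for one VM over time.
    # A real deployment would pull these from a monitoring system.
    samples = [12, 15, 22, 18, 9, 14, 71, 16, 13, 20, 11, 17]

    allocated_pct = 100  # provisioned for the observed peak workload

    # Size to a high percentile of observed demand rather than the raw peak.
    samples_sorted = sorted(samples)
    p95 = samples_sorted[int(0.95 * (len(samples_sorted) - 1))]

    print(f"peak: {max(samples)}%  p95: {p95}%  mean: {statistics.mean(samples):.1f}%")
    print(f"headroom reclaimable vs. peak provisioning: ~{allocated_pct - p95}%")

Sizing to a percentile rather than the peak trades a small risk of transient contention for a large recovery of idle capacity, which is exactly the balance the density question turns on.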

One of the more significant limiting factors to higher VM densities is the need to access storage. The initial phase of virtualization in the data center quickly ran into bottlenecks on the storage area network (SAN), which subsequent architectures rectified using flash storage and on-server memory. The latest move in this trend is Non-Volatile Memory Express (NVMe), which supports mesh-style fabrics across server and storage infrastructure that reduce latency and boost scalability. Companies like Cavium Networks are currently applying NVMe to legacy fabric technologies like Fibre Channel and RDMA to improve resource utilization ratios.

NVMe also holds promise for improving VM density in hyperscale infrastructure environments, where even small gains in efficiency can produce dramatic improvements in performance and power consumption. Pivot3 recently launched the Acuity management stack, which brings a policy-based approach to NVMe flash that triples the VM count per node. Not only does this reduce hardware footprints and lower both opex and capex, but it also provides a six-fold speed boost for databases, enterprise apps, analytics and other workloads.

Of course, the enterprise is on a never-ending quest for greater efficiency and better performance, but not if it leads to unacceptable risk to data and applications. Unfortunately, the line between these two outcomes is rather thin, and constantly moving.

IT executives need to take care not to push VM density too far too fast, or they may have to answer to the front office when services go down.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
