
    VM Density: A Question of Balance

    Server infrastructure is shrinking in the enterprise. This makes sense: as more workloads are pushed to the cloud, the need for local resources diminishes.

    But it is also a facet of the increased utilization that organizations strive to maintain in the data center, achieved largely by increasing the density of virtual machines per box. This has long been a rather tricky game, however, since upping the activity on the compute side of the house can sometimes result in poorer performance, even for critical workloads that are not virtualized. So as with any form of abstracted computing, the enterprise needs to tread carefully when consolidating workloads onto limited resources.

    According to the Uptime Institute, server footprints in the data center continue to shrink even as cloud usage remains flat. As it stands, the local data center carries about two-thirds of the enterprise workload, roughly the same share as in 2014. Reports from the field indicate that enterprises are getting more performance out of servers acquired in recent refresh cycles, and that virtualization management stacks are becoming more adept at managing multiple workloads on a single machine. When a server does hit its maximum load or reach the end of its useful life, the cloud often provides a more convenient offload than provisioning a new machine.

    But as Dade County, Fla., network admin Jesus Vigo explained to BizTech Magazine, IT managers need to take more into account than the server's raw capabilities when supporting multiple VMs. There is a tendency to provision servers for peak loads rather than actual utilization trends, which often leaves CPU, RAM and other resources sitting idle. At the same time, a lack of proactive monitoring limits how flexibly workloads can be balanced, which can cause problems even on overprovisioned servers and can keep resources from re-entering the availability pool after a VM has been decommissioned.
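
    As a rough illustration of the peak-versus-actual gap Vigo describes, the Python sketch below flags hosts whose observed peak utilization sits far below what was provisioned. The host names, capacity figures and 40 percent threshold are all illustrative assumptions, not data from any real environment; in practice these numbers would come from a monitoring system rather than a hard-coded table.

        # Minimal sketch: flag hosts provisioned for peak loads that actual
        # utilization never approaches. All figures are illustrative; real
        # values would be pulled from a monitoring system.

        hosts = {
            # name: (vCPUs provisioned, peak vCPUs used, RAM GB provisioned, peak RAM GB used)
            "esx-01": (64, 18, 512, 140),
            "esx-02": (64, 52, 512, 430),
            "esx-03": (48, 9, 384, 60),
        }

        OVERPROVISION_THRESHOLD = 0.40  # flag hosts whose peak use stays under 40%

        for name, (cpu_cap, cpu_peak, ram_cap, ram_peak) in hosts.items():
            cpu_util = cpu_peak / cpu_cap
            ram_util = ram_peak / ram_cap
            if cpu_util < OVERPROVISION_THRESHOLD and ram_util < OVERPROVISION_THRESHOLD:
                print(f"{name}: peak CPU {cpu_util:.0%}, peak RAM {ram_util:.0%} "
                      "-- candidate for higher VM density")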

    One of the more significant limiting factors on higher VM densities is the need to access storage. The initial phase of virtualization in the data center quickly ran into bottlenecks on the storage area network (SAN), which subsequent architectures rectified using flash storage and on-server memory. The latest move in this trend is Non-Volatile Memory Express (NVMe), which supports mesh-style fabrics across server and storage infrastructure that reduce latency and boost scalability. Companies like Cavium Networks are currently applying NVMe to legacy fabric technologies like Fibre Channel and RDMA to improve resource utilization ratios.
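
    To see why the storage path gates VM density, consider a back-of-the-envelope estimate of how many VMs a node's storage back end can feed. The IOPS figures below are assumptions chosen only to show the shape of the math, not benchmarks of any product; the point is simply that a faster back end raises the density ceiling.

        # Back-of-the-envelope sketch of how storage throughput caps VM density.
        # All IOPS figures are illustrative assumptions, not measured results.

        BACKENDS = {
            # sustainable IOPS per node, assumed for illustration
            "Disk-based SAN":    200_000,
            "All-flash SAN":     500_000,
            "NVMe over fabrics": 1_500_000,
        }

        IOPS_PER_VM = 5_000  # assumed steady-state storage demand per VM

        for backend, node_iops in BACKENDS.items():
            max_vms = node_iops // IOPS_PER_VM
            print(f"{backend}: ~{max_vms} VMs per node before the storage path saturates")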

    NVMe also holds promise for improving VM density in hyperscale infrastructure environments, where even small gains in efficiency can produce dramatic improvements in performance and power consumption. Pivot3 recently launched its Acuity management stack, which brings a policy-based approach to NVMe flash that triples the VM count per node. Not only does this reduce hardware footprints and lower both opex and capex, it also provides a six-fold speed boost for databases, enterprise apps, analytics and other workloads.

    Of course, the enterprise is on a never-ending quest for greater efficiency and better performance, but not if it leads to unacceptable risk to data and applications. Unfortunately, the line between these two outcomes is rather thin, and constantly moving.

    IT executives need to take care not to push VM density too far too fast, or they may have to answer to the front office when services go down.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
