With increasing pressure on today’s IT professionals to do more with fewer resources while maintaining a high-performing infrastructure, virtualization has become a go-to option for low-cost, power-saving solutions. In fact, CIO Insight recently reported that 70 percent of senior executives said virtualization had a significant impact on efficiency and cost savings for their organization. However, deciding what within your network infrastructure to virtualize, and how to actually do it, can be a challenge.
The freedom provided by virtualization is undeniable, but even in an environment less constrained by hardware, it is still critical to keep an eye on resources. The cost benefits of virtualization are neutralized if loads aren’t correctly balanced across virtual machines and if applications are not optimized to run on them. Additionally, virtualization is not a cure-all: many application workloads are not a good fit for virtualization and should remain on dedicated servers.
In this slideshow, Amanda Karkula, Paessler, has identified five dos and don’ts to consider when virtualizing your systems.
Virtualization Tips and Best Practices
Click through for five dos and don’ts to consider when virtualizing your systems, as identified by Amanda Karkula, Paessler.
Do Plan Your Virtualization Based on the Facts
Before you start your virtualization project, evaluate your various applications’ resource usage. Virtualizing systems without knowing their standard CPU/memory load, disk usage and network usage can lead to poor network performance and wasted resources. You want to ensure that there are not too many virtual machines running on a single host, resulting in poor performance, or too few virtual machines running on a host, which could result in wasted resources spent on unnecessary host servers.
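To make this concrete, here is a minimal capacity-planning sketch. The host capacities, headroom factor and per-application loads are all illustrative assumptions, not figures from the article: it greedily packs measured (CPU percent, memory GB) loads onto hosts, reserving headroom so no host is overcommitted, and returns how many hosts the plan needs.

```python
# Hypothetical capacity-planning sketch: pack measured per-application loads
# onto hosts without exceeding a headroom-adjusted capacity. All capacities
# and thresholds below are illustrative assumptions.
def plan_hosts(app_loads, host_cpu=100.0, host_mem_gb=64.0, headroom=0.8):
    """First-fit packing of (cpu_percent, mem_gb) loads; returns host count."""
    cpu_cap = host_cpu * headroom   # leave 20% CPU headroom per host
    mem_cap = host_mem_gb * headroom
    hosts = []  # each entry: [cpu_used, mem_used]
    for cpu, mem in sorted(app_loads, reverse=True):  # place big loads first
        for host in hosts:
            if host[0] + cpu <= cpu_cap and host[1] + mem <= mem_cap:
                host[0] += cpu
                host[1] += mem
                break
        else:
            hosts.append([cpu, mem])  # open a new host for this workload
    return len(hosts)
```

Running the packing against real measurements (rather than guesses) is what guards against both the overloaded-host and the wasted-host scenarios described above.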
Don’t Think of Virtualization as One-Size-Fits-All
Sure, virtualization can be a cost-saver, but not all of your applications are good candidates for a virtual environment. For example, applications with heavy compute or data read/write loads are not good candidates for hypervisor virtualization. To identify which of your applications should remain on dedicated servers, and which can be moved to virtualized servers, you need to look at the volume and character of the traffic to and from each application.
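A simple triage along these lines can be sketched as follows. The threshold values are invented for illustration; in practice you would derive them from your own measurements of sustained CPU load and disk throughput.

```python
# Illustrative workload triage: flag applications whose sustained CPU or
# disk I/O load (thresholds are assumptions, not vendor guidance) suggests
# they should stay on dedicated hardware.
def virtualization_candidates(workloads, cpu_limit=70.0, io_limit_mbps=200.0):
    """workloads: iterable of (name, sustained_cpu_percent, disk_io_mbps)."""
    virtualize, keep_dedicated = [], []
    for name, cpu_percent, io_mbps in workloads:
        if cpu_percent > cpu_limit or io_mbps > io_limit_mbps:
            keep_dedicated.append(name)   # heavy compute or I/O: poor fit
        else:
            virtualize.append(name)       # light enough to consolidate
    return virtualize, keep_dedicated
```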
Do Know Your Network’s Status
A highly virtualized environment lives or dies on the efficiency and dependability of its data network. Issues with your virtual machines (VMs) can originate from a host hardware failure or an issue with the operating systems. Set up sensors to monitor your VM host servers and operating systems to alert you when the status of either is not “normal,” so that you can minimize the impact of issues – like the failure of a Windows server – before they become critical to network and application availability.
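The alerting logic behind such sensors can be reduced to a very small check. The device names and status strings below are invented examples; a real monitoring tool would populate them from its own polling.

```python
# Minimal status-check sketch (device names and statuses are invented):
# return every monitored VM host or guest OS whose reported status is
# anything other than "normal", so it can be alerted on.
def find_alerts(statuses):
    """statuses: mapping of device name -> reported status string."""
    return {name: state for name, state in statuses.items()
            if state != "normal"}
```

The point is that the check runs continuously against both the host hardware and the guest operating systems, so a failing Windows server surfaces before it takes applications down with it.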
Don’t Fail to Establish a Baseline for Traffic Patterns
Virtualized environments cannot tolerate network overloads or switch failures. Ongoing, comprehensive traffic analysis will provide long-term usage projections and help you anticipate traffic increases and the need to enhance resources before they impact service levels. Establishing baselines is critical to analyzing the health and success of any virtualization project.
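One common way to establish such a baseline is to compute the mean and spread of historical traffic samples and flag readings that deviate too far from them. The sample figures and the two-standard-deviation threshold below are illustrative assumptions, not a recommendation from the article.

```python
# Simple traffic-baseline sketch (sample figures are invented): build a
# baseline from historical bandwidth samples, then flag readings that
# deviate more than k standard deviations from it.
import statistics

def build_baseline(samples):
    """Return (mean, standard deviation) of historical traffic samples."""
    return statistics.mean(samples), statistics.stdev(samples)

def exceeds_baseline(reading, mean, stdev, k=2.0):
    """True if a new reading deviates more than k stdevs from the baseline."""
    return abs(reading - mean) > k * stdev
```

Re-running the baseline over a long window is what turns raw traffic counters into the long-term usage projections mentioned above.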
Do Include Your VMs in Your Unified Monitoring Practice
Once you’ve moved your applications to virtual servers, you need to expand your overall network monitoring beyond physical devices to the virtual machines and the services and applications that run on them. Proactively monitor the performance of your virtualized infrastructure as part of a unified view of your workloads across both your physical and virtual tiers, including applications, storage, operating systems and the network.
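Structurally, a unified view just means merging metrics from both tiers under one key space instead of keeping separate silos. The device names and metrics below are hypothetical placeholders for whatever your monitoring tool collects.

```python
# Hypothetical unified-view sketch: merge metrics from the physical and
# virtual tiers into one mapping keyed by (tier, device), so one report
# covers hosts, VMs and the services running on them.
def unified_view(physical, virtual):
    """physical/virtual: mappings of device name -> metrics dict."""
    merged = {}
    for device, metrics in physical.items():
        merged[("physical", device)] = metrics
    for device, metrics in virtual.items():
        merged[("virtual", device)] = metrics
    return merged
```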