Network Management in the Virtual Age


It's long been known that rampant server virtualization places an enormous burden on physical I/O infrastructures. Until recently, though, there was no way to determine exactly how data center networks were being impacted, making even the most detailed upgrade plan little more than a shot in the dark.


But a new generation of I/O monitoring and analysis tools optimized for virtual environments offers a window into the network to better guide the deployment and provisioning of both physical and virtual resources.


VMware users will no doubt become familiar with VKernel's Capacity Analyzer as their virtual infrastructures mature. The system uses statistics from VMware's VirtualCenter to keep an eye on throughput and overall network performance, and it has an advantage over VMware's own Capacity Planner in that it can predict where and when problems will occur.
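The general technique behind that kind of prediction is simple trend analysis: fit a line to recent utilization samples and extrapolate to the point where a capacity threshold is crossed. A minimal sketch, with entirely illustrative names and numbers (this is not VKernel's actual method or API):

```python
# Hypothetical sketch: estimate when a monitored metric will cross a
# capacity threshold by fitting a least-squares trend to recent samples.

def predict_breach(samples, threshold):
    """samples: list of (hour, utilization_pct) pairs, oldest first.
    Returns estimated hours from the last sample until the threshold
    is crossed, or None if the trend is flat or declining."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    # Least-squares slope of utilization over time.
    slope = sum((t - mean_t) * (u - mean_u) for t, u in samples) / \
            sum((t - mean_t) ** 2 for t, _ in samples)
    if slope <= 0:
        return None
    intercept = mean_u - slope * mean_t
    return (threshold - intercept) / slope - samples[-1][0]

# Utilization rising ~2% per hour from 50%; an 80% ceiling is roughly
# ten hours away.
history = [(0, 50), (1, 52), (2, 54), (3, 56), (4, 58), (5, 60)]
print(predict_breach(history, 80))  # 10.0
```

Real tools weight for burstiness and seasonality, but the underlying question is the same: how long until the trend line meets the ceiling?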


Microsoft users will soon have access to Emulex's Performance and Resource Optimization (PRO) package. As a Microsoft launch partner, Emulex has tied PRO to System Center Virtual Machine Manager 2008, essentially extending the console's management reach to Emulex LightPulse adapters. Once a given I/O threshold is reached, PRO instructs SCVMM to reallocate virtual servers or take other appropriate action.
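The pattern here is threshold-triggered remediation: a monitor watches adapter utilization and hands off to the VM manager when a limit is crossed. A minimal sketch of that loop, with made-up class and method names (not the SCVMM or PRO API):

```python
# Hypothetical sketch of threshold-triggered remediation: when an
# adapter's I/O utilization crosses a limit, ask the VM manager to
# migrate guests away from it.

IO_THRESHOLD = 0.85  # illustrative fraction of adapter bandwidth in use

def check_adapters(adapters, manager):
    """adapters: dict of adapter name -> current utilization (0.0-1.0).
    Returns the list of adapters that triggered a migration request."""
    triggered = []
    for name, load in adapters.items():
        if load > IO_THRESHOLD:
            manager.request_migration(name)  # hand off to the VM manager
            triggered.append(name)
    return triggered

class FakeManager:
    """Stand-in for the management layer; records requests it receives."""
    def __init__(self):
        self.requests = []
    def request_migration(self, adapter):
        self.requests.append(adapter)

mgr = FakeManager()
print(check_adapters({"hba0": 0.92, "hba1": 0.40}, mgr))  # ['hba0']
```

The value of the integration is precisely that the monitoring side doesn't act alone: it surfaces the condition, and the VM manager decides how to rebalance.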


Greater visibility into the SAN is also growing in importance as virtualization and other technologies complicate the once-linear relationship between storage and server. Virtual Instruments recently launched the Traffic Analysis Point (TAP) appliance, designed to evaluate the performance of Fibre Channel SANs. The device makes copies of SAN traffic and runs analysis and diagnostics on packet header information, turning the data over to the company's NetWisdom SAN management suite for automated troubleshooting. The company expects to release iSCSI and FCoE versions soon.
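Header-only analysis works because most performance questions, such as who is talking to whom and how much, can be answered without ever touching payloads. A toy sketch of the idea, with illustrative field names rather than the TAP's actual frame format:

```python
# Hypothetical sketch of header-only traffic analysis: tally bytes per
# source/destination pair from copied frame headers, payloads untouched.

from collections import defaultdict

def summarize(headers):
    """headers: iterable of dicts with 'src', 'dst' and 'length' keys.
    Returns total bytes observed per (src, dst) flow."""
    flows = defaultdict(int)
    for h in headers:
        flows[(h["src"], h["dst"])] += h["length"]
    return dict(flows)

captured = [
    {"src": "server-a", "dst": "array-1", "length": 2112},
    {"src": "server-a", "dst": "array-1", "length": 2112},
    {"src": "server-b", "dst": "array-2", "length": 1024},
]
print(summarize(captured))
```

A production appliance would also track timing between request and response headers to derive per-exchange latency, but the copy-and-tally structure is the same.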


Backup systems are also in need of a little TLC from time to time. Asigra Inc. has developed its Storage I/O and Data Validation Tool, which provides agentless information recovery management to determine which storage systems are best suited for I/O-intensive applications. The system simulates traffic loads from potentially thousands of remote workstations to gauge the speed between backup device and target, then pinpoints the cause of any performance degradation, such as overloaded hard drives, improper cache settings or faulty RAID controllers.
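Stripped to its essentials, that kind of benchmarking compares the throughput each target sustains under simulated load against the rate the application actually needs. A minimal sketch, with names and figures that are purely illustrative (not Asigra's tool):

```python
# Hypothetical sketch of backup-target benchmarking: given aggregate
# throughput measured while simulating many workstation streams against
# each target, flag the targets that fall short of the required rate.

def rank_targets(measured, required_mb_s):
    """measured: dict of target name -> MB/s achieved under simulated load.
    Returns the underperforming targets as (name, rate) pairs, worst first."""
    laggards = [(name, rate) for name, rate in measured.items()
                if rate < required_mb_s]
    return sorted(laggards, key=lambda pair: pair[1])

# Simulated results from driving thousands of concurrent streams:
results = {"nas-1": 95.0, "nas-2": 40.0, "vtl-1": 120.0}
print(rank_targets(results, 80.0))  # [('nas-2', 40.0)]
```

Once a laggard is identified, the diagnostic step the article describes, checking drives, cache settings and RAID controllers, narrows down why it fell short.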


The data center being the organic creature that it is, changes in server utilization will ripple through systems all the way across the network. A robust analysis package will at least flag impending problems before they occur.