Everybody knew virtualization was going to drive efficiency in the data center. But no one said it was going to be easy.
If anything, virtualization put to rest the notion that data infrastructure was nothing more than a collection of discrete parts, each performing its designated function before passing data down the line. Instead, virtualization ushered in the rise of the organic data center, where changes in one environment can have profound effects on the operation of others.
To manage that complexity, enterprises are turning to a new generation of predictive analytics solutions that track and monitor the performance of data as a means of assessing the health of the underlying infrastructure. As both virtualization and the cloud push data platforms onto infrastructure under someone else's control, knowing where and how potential trouble is manifesting itself is the surest way to keep it at bay before users realize something is amiss.
In most cases, predictive analytics works hand in hand with advanced automation, the better to both identify and correct problems on the fly. Virtela, for example, has built out its managed IT infrastructure services platform with automation and analytics tools designed for virtual and cloud environments. The package includes the VirtelaPredict platform, which diagnoses potential network and security issues, and the VirtelaDiscover system, which handles network device and topology optimization. The tools are available alongside a range of modular services designed to support the multivendor solutions that typically exist in the cloud.
Zyrion Inc. is taking a similar tack with a new predictive analytics module in its Cloud and IT Monitoring software stack. The module is designed to supplement previously released automation and data capture modules that together form the basis of a full management platform for dynamic data environments. The Traverse predictive analytics system uses automated baselining and behavior learning tools to build performance profiles of underlying infrastructure components, both to gauge the impact of current and future data loads and to alert operators when performance degrades.
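The core idea behind automated baselining is simple enough to sketch: learn what "normal" looks like for a metric over a rolling window, then flag samples that stray too far from it. The sketch below is an illustrative assumption, not Zyrion's actual algorithm; the class name, window size, and three-sigma threshold are all hypothetical choices.

```python
# Hypothetical sketch of automated baselining for a performance metric
# (e.g., controller latency). Names and thresholds are illustrative
# assumptions, not any vendor's actual implementation.
import math
from collections import deque


class Baseline:
    """Learn a rolling baseline for a metric and flag deviations."""

    def __init__(self, window=100, sigma=3.0, warmup=30):
        self.samples = deque(maxlen=window)  # rolling history of samples
        self.sigma = sigma                   # how many std-devs count as anomalous
        self.warmup = warmup                 # minimum history before alerting

    def observe(self, value):
        """Record a sample; return True if it deviates from the baseline."""
        anomalous = False
        if len(self.samples) >= self.warmup:
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) > self.sigma * std:
                anomalous = True
        self.samples.append(value)
        return anomalous


# Usage: feed in steady latency samples, then a spike.
monitor = Baseline(window=50, sigma=3.0)
for i in range(50):
    monitor.observe(10 + (i % 3) * 0.1)  # steady-state readings, no alerts
print(monitor.observe(100))              # spike well outside baseline: True
```

A production system would profile many metrics at once and account for daily or weekly seasonality, but the gauge-then-alert loop is the same.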
More often than not, one of the most overworked components in the data center turns out to be the storage controller, according to Storage Switzerland's George Crump. Things were bad enough when it was tasked with managing LUNs and RAID settings for a single server. Add umpteen virtual ones and you have the makings of a classic bottleneck, and that's before adding in a host of new responsibilities like snapshots, thin provisioning and automated tiering. So far, the latencies in the storage controller have been masked by the slow speed of hard disk drives, but the veil is coming off as SSDs make their mark in the enterprise.
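Some rough arithmetic shows why faster media exposes the controller. The figures below are assumed ballpark latencies for illustration, not measurements from any particular array: when the media itself takes milliseconds per I/O, a few hundred microseconds of controller overhead is noise; when the media answers in microseconds, that same overhead dominates.

```python
# Illustrative arithmetic (assumed ballpark figures, not measurements):
# why SSDs expose storage-controller latency that HDDs used to mask.
controller_us = 500      # assumed controller processing time per I/O
hdd_access_us = 8_000    # typical HDD seek + rotation, ~8 ms
ssd_access_us = 100      # typical SSD access time, ~100 us

hdd_share = controller_us / (controller_us + hdd_access_us)
ssd_share = controller_us / (controller_us + ssd_access_us)

print(f"controller share of HDD I/O: {hdd_share:.0%}")  # roughly 6%
print(f"controller share of SSD I/O: {ssd_share:.0%}")  # roughly 83%
```

The absolute controller overhead is unchanged; only its share of total I/O time grows, which is exactly the kind of shift a baselining tool would surface as degrading performance.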
You may never be able to fully trust the distributed infrastructure that will form increasingly critical support for your data environments, but at least you'll have the means to gauge its performance, and then shift the data burden around should trouble arise.