Thanks largely to the advent of virtualization, it's getting harder to figure out exactly what is going wrong with any given application or system these days.
In an ideal world, of course, all the application workloads running on any given set of virtual servers would be isolated based on their performance requirements. In reality, few IT organizations pay enough attention to virtualization best practices. What they wind up with instead is a mix of competing application workloads all trying to access the same underlying server and storage resources at the same time, and the resulting contention slows down every application workload sharing that infrastructure.
The folks at Hewlett-Packard have seen enough of this behavior to launch HP Critical Advantage, a consulting service dedicated to helping customers running Intel-class servers sort all this out. According to Flynn Maloy, director of worldwide marketing for HP Technology Services, HP is seeing many more mission-critical applications being moved onto virtual servers. But with that shift come concerns about how to effectively deploy those applications in an IT world that is increasingly virtual.
Because of virtualization, Maloy says, incident management is getting harder across the board: there is far less visibility into the underlying IT infrastructure.
HP customers are hardly the only IT organizations experiencing this problem. In fact, one could argue that this issue is at the heart of what many now refer to as "virtualization stall," which is usually characterized by a decision to limit virtualization to non-production server environments.
In reality, virtual machines can stand up to the rigors of most mission-critical applications. They just have to be managed like any other IT resource, with the overall system configured properly from the very beginning.