The cloud is cheaper and more flexible than traditional data center infrastructure, but does it provide better performance?
That question remains murky: application performance and the overall user experience can be measured in myriad ways, and it is very difficult for the enterprise to truly gauge what is going on inside third-party clouds.
This is more than just an academic exercise at this point, considering the increasing reliance being placed on the cloud for higher-level applications – the kind that drive worker, and by extension business, productivity. A recent survey by ExtraHop Networks indicates that despite high awareness among enterprise managers of the need for extensive visibility into the cloud, nearly 70 percent said they not only lacked that visibility at the moment but didn’t know how it could be achieved. At best, most organizations rely on service providers to monitor performance, even though providers typically stop at resource utilization rather than tracking application flows and transaction rates.
This could prove to be a serious flaw in many cloud architectures because if the cloud has anything in abundance, it’s resources – so the fact that utilization may be low does not guarantee that applications are performing at their peak. According to Compuware, nearly two-thirds of CIOs highlight poor end-user experience and the potential for hidden costs among their top cloud management concerns. Both of these issues point to one of the most serious disconnects between cloud myth and cloud reality: that the cloud is merely an extension of the data center and therefore should be managed and monitored in the same way.
In the data center, application performance is almost entirely dependent on infrastructure performance: keep an eye on the server, storage and networking architectures, and the applications should fall into place. In the cloud, where much of the infrastructure is beyond the enterprise’s control, management and visibility must shift entirely to the application. With the enterprise as the IT customer rather than the IT provider, the only thing that matters is results.
Already, the entrepreneurial spirit is starting to catch on to this new reality. Start-ups like AppEnsure are staking their claims in cloud management circles by offering broad visibility into the data environment wherever it roams. AppEnsure keeps track of key metrics like response time and throughput performance across physical, virtual, public and private infrastructure, with automated baselining and root cause analysis features supplementing real-time monitoring and management tools to provide quick resolution of performance issues for entire fleets of production applications.
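To make the idea of automated baselining concrete, here is a minimal sketch of the general technique – learning a rolling baseline of response times and flagging samples that deviate sharply from it. This is an illustration of the concept only, not AppEnsure’s actual implementation; the class name and thresholds are assumptions.

```python
from collections import deque
from statistics import mean, stdev

class ResponseTimeBaseline:
    """Rolling baseline of response times; flags samples beyond k standard deviations.
    Hypothetical illustration of automated baselining, not any vendor's implementation."""

    def __init__(self, window=100, k=3.0):
        self.samples = deque(maxlen=window)  # sliding window of recent samples
        self.k = k                           # deviation threshold in standard deviations

    def observe(self, response_ms):
        """Record a sample; return True if it deviates from the learned baseline."""
        if len(self.samples) >= 10:  # require a minimal baseline before flagging
            mu = mean(self.samples)
            sigma = stdev(self.samples)
            anomalous = sigma > 0 and abs(response_ms - mu) > self.k * sigma
        else:
            anomalous = False
        self.samples.append(response_ms)
        return anomalous

baseline = ResponseTimeBaseline()
for ms in [52, 48, 50, 51, 49, 50, 53, 47, 50, 51, 49, 50]:
    baseline.observe(ms)          # builds a baseline of roughly 50 ms
print(baseline.observe(400))      # a 400 ms spike is flagged: prints True
```

In practice a production tool would track many metrics per application and feed anomalies into root cause analysis, but the core pattern – baseline, then deviation detection – is the same.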
One of the newest entrants to this field is a company called ThousandEyes. The company’s platform keeps track of routing functions and other performance metrics through the Border Gateway Protocol (BGP), offering deep path analysis, as well as latency and bandwidth management, to reduce the enterprise’s reliance on the cloud provider to assess the health of the application environment. It also contains a data-sharing tool that allows performance results to be shared among multiple users as a means to identify and resolve problems more quickly.
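The underlying principle – probing the path to a service yourself rather than trusting the provider’s dashboard – can be sketched in a few lines. The snippet below measures TCP handshake latency to an endpoint; it is a bare-bones illustration of active latency probing, not the ThousandEyes platform, and the hostnames are placeholders.

```python
import socket
import time

def tcp_connect_latency(host, port=443, timeout=3.0):
    """Measure TCP handshake latency to a host in milliseconds; None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:  # covers DNS failures, timeouts and refused connections
        return None

# Probe a few endpoints the application depends on (hypothetical hosts).
for host in ["example.com", "api.example.net"]:
    latency = tcp_connect_latency(host)
    status = f"{latency:.1f} ms" if latency is not None else "unreachable"
    print(f"{host}: {status}")
```

Running such probes from multiple vantage points over time yields the kind of independent latency and reachability data an enterprise can use to hold its cloud provider to account.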
As long as the enterprise has a stake in the overall health of the data environment, its management responsibilities will remain. But that isn’t to say they won’t change over time, particularly now that more and more of the data infrastructure at its disposal is owned and operated by someone else.
The key question, though, is whether application performance management will prove any less a burden than traditional infrastructure management, and on that score the jury is still very much out. It’s nice to think that the cloud will produce a smooth-running, eminently flexible data environment in which troublesome issues can be quickly pushed aside, but it probably won’t be that simple.