In the mad rush to move enterprise infrastructure to the cloud, it seems that little thought is being given to whether the transition will actually deliver higher performance.
To be sure, the cloud will provide increased scalability and flexibility, making it easier to build a suitable data environment for key applications and to match data loads more closely with available resources. But at the end of the day, will it make knowledge workers more productive?
No matter how you design it, a distributed architecture always faces increased lag and other performance penalties compared to local systems. This was true when applications moved from the desktop to the central repository, and it was compounded when they were placed on a wide-area basis. Some cloud providers are hoping to compensate by deploying state-of-the-art technologies within their own data centers. Rackspace, for example, has added new Performance Cloud Servers to its lineup, featuring Intel Xeon E5 processors, RAID 10 SSD storage, up to 120 GB of RAM and 10GbE networking, on the theory that faster turnaround within its own infrastructure will push service performance as close to enterprise quality as possible.
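That wide-area penalty is easy to observe firsthand. As a rough sketch (the two hostnames below are hypothetical stand-ins for a LAN file server and a cloud endpoint, not real systems), a few lines of Python can compare TCP round-trip times between a local resource and a remote one:

```python
import socket
import time

def tcp_round_trip_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average TCP connect time in milliseconds -- a crude proxy for network lag."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connect/teardown only; we just want the handshake time
        total += time.perf_counter() - start
    return (total / samples) * 1000

# Hypothetical endpoints: a server on the local LAN vs. a public cloud region.
for label, host in [("local", "fileserver.corp.local"),
                    ("cloud", "app.us-east-1.example.com")]:
    print(f"{label}: {tcp_round_trip_ms(host):.1f} ms")
```

Even a well-tuned wide-area link typically adds tens of milliseconds per round trip, and that cost compounds quickly for chatty applications that make many small requests.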
But aside from anecdotal reports from users, is there any way for the enterprise to independently verify actual cloud performance? New generations of cloud analytics are offering deeper insight into the inner workings of top provider platforms, providing not only ways to improve performance but also to compare various services against each other. RISC Networks’ Cloud Readiness Analytics system, for example, provides feedback on numerous metrics like application usage, infrastructure dependencies, and I/O workloads in order to identify the IaaS configuration best suited to enterprise needs. In this way, organizations stand to substantially lower service costs by more closely matching available cloud inventory with actual data and application usage.
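The core matching idea is straightforward even if the analytics behind it are not. Here is a minimal sketch: given observed peak resource usage, pick the cheapest catalog entry that covers it. The instance names, specs and prices below are purely illustrative, not any provider's actual inventory:

```python
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    vcpus: int
    ram_gb: int
    hourly_usd: float

# Illustrative catalog -- real provider inventory would be far larger.
CATALOG = [
    InstanceType("small", 2, 4, 0.05),
    InstanceType("medium", 4, 16, 0.19),
    InstanceType("large", 8, 32, 0.38),
]

def cheapest_fit(peak_vcpus: float, peak_ram_gb: float) -> InstanceType | None:
    """Cheapest instance whose capacity covers the observed peak workload."""
    candidates = [i for i in CATALOG
                  if i.vcpus >= peak_vcpus and i.ram_gb >= peak_ram_gb]
    return min(candidates, key=lambda i: i.hourly_usd, default=None)

print(cheapest_fit(peak_vcpus=3, peak_ram_gb=12))  # -> medium, not large
```

The savings come from the same place in the real systems: measured peaks, not guesswork, drive the sizing decision, so nobody pays for the large tier to run a medium workload.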
Ultimately, however, cloud performance comes down to how well it supports the application, which is why companies like ExtraHop Networks are re-engineering application performance platforms for the cloud. The company recently launched ExtraHop for AWS to focus on the wire data that is constantly moving between instances, services and users in Amazon environments. In this way, the enterprise gains a single view into how applications are functioning across on-premises and public cloud architectures. At the same time, it provides deeper insight into metrics like processing time and network latency than the standard measurements available from tools like Amazon CloudWatch.
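For comparison, here is what the standard CloudWatch side of that equation looks like: a sketch using the boto3 library to pull average load-balancer latency over the past hour. The load balancer name is hypothetical, and credentials and region are assumed to come from the usual AWS configuration. Note that these are coarse, period-averaged numbers, which is precisely the gap that per-transaction wire-data analysis aims to fill:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-app-elb"}],  # hypothetical name
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                        # 5-minute buckets
    Statistics=["Average", "Maximum"],
)

# Latency is reported in seconds; print it as milliseconds per bucket.
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"],
          f'{point["Average"] * 1000:.0f} ms avg,',
          f'{point["Maximum"] * 1000:.0f} ms max')
```

Five-minute averages are fine for spotting trends, but they can hide the slow outlier transactions that individual users actually experience.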
Meanwhile, a start-up called CopperEgg has hit the channel with an application performance monitoring (APM) system that is said to gauge the actual user experience across distributed cloud environments. The Real User Monitoring (RUM) solution for cloud-based applications aims for full visibility across servers, websites, applications and end users as they traverse public, private and hybrid infrastructure. The SaaS-based system provides AJAX monitoring and Apdex configuration and reporting capabilities, as well as a self-adapting architecture that automatically adjusts to changes in operating environments, even in highly dynamic, burst-style cloud deployments. It also integrates with a wide range of third-party monitoring and analytics tools, including Chef, Puppet and GitHub.
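Apdex itself is worth a quick illustration, since it is the industry-standard way to boil raw response times down to a single user-satisfaction score: given a target time T, requests at or under T count as satisfied, those up to 4T as tolerating (weighted half), and the rest as frustrated. A short worked example, with sample timings invented for illustration:

```python
def apdex(response_times_ms: list[float], target_ms: float) -> float:
    """Apdex score: (satisfied + tolerating / 2) / total samples."""
    satisfied = sum(1 for t in response_times_ms if t <= target_ms)
    tolerating = sum(1 for t in response_times_ms
                     if target_ms < t <= 4 * target_ms)
    return (satisfied + tolerating / 2) / len(response_times_ms)

# Invented sample: 5 satisfied (<= 500 ms), 2 tolerating (<= 2000 ms), 1 frustrated.
samples = [120, 250, 480, 900, 1600, 2100, 350, 95]
print(f"Apdex(T=500ms) = {apdex(samples, 500):.2f}")  # (5 + 2/2) / 8 = 0.75
```

A score of 1.0 means every user was satisfied; anything below roughly 0.85 is conventionally read as a signal that performance needs attention.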
Establishing a presence on the cloud should be a high priority for any enterprise, but not if it means IT is blind to the workings of substantial portions of the data environment. Cloud providers have every reason in the world to deliver rock-solid performance, but it is up to the enterprise to verify that it is getting everything it pays for.