Cloud Benchmarking: Helping the Enterprise Look Before it Leaps


    The enterprise is under the gun to quickly ramp up its efforts to implement a working cloud infrastructure, if only to bring some semblance of control over what has been so far a user-driven phenomenon.

    But while efforts to convert legacy infrastructure into cloud architectures are ongoing, the need to tap into public cloud resources is growing. And this leads to a problem, because not all cloud services are the same, and the drive to develop adequate standards, benchmarks and other means of comparing cloud services is still nascent.

    Fortunately, the federal government is on the case, or at least it thinks it should be. At a recent Amazon Web Services conference in Washington, top Health and Human Services IT honcho Frank Baitman, while praising the work Amazon has done for the agency, expressed a need for common standards among cloud providers so the government can properly assess the services of each before buying. Such a scheme would go a long way toward allowing the government to support multiple vendors as it strives to offload as much data and infrastructure as possible onto the cloud.

    It’s a nice idea, but as we’ve seen from past practices, it is very difficult for industry players to cooperate on initiatives aimed at pointing out the relative strengths and weaknesses of their platforms. A much better method is to establish independent reviewers who can rate services on an impartial basis, preferably using both professional evaluation and user feedback in their determinations.

    A company called Cedexis, for example, has established the Radar cloud and mobile app performance benchmarking service, which seeks to evaluate functionality across local and distributed content delivery networks. The service features crowd-sourced collaboration techniques with real-time collection and analysis of end-user Web and mobile performance. In this way, members can compare multiple services on a range of parameters, such as load times, configuration change capabilities and SLA performance. Results can be assessed using real end-user measurements with percentage-based median or standard deviation methods.
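    The kind of aggregation such a service performs can be sketched in a few lines. This is a minimal illustration, not Radar's actual method: the provider names and latency figures below are invented, and the summary statistics (median and standard deviation) simply mirror the methods mentioned above.

    ```python
    import statistics

    # Hypothetical crowd-sourced page-load samples (in ms) per provider;
    # names and numbers are illustrative only.
    samples = {
        "provider_a": [112, 98, 130, 105, 120, 99, 143],
        "provider_b": [140, 155, 138, 160, 149, 151, 162],
    }

    def summarize(latencies):
        """Return the median and standard deviation of a latency sample."""
        return {
            "median_ms": statistics.median(latencies),
            "stdev_ms": round(statistics.stdev(latencies), 1),
        }

    for provider, latencies in samples.items():
        print(provider, summarize(latencies))
    ```

    A real service would of course weight samples by geography, time of day and network type before comparing providers, but the principle is the same: reduce thousands of end-user observations to a small set of comparable numbers.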

    Application management platforms are getting into the benchmarking game, as well. CliQr Technologies, for example, has added a benchmarking feature to its management stack that allows users to gauge the price-performance variations that exist among commercial cloud providers. According to the company, there can be as much as a five-fold difference in performance between providers for job-based and n-tier applications, such as Hadoop, batch or Web apps. The service lets users specify variables for application types and delivers detailed reporting on multiple performance metrics across providers, instance types and supported architectures.
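    A price-performance comparison of this sort boils down to dividing measured throughput by cost. The sketch below is a simplified illustration under invented numbers, not CliQr's implementation; the provider names, throughput figures and hourly prices are all hypothetical.

    ```python
    # Illustrative price-performance comparison; all figures are made up.
    offers = {
        "provider_a": {"jobs_per_hour": 500, "price_per_hour": 1.25},
        "provider_b": {"jobs_per_hour": 900, "price_per_hour": 4.50},
    }

    def price_performance(offer):
        """Jobs completed per dollar spent -- higher is better."""
        return offer["jobs_per_hour"] / offer["price_per_hour"]

    best = max(offers, key=lambda name: price_performance(offers[name]))
    for name, offer in offers.items():
        print(f"{name}: {price_performance(offer):.0f} jobs per dollar")
    print("best price-performance:", best)
    ```

    Note how the raw-performance leader is not automatically the value leader: in this invented example the faster provider costs enough more per hour that the slower one wins on jobs per dollar, which is exactly the kind of gap a benchmarking report surfaces.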

    In this age of self-service applications, of course, it should come as no surprise that there is one for the cloud, as well. Compare the Cloud is an independent comparative Web site that recently added a novel feature called CloudPitch, in which providers get 150 words or fewer to tell prospective clients why their service is optimal for particular business objectives. Users then have the ability to vote pitches up or down, which ultimately raises or lowers the provider’s profile on the Web site. The site’s backers say this approach not only helps users differentiate various cloud services, but also allows providers to gauge more accurately what kinds of services are seeing the highest demand.

    Like traditional IT benchmarking, the new cloud-facing services aren’t designed to weed out the bad from the good but to more closely match the plethora of available services with the requirements of individual application loads. A service optimized for unstructured data, for example, will probably not be very effective for database queries.

    Cloud providers, like any other vendor, are under pressure to increase revenues, which often leads salespeople to fudge a little when describing their services’ ability to fulfill requirements. With new benchmarking tools at their disposal, enterprises at least have a way to validate claims before services are launched and migration is underway.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
