Service Level Agreements
One of the first steps in choosing a cloud service provider consists of evaluating the level of service offered and the guarantees behind that service. That information is contained in a service level agreement (SLA). Evaluating SLAs can be uncomfortable for many IT managers; after all, most SLAs are filled with legalese and contractual language that can make it difficult to quantify what exactly a vendor is offering.
Further complicating things is that most SLAs are written to protect the vendor, and not so much the customer. Most vendors create SLAs as a shield against litigation, while offering customers minimal assurances. That said, SLAs still can be a powerful tool for IT managers looking to choose a cloud vendor and arrange for the best services.
IT managers need to focus on three areas within SLAs: data protection, continuity and costs. Arguably, data protection is the most important element to understand: IT managers need to ensure there are clear definitions of who has access to the data and what protections are in place. At first blush, determining levels of protection seems straightforward, but there are hidden issues to be aware of, and IT managers must perform due diligence to address them.
It all comes down to who ultimately has control of the customer's proprietary data and how that data is accessed. SLAs should outline the vendor's infrastructure and what services are used to provide persistent access to needed applications and data sets. No vendor will guarantee access 100 percent of the time, simply because there are issues beyond their control and some maintenance chores will require downtime. At best, most service providers offer an assurance of 99.5 percent uptime. Even so, IT managers will need to ask, 'What happens if service is interrupted?'
When evaluating an SLA, it is helpful to ask targeted questions in each of the three areas: how outages are handled and what remedies apply, how the data is secured and who can access it, and how the costs are structured.
Getting the answers to these questions should simplify choosing a cloud services provider and should set the stage for expectations and costs surrounding the selected service.
Measurement and Performance
Validating the answers to these questions, and demonstrating that goals are met, requires monitoring and measuring the solution's performance and its impact on business processes. While the efficiency of the service may be difficult to judge because of the human element, measuring overall performance is rather straightforward with the tools and services readily available to the modern enterprise.
IT managers can turn to the Keynote Internet Testing Environment (KITE) and Internet Health Report to measure performance. Keynote maintains more than 3,000 servers and PCs at 200 sites in 59 countries, which are used to monitor real-world Internet performance for the Internet Health Report. The performance metrics are based on actual traffic on the Web. Administrators are able to use Keynote's services to monitor uptime, latency and packet loss.
More specific information is available from KITE, a desktop application that can be used to monitor the performance of individual Web sites. Combining the information from the Internet Health Report with the metrics uncovered by KITE should give a good indication of the performance offered by a hosted service provider and should help to pinpoint any bottlenecks. Keynote offers most of its services at no charge, aiming to entice users to try the company's more advanced, pay-for-play products.
Of course, Keynote is not the only game in town, but finding other players will require narrowing down which performance metrics to monitor. For example, Dynatrace offers a suite of cloud performance-monitoring tools, but the company's tool set is aimed squarely at the Java and .NET crowd and is used to measure application performance. Another testing option comes from CapCal, which offers tools that simulate user access to cloud-based applications to measure performance under varying loads. CapCal offers a 20-user stress-test service for free, while more advanced tests come at a price.
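The core idea behind a load test of this kind can be sketched in a few lines. The Python sketch below is an illustration, not CapCal's actual tooling: it spins up a pool of simulated users, each issuing timed requests through a caller-supplied `request_fn` (which could wrap an HTTP call to the application under test), and summarizes the observed latencies.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(request_fn, users=20, requests_per_user=5):
    """Simulate `users` concurrent users, each making `requests_per_user`
    calls to request_fn, and return simple latency statistics."""
    def one_user(_):
        latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()  # e.g. an HTTP GET against the cloud application
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=users) as pool:
        per_user = pool.map(one_user, range(users))

    all_latencies = [t for user in per_user for t in user]
    return {
        "requests": len(all_latencies),
        "avg_ms": 1000 * sum(all_latencies) / len(all_latencies),
        "max_ms": 1000 * max(all_latencies),
    }
```

Running the same test at increasing `users` counts shows how latency degrades under load, which is the behavior a commercial stress-test service ultimately reports.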
Of course, administrators can develop their own monitoring and testing tools or rely on the tools provided by the cloud services vendor. For example, most administrators will find testing cloud-based storage providers relatively straightforward, simply by creating batch files that measure the speed of file uploads and downloads.
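Such a homegrown timing test might look like the following Python sketch (the function name and reported metrics are illustrative, and `target_dir` is assumed to be a folder synced to, or mounted from, the storage provider):

```python
import os
import time

def measure_transfer(target_dir, size_mb=8):
    """Time a write ("upload") and read-back ("download") of a test file
    placed in target_dir, returning throughput in MB per second."""
    payload = os.urandom(size_mb * 1024 * 1024)
    path = os.path.join(target_dir, "sla_speed_test.bin")

    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # force the data out of OS buffers
    upload_s = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    download_s = time.perf_counter() - start

    os.remove(path)  # clean up the test file
    return {
        "upload_MBps": size_mb / upload_s,
        "download_MBps": size_mb / download_s,
        "intact": data == payload,  # verify nothing was corrupted in transit
    }
```

Run on a schedule, a script like this builds a history of throughput numbers that can be compared directly against the uptime and performance promises in the SLA.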
The trick to testing a cloud services provider comes down to knowing what to measure. For most deployments, that means packet transmission speed, packet loss and response latency, all of which can be determined by using tools that monitor traffic. In some cases, especially when voice or video data is involved, administrators may also have to measure elements such as jitter, frame rates and throughput.
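Given a list of probe results (for instance, round-trip times collected with ping or a traffic monitor), the core metrics reduce to simple arithmetic. A sketch, treating `None` as a lost packet and approximating jitter as the mean gap between consecutive round-trip times:

```python
def summarize_probes(samples_ms):
    """Summarize probe results: samples_ms is a list of round-trip
    times in milliseconds, with None marking a lost probe."""
    received = [s for s in samples_ms if s is not None]
    loss_pct = 100.0 * (len(samples_ms) - len(received)) / len(samples_ms)
    avg_ms = sum(received) / len(received) if received else None
    # Jitter approximated as the mean absolute difference between
    # consecutive round-trip times.
    diffs = [abs(a - b) for a, b in zip(received, received[1:])]
    jitter_ms = sum(diffs) / len(diffs) if diffs else 0.0
    return {"loss_pct": loss_pct, "avg_ms": avg_ms, "jitter_ms": jitter_ms}
```

The thresholds that matter depend on the workload: a few percent packet loss may be invisible to a file-sync service but unacceptable for voice traffic.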
Ideally, most administrators will turn to a combination of several tools to measure performance for their specific cloud implementation. The important thing to remember is not only to measure, but also to compare those performance elements against the system being replaced by the new services and to report on the results frequently.
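That comparison against the replaced system can be as simple as computing the percentage change for each shared metric. A minimal sketch (the metric names are illustrative):

```python
def compare_to_baseline(baseline, current):
    """Percent change per metric versus the replaced system's baseline;
    a negative value means the new service scored lower on that metric."""
    return {
        key: round(100.0 * (current[key] - baseline[key]) / baseline[key], 1)
        for key in baseline
        if key in current and baseline[key]
    }
```

Feeding each reporting period's numbers through a comparison like this turns raw measurements into the kind of trend report that demonstrates, in business terms, whether the move to the cloud delivered.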