Application Performance Management (APM) is defined as the monitoring and management of the performance – speed, availability, and reliability – of software applications. APM strives to detect and diagnose application problems in order to maintain an expected level of service for users. In recent years, the definition of “user” has evolved beyond internal users (i.e., employees using applications to do their jobs) to increasingly mean external customers using web-based applications.
Today’s organizations leverage a variety of services to deliver stronger, more feature-rich and more satisfying digital experiences to customers, often with the aim of driving more conversions. Examples include content delivery networks (CDNs), social media plug-ins and marketing analytics. One nasty by-product of all these services is increased complexity – more elements are “standing” between an organization and its customers than ever before, and each one represents a potential point of failure that can degrade an entire experience.
Modern customers have no patience for websites, mobile sites and applications that are slow or unreliable. According to Nielsen Norman Group, even an extra second or two of delay in load time can create an unpleasant user experience, causing a transaction-oriented site to lose sales. If customers have a poor experience with a brand, they don’t care who or what third-party element may be the cause; it is the brand itself that will take the reputation hit. Given increased web complexity, APM isn’t as clear-cut as it used to be, and IT teams can no longer assume that just because the servers within their walls are up and running, their customer experiences are free of hiccups. Today’s APM strategies need to be much more extensive, with a strong customer experience being the ultimate measure of success. Against this backdrop, Catchpoint Systems has identified six key points organizations should consider as they evolve their APM strategies for the digital business era.
Evolving APM Strategies
Get Close to Users
Get as close to real users as possible.
The performance and availability of websites, mobile sites and applications tend to degrade the further away a user is from a company’s data center. For this reason, getting the most accurate view of real-world experiences depends on measuring performance and availability as geographically close to users as possible. In other words, an organization cannot monitor applications only from its data center in North America and assume that users in China, Germany, or other faraway regions are having a great experience. Web complexity is again to blame – the further a user is from the data center, the more elements (CDNs, regional and local ISPs, caching services and more) there are that can impact the user’s last-mile experience.
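As an illustration, a geographically distributed check might run the same timed request from agents near each user population and flag the region faring worst. The sketch below is a minimal, vendor-neutral assumption of how such a check could look; the region names are made up, and `fetch` is injected so the example stays self-contained rather than calling a real site.

```python
import time
from typing import Callable, Dict

def timed_check(fetch: Callable[[], bytes]) -> Dict:
    """Run one synthetic check, reporting success and latency.

    In practice `fetch` would issue a real HTTP GET from a monitoring
    agent deployed near users; it is injected here so the sketch stays
    self-contained and testable.
    """
    start = time.perf_counter()
    try:
        body = fetch()
        ok, size = True, len(body)
    except Exception:
        ok, size = False, 0
    return {"ok": ok, "bytes": size,
            "latency_ms": (time.perf_counter() - start) * 1000.0}

def worst_region(results: Dict[str, Dict]) -> str:
    """Return the region whose users are likely suffering most:
    any failing region first, otherwise the slowest one."""
    def badness(item):
        _, r = item
        # Failures (ok=False) sort before successes; then higher latency first.
        return (r["ok"], -r["latency_ms"])
    return min(results.items(), key=badness)[0]
```

Running `timed_check` from a single North American vantage point would miss exactly the failures this slide warns about; the value comes from running it per region and comparing.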
Don’t Forget Internal Users
A strong customer experience is imperative – but don’t forget internal users.
While customers are the focal point of digital transformation, don’t forget the importance of ensuring high-performing, highly available applications for employees, particularly those in remote offices far from the data center. Many of these applications ultimately serve a customer-facing purpose (for instance, a bank teller’s app in a remote branch), and their poor performance can hurt customers’ brand perception and diminish worker productivity just as much as a poorly performing web app can.
Reactive Monitoring Is Dead
Reactive application monitoring is dead.
Traditional approaches to APM emphasized detecting and diagnosing problems. Today this is no longer good enough. By the time a performance issue has occurred, it is too late, with customers taking to social networks to vent their frustration. In addition, increased IT complexity makes it harder than ever to identify and pinpoint the source of problems. As companies evolve their APM strategies, they need to consider combining a wealth of historical data with advanced analytics, enabling them to proactively identify growing hot spots and prevent problems from happening in the first place. For example, does a particular application slow down on a certain day, time of year, or in a particular geography? In addition, these analytics must be able to precisely identify the source of problems. Without this capability, organizations will find themselves drowning in data, but with no real actionable insights.
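A deliberately simple stand-in for such analytics, assuming latency samples have already been labeled with a recurring bucket such as a day-and-hour slot or a geography (the bucket labels and threshold factor below are illustrative, not any vendor’s method), could look like this:

```python
from collections import defaultdict
from statistics import median

def seasonal_hot_spots(samples, factor=1.5):
    """Flag recurring slow periods in historical response-time data.

    `samples` is a list of (bucket, latency_ms) pairs, where `bucket` is
    any recurring label -- e.g., "Mon-09h" for a day-and-hour slot, or a
    geography. A bucket is flagged when its median latency exceeds the
    overall median by `factor`, surfacing questions like "does this app
    slow down every Monday morning?" before users complain.
    """
    by_bucket = defaultdict(list)
    for bucket, ms in samples:
        by_bucket[bucket].append(ms)
    overall = median(ms for _, ms in samples)
    return sorted(bucket for bucket, vals in by_bucket.items()
                  if median(vals) > factor * overall)
```

Because the output names a specific bucket rather than just raising an alarm, it points at a source to investigate instead of leaving teams drowning in data.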
IT Operational Excellence
IT operational excellence takes on a new meaning.
IT operational excellence used to mean delivering “good enough” application performance and reliability with the leanest possible set of fully utilized resources. In the new, customer-centric digital business paradigm, IT operational excellence needs to be redefined as optimizing IT to support the best possible customer experience. For example, in a virtualized environment, there may be a certain level of CPU utilization (less than 100 percent) beyond which application performance begins to suffer. In that case, 100 percent utilization is not the ideal. The advanced analytics capabilities described previously can help organizations discover these thresholds. IT operational excellence, in the context of supporting the best possible customer experience, is a significant benefit of an evolved APM strategy.
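To make the threshold idea concrete, here is a minimal sketch, with hypothetical numbers, of finding the lowest utilization level at which a latency service-level objective is breached; the function and SLO value are assumptions for illustration only:

```python
def degradation_threshold(samples, slo_ms):
    """Estimate the utilization level beyond which performance suffers.

    `samples` is a list of (cpu_pct, latency_ms) observations gathered
    over time. Returns the lowest CPU utilization at which the latency
    SLO was breached, or None if it never was. Past that point, squeezing
    more out of the hardware costs more in customer experience than it
    saves in resources.
    """
    breaches = [cpu for cpu, ms in samples if ms > slo_ms]
    return min(breaches) if breaches else None
```

With observations like `(85, 480)` against a 300 ms SLO, the sketch would report 85 percent as the ceiling: the point where “fully utilized” stops meaning “excellent.”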
Control Third-Party Providers
Get your third-party providers under control.
While third-party services are intended to deliver richer, more satisfying customer experiences that help drive conversions, they can wreak havoc on a business if not properly managed. Some third-party services like marketing analytics are mandatory, but others may not be. There’s no point in offering customers “nice to have” functionality if it prevents them from accessing the web page in the first place. So another key component of modern APM is to parse out various third-party services in real time, see how they’re impacting customers, and modify or quickly remove them if necessary. The advanced analytics mentioned earlier can also help organizations drill down and see when a third-party service is a root cause of a performance issue.
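One hedged sketch of parsing out third-party impact: aggregate per-host load cost from Resource Timing-style entries and rank third-party hosts worst-first. The first-party domain and vendor hostnames below are invented for illustration; real implementations would read these entries from the browser’s Resource Timing API.

```python
from collections import defaultdict
from urllib.parse import urlparse

FIRST_PARTY = "www.example-shop.com"  # hypothetical brand domain

def third_party_impact(resource_timings):
    """Rank third-party hosts by their total load-time cost on a page.

    `resource_timings` mimics (url, duration_ms) entries such as those
    exposed by the browser's Resource Timing API. Hosts other than the
    first-party domain are aggregated and sorted worst-first, making
    "nice to have" tags that hurt the page easy to spot.
    """
    cost = defaultdict(float)
    for url, ms in resource_timings:
        host = urlparse(url).netloc
        if host != FIRST_PARTY:
            cost[host] += ms
    return sorted(cost.items(), key=lambda kv: kv[1], reverse=True)
```

A list sorted worst-first gives teams an immediate candidate to defer, modify, or remove when a page is struggling.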
Combine Synthetic and Real-User Monitoring
Synthetic and real-user monitoring are both necessary to deliver the full picture.
Synthetic monitoring, which monitors website availability and performance by generating synthetic-user traffic from cloud resources in various geographies, can provide a measure of peace of mind. Companies know their websites, mobile sites and applications are available and can understand load times for users across a wide range of geographies. However, synthetic monitoring does not tell the whole story, because it does not show what users are doing – and what they’re experiencing – within the site or application, especially for infrequent actions or paths.
Real-user monitoring can supplement this view by helping companies understand their customers’ most common landing pages and conversion paths, and which parts of their sites should be prioritized for optimization. However, it can be a mistake to rely on real-user monitoring alone, as it does not provide the most comprehensive, accurate picture of web page and application response time. Case in point: if a website or application takes a few extra seconds to load, some real users will likely abandon it, and their experience will never be “counted.” Worse, relying on real-user monitoring to detect performance problems means waiting until customers are already frustrated before investigating. By uncovering issues early, before their impact becomes damaging, synthetic monitoring can keep them from degrading the customer experience.
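The complementary roles can be sketched numerically: compute the same percentile from real-user beacons and from synthetic runs, remembering that users who abandon a slow page never send a beacon at all. This is a minimal assumption-laden sketch (a simple nearest-rank percentile; the function names are illustrative), not a description of any product’s analytics.

```python
import math

def percentile(values, p):
    """Nearest-rank percentile, as commonly reported on RUM dashboards."""
    ordered = sorted(values)
    idx = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[idx]

def compare_views(rum_ms, synthetic_ms):
    """Contrast real-user and synthetic medians for the same page.

    If the synthetic median sits well above the RUM median, survivorship
    bias may be at work: the slowest real users abandoned the page before
    a timing beacon ever fired, so RUM alone understates the problem.
    """
    return {"rum_p50": percentile(rum_ms, 50),
            "synthetic_p50": percentile(synthetic_ms, 50)}
```

Reading the two medians side by side is one simple way to catch the abandonment blind spot the text describes: synthetic numbers keep measuring even when frustrated users have already left.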