Recently, IBM struck a deal to acquire Databand.ai, which develops software for data observability. The purchase amount was not disclosed. The acquisition nonetheless underscores the importance of observability, as IBM has acquired similar companies over the past couple of years.
“Observability goes beyond traditional monitoring and is especially relevant as infrastructure and application landscapes become more complex,” said Joseph George, Vice President of Product Management, BMC. “Increased visibility gives stakeholders greater insight into issues and user experience, reducing time spent firefighting, and creating time for more strategic initiatives.”
Observability is an enormous category. It encompasses log analytics, application performance monitoring (APM), and cybersecurity, and the term has also been applied in other IT areas, such as networking. Spending on APM alone, for example, is expected to hit $6.8 billion by 2024, according to Gartner.
So then, what makes observability unique? And why is it becoming a critical part of the enterprise tech stack? Well, let’s take a look.
How Observability Works
The ultimate goal of observability is to go well beyond traditional monitoring capabilities by giving IT teams the ability to understand the health of a system at a glance.
An observability platform has several important functions. One is to find the root causes of a problem, which could be a security breach or a bug in an application. In some cases, the system will offer a fix. Sometimes an observability platform will make the corrections on its own.
“Observability isn’t a feature you can install or a service you can subscribe to,” said Frank Reno, Senior Product Manager, Humio. “Observability is something you either have, or you don’t. It is only achieved when you have all the data to answer any question about the health of your system, whether predictable or not.”
The traditional approach is to crunch huge amounts of raw telemetry data and analyze it in a central repository. However, this could be difficult to do at the edge, where there is a need for real-time solutions.
“An emerging alternative approach to observability is a ‘small data’ approach, focused on performing real-time analysis on data streams directly at the source and collecting only the valuable information,” said Shannon Weyrick, vice president of research, NS1. “This can provide immediate business insight, tighten the feedback loop while debugging problems, and help identify security weaknesses. It provides consistent analysis regardless of the amount of raw data being analyzed, allowing it to scale with data production.”
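To make the "small data" idea concrete, here is a minimal sketch (not NS1's actual implementation; the class name, window size, and z-score threshold are all illustrative assumptions) of an agent that analyzes a metric stream at the source, forwarding only periodic rollups and outliers instead of every raw data point:

```python
from collections import deque

class EdgeAggregator:
    """Hypothetical source-side analyzer: summarizes a metric stream locally,
    forwarding only aggregates and outliers instead of raw telemetry."""

    def __init__(self, window=100, z_threshold=3.0):
        self.window = deque(maxlen=window)  # recent values only; bounded memory
        self.z_threshold = z_threshold      # how unusual a value must be to forward

    def observe(self, value):
        """Returns an event worth forwarding upstream, or None to drop the point."""
        if len(self.window) >= 2:
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = var ** 0.5
            if std > 0 and abs(value - mean) / std > self.z_threshold:
                self.window.append(value)
                return {"type": "anomaly", "value": value, "mean": mean}
        self.window.append(value)
        return None

    def summary(self):
        """Periodic rollup sent to the central platform instead of raw data."""
        n = len(self.window)
        mean = sum(self.window) / n if n else 0.0
        return {"type": "rollup", "count": n, "mean": mean}
```

Because the work per data point is constant, the analysis cost stays flat no matter how much raw telemetry the source produces, which is the scaling property Weyrick describes.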
The Levers for Observability
The biggest growth factor for observability is the strategic importance of software. It’s become a must-have for most businesses.
“Software has become the foundation for how organizations interact with their customers, manage their supply chain, and are measured against their competition,” said Patrick Lin, VP of Product Management for Observability, Splunk. “Particularly as teams modernize, there are a lot more things they have to monitor and react to — hybrid environments, more frequent software changes, more telemetry data emitted across fragmented tools, and more alerts. Troubleshooting these software systems has never been harder, and the way monitoring has traditionally been done just doesn’t cut it anymore.”
The typical enterprise has dozens of traditional tools for monitoring infrastructure, applications, and digital experiences. The result is data silos, which lessen the effectiveness of those tools and, in some cases, can lead to catastrophic failures or outages.
But with observability, the data is centralized. This allows for more visibility across the enterprise.
“You get to root causes quickly,” said Lin. “You understand not just when an issue occurs but what caused it and why. You improve mean time to detection (MTTD) and mean time to resolution (MTTR) by proactively detecting emerging issues before customers are impacted.”
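As a minimal sketch of the two metrics Lin mentions (the incident records below are invented for illustration), MTTD and MTTR can be computed from each incident's start, detection, and resolution timestamps:

```python
from datetime import datetime

def mttd_mttr(incidents):
    """Computes mean time to detection and mean time to resolution,
    in minutes, from a list of incident records."""
    detect = [(i["detected"] - i["started"]).total_seconds() / 60 for i in incidents]
    resolve = [(i["resolved"] - i["started"]).total_seconds() / 60 for i in incidents]
    return sum(detect) / len(detect), sum(resolve) / len(resolve)

# Hypothetical incidents: detected 10 and 20 minutes in,
# resolved 40 and 60 minutes in.
incidents = [
    {
        "started": datetime(2022, 7, 1, 9, 0),
        "detected": datetime(2022, 7, 1, 9, 10),
        "resolved": datetime(2022, 7, 1, 9, 40),
    },
    {
        "started": datetime(2022, 7, 2, 14, 0),
        "detected": datetime(2022, 7, 2, 14, 20),
        "resolved": datetime(2022, 7, 2, 15, 0),
    },
]
# MTTD = (10 + 20) / 2 = 15 minutes; MTTR = (40 + 60) / 2 = 50 minutes
```

Proactive detection shrinks the first interval; faster root-cause analysis shrinks the second.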
Of course, observability is not a silver bullet. The technology certainly has downsides and risks.
In fact, one of the nagging issues is the hype factor. This could ultimately harm the category. “There is a significant amount of observability washing from legacy vendors, driving confusion for end users trying to figure out what observability is and how it can benefit them,” said Nick Heudecker, Senior Director of Market Strategy & Competitive Intelligence, Cribl.
True, this is a problem with any successful technology. But customers definitely need to do their due diligence.
Observability is also not a plug-and-play technology. There is a need for change management, and you must have a highly skilled team to get the most from the technology.
“The biggest downside of observability is that someone – such as an engineer or a person from DevOps or the site reliability engineering (SRE) organization — needs to do the actual observing,” said Gavin Cohen, VP of Product, Zebrium. “For example, when there is a problem, observability tools are great at providing access and drill-down capabilities to a huge amount of useful information. But it’s up to the engineer to sift through and interpret that information and then decide where to go next in the hunt to determine the root cause. This takes skill, time, patience and experience.”
However, the growth in artificial intelligence (AI) and machine learning (ML) can help address this. In other words, next-generation tools can help automate the observer role. “This requires deep intelligence about the systems under observation, such as with sophisticated modeling, granular details and comprehensive AI,” said Kunal Agarwal, founder and CEO, Unravel Data.
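One crude way to sketch the idea of automating the observer (this is not any vendor's actual method, just a toy stand-in for the modeling Agarwal describes) is a frequency-based detector that masks variable fields in log lines and flags templates that rarely occur:

```python
import re
from collections import Counter

def template(line):
    """Masks numbers and hex IDs so similar log lines share one template."""
    return re.sub(r"\b(0x[0-9a-f]+|\d+)\b", "<*>", line.lower())

def rare_templates(lines, max_count=1):
    """Flags log lines whose template appears at most max_count times --
    a crude proxy for 'unusual' events worth an engineer's attention."""
    counts = Counter(template(l) for l in lines)
    return [l for l in lines if counts[template(l)] <= max_count]

logs = [
    "connected to db host 10 port 5432",
    "connected to db host 11 port 5432",
    "connected to db host 12 port 5432",
    "segfault in worker 7 at 0xdeadbeef",
]
# The three "connected" lines collapse to one common template;
# only the one-off segfault line is surfaced.
```

Production tools apply far more sophisticated models, but the principle is the same: let software do the first pass of sifting so engineers start from a shortlist rather than raw data.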