The pursuit of data-driven decision making has put tracking, logging, and monitoring at the forefront of the minds of product, sales, and marketing teams. Engineers are generally familiar with gathering and tracking data to maintain and optimize infrastructure and application performance, but other business groups are now clamoring for the latest tools and instrumentation as well. Quite often the cost of implementation is undersold as merely placing a tag on a site or adding a library, without accounting for the additional expense of tracking the aspects unique to the application. To complicate matters, there is usually considerable confusion about the types of data already being captured by the tools a company has in place.
Gaining Insight from Machine Data
Click through for five ways organizations can gain insight from machine data, as identified by Thomas Overton, Sumo Logic Developer Community.
Infrastructure monitoring provides information about individual servers and how they are handling the load of applications and requests. The data includes CPU and memory usage, load, disk I/O, and memory I/O, aggregated for the system or displayed individually by process.
Data is captured either by an agent installed on each server or by a service that connects to machines via SNMP; the data is then sent to a centralized location for visualization.
Why is this important? Understanding how your infrastructure performs allows the engineering team to proactively address issues, set alerts, and assess how to scale the environment and optimize performance.
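The metrics described above are normally gathered by a vendor's agent, but the idea can be sketched with the Python standard library alone. The snippet below is a minimal, illustrative collector (not any vendor's actual agent); the function name and the choice of metrics are my own, and the calls used are Unix-only.

```python
import os
import shutil

def collect_system_metrics(path="/"):
    """Gather a minimal snapshot of host metrics using stdlib calls (Unix-only)."""
    load_1m, load_5m, load_15m = os.getloadavg()  # run-queue load averages
    disk = shutil.disk_usage(path)                # bytes: total, used, free
    return {
        "load_1m": load_1m,
        "load_5m": load_5m,
        "load_15m": load_15m,
        "disk_total_gb": disk.total / 1e9,
        "disk_used_pct": 100 * disk.used / disk.total,
    }

metrics = collect_system_metrics()
```

A real agent would sample metrics like these on an interval and ship them to a central service, where thresholds drive the alerts described above.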
Application monitoring captures information about the performance of an application. The data includes transaction response times, throughput, and error rates. A transaction is the work an application does in response to a request from a user, and its response time comprises any network latency, application processing time, and read/write access to a database or cache.
An agent is installed on each application server, or an SDK is used within native applications. Agentless monitoring solutions exist for applications that serve client requests.
Why is this important? Application monitoring enables the engineering team to diagnose performance issues and track errors within an application. In addition to response time, detailed visibility is given into slow code execution via stack traces, and slow database queries are tracked and logged for investigation. Throughput visualization provides insight on how the application performs under various loads and client side monitoring can demonstrate geographical variances for web applications.
Data contained in requests made to the server is logged to files. Referrer, remote address, user agent, header data, status codes, response data, and more can be captured. Usually an application module, library, or SDK handles recording the requests and formatting the data being logged.
Why is this important? Almost any data within the stack can be logged – from server, application and client performance to user activities. Log data can be filtered, aggregated and then analyzed via visualization tools to assess most aspects of your product and underlying infrastructure.
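Logging request fields in a structured format is what makes the filtering and aggregation described above practical. A minimal sketch using Python's standard `logging` module is shown below; the field names and the `log_request` helper are assumptions for illustration, and real systems typically let a middleware or framework hook populate these values.

```python
import json
import logging

# One JSON object per line is easy for log-analysis tools to parse.
logger = logging.getLogger("access")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_request(remote_addr, user_agent, referrer, status, response_ms):
    """Emit one access-log entry as a JSON line."""
    entry = {
        "remote_addr": remote_addr,
        "user_agent": user_agent,
        "referrer": referrer,
        "status": status,
        "response_ms": response_ms,
    }
    line = json.dumps(entry)
    logger.info(line)
    return line

line = log_request("203.0.113.7", "Mozilla/5.0", "https://example.com/", 200, 42.1)
```

Because every line is valid JSON, downstream visualization tools can filter on any field (status code, referrer, latency) without fragile regex parsing.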
Event tracking records discrete user actions, such as page views, clicks, sign-ups, and purchases, as timestamped events tied to a user. Why is this important? Event data is valuable to the product and marketing teams for understanding how users navigate through an application, as well as any areas of friction in the UX (user experience). Generally the event data is visualized within UX flows as a funnel, with the wide top of the funnel representing the area of the application where most users start their interaction and the narrow part at the bottom being the desired user action.
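The funnel described above can be computed directly from raw events. The sketch below assumes a simple `(user_id, step)` event shape and an invented four-step funnel; real analytics tools add time windows and ordering constraints, which are omitted here for brevity.

```python
from collections import defaultdict

# Funnel steps ordered from the wide top to the narrow bottom.
FUNNEL_STEPS = ["landing", "signup_form", "signup_complete", "purchase"]

def funnel_counts(events):
    """Count users reaching each step, requiring all prior steps as well."""
    steps_by_user = defaultdict(set)
    for user_id, step in events:
        steps_by_user[user_id].add(step)

    counts = {}
    for i, step in enumerate(FUNNEL_STEPS):
        required = set(FUNNEL_STEPS[: i + 1])
        counts[step] = sum(1 for s in steps_by_user.values() if required <= s)
    return counts

events = [
    ("u1", "landing"), ("u1", "signup_form"), ("u1", "signup_complete"),
    ("u2", "landing"), ("u2", "signup_form"),
    ("u3", "landing"),
    ("u1", "purchase"),
]
result = funnel_counts(events)
# Three users land, two start the form, one completes it and purchases.
```

The shrinking counts from step to step are exactly the narrowing funnel the section describes, and each drop-off points at a possible area of UX friction.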
Heat map and session recording tools capture on-page behavior, such as mouse movement, scrolling, and click positions, between the discrete clicks that event tracking records. Why is this important? These tools can give insight into how users are interacting with the pages of your application between clicks, which aids in identifying UX friction. Heat maps indicate the areas of your application with the highest frequency of activity, while recordings can be invaluable for identifying how a set of users experiences the application.
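At its core, a heat map is just click coordinates bucketed into a grid. The sketch below shows that aggregation step under assumed inputs (pixel coordinates and a 100-pixel cell size); rendering the grid as a color overlay is the part the commercial tools add on top.

```python
from collections import Counter

def click_heatmap(clicks, cell=100):
    """Bucket (x, y) click coordinates into cell-by-cell pixel bins."""
    grid = Counter()
    for x, y in clicks:
        grid[(x // cell, y // cell)] += 1  # integer division picks the bin
    return grid

clicks = [(10, 20), (95, 30), (150, 40), (820, 610)]
heat = click_heatmap(clicks)
# Two clicks fall in the top-left cell (0, 0); the others land in their own cells.
```

Cells with the highest counts correspond to the hot spots a heat map visualizes, highlighting which page regions draw the most activity.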