How to Connect Data to Meaningful and Measurable Results
Highlights on building an IT Metrics Correlation Model to gain the full value of your data.
It's almost Easter, so bags of jelly beans are on sale just about everywhere. I recently bought a two-and-a-half pound bag of them, not to fill my family's Easter baskets but to snack on at the office. Sadly, I suspect the bag won't last the week. I recognize this for what it is: a lack of control. I (usually) deal with it by apportioning the candy into much smaller bags before bringing it to work.
I'm certainly not alone in my fondness for too much of a good thing. (If I were, Nabisco would never have created 100 Calorie Packs, "smart-sized" portions of some of its most popular cookies like Chips Ahoy and Lorna Doone.)
In business, companies sometimes overdo it when it comes to metrics. That came through clearly when I spoke with Deeanne King, vice president of customer care for Sprint, and Harold Goldberg, chief marketing officer for Merced Systems, Sprint's partner in a performance management initiative that led to a turnaround in Sprint's once-troubled call center operations.
A big part of the shift around performance management was identifying the ultimate goal of improving the customer experience, then focusing on the key metrics that supervisors and managers can actually control. Everything else comes off their critical, everyday path.
Agents can easily become overwhelmed by too much data, Goldberg told me, and need to see no more than a handful of metrics most relevant to their daily activities.
The company also stopped gathering data via a homegrown software application, which required managers to spend much of their time inputting data into spreadsheets, and switched to Merced performance management software that automated much of the process and offered more sophisticated analytics capabilities. Homegrown systems like the one used by Sprint prevent companies from taking a holistic view of their data, Goldberg said, and are not flexible enough to accommodate the dynamic nature of most organizations. He explained:
If you have an agent who starts in the middle of the month and another who starts in the fourth week of the month and you want to compare their performance over three months, you will have two months and two weeks worth of data for one agent and just two months of data for the other. If you magnify this issue over 50,000 agents, the information you're getting is not going to be accurate if you don't account for these differences and normalize the data.
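Goldberg's point can be sketched in a few lines of code. This is a minimal illustration, not Merced's actual method: the agent names, numbers, and the `normalized_rate` helper are all hypothetical. The idea is simply to divide each agent's total by the days they were actually active, so agents with different start dates can be compared on equal footing.

```python
from datetime import date

def normalized_rate(total_calls_handled, start_date, as_of):
    """Average calls handled per active day, so agents with
    different start dates can be compared fairly."""
    active_days = (as_of - start_date).days
    if active_days <= 0:
        raise ValueError("agent has no active days in the window")
    return total_calls_handled / active_days

# Hypothetical example: one agent started mid-July, another in early August.
veteran = normalized_rate(900, date(2010, 7, 15), date(2010, 10, 15))   # 92 active days
newcomer = normalized_rate(800, date(2010, 8, 1), date(2010, 10, 15))   # 75 active days

# The raw totals favor the veteran (900 vs. 800 calls), but the
# per-day rates reverse the ranking once tenure is accounted for.
print(f"veteran: {veteran:.2f} calls/day, newcomer: {newcomer:.2f} calls/day")
```

Scaled across 50,000 agents, skipping this normalization step means every comparison is quietly biased toward whoever happened to be on the payroll longest during the reporting window.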
Once Sprint zeroed in on the relevant metrics, it conducted a study of its outliers -- the highest and lowest performers -- to identify which behaviors contributed to top performance. It then introduced a coaching program focused on encouraging those behaviors in agents. Said King:
So, for example, if you are trying to improve issue resolution, you have the ability to see the key behaviors that drive improvement in issue resolution. Then through monitoring and coaching sessions, you can provide agents with feedback identifying the behaviors they need to focus on. So it's more about the behavior and less about the metrics. Our experience showed us that when you focus on the behaviors, the metrics will come.
And Sprint's metrics have improved. According to a case study about the performance management initiative, which you can read on Merced's website, Sprint has boosted customer satisfaction and first-call resolution by more than a third each, cut calls per subscriber and cost of care operations by more than a third each, and reduced billing adjustments by three quarters. In 2010's third quarter, Sprint added 644,000 new customers, its best result in four years and a second straight quarter of growth.
And, King told me, Sprint has also improved its agent retention rate and reduced the performance gap between its agents so customers get a more consistent experience.