Any CIO faced with a meaningless key performance indicator (KPI) scorecard and performance dashboard knows that feeling about statistics: They can paint a rosy glow on your IT team’s performance, while anecdotal evidence tells a different story.
It’s a discussion I had recently with our senior news writer, Linda Tucci, about outsourcing KPIs. My argument is that if you allow your consultants or your outsourcing team to designate the metrics and the KPI scorecard — essentially to grade themselves — the metrics themselves fall into question. In theory (and I know of at least one situation where this actually happened), they could lie outright about their own performance, especially if it’s tied to their own revenue stream.
The problem with metrics, KPIs, dashboards and every other self-performance measurement that we try to put into place is this: At best, you get exactly what you’re measuring; at worst, someone games the system but you take the numbers at face value.
A great example of a bad series of metrics comes from my tenure managing a newly outsourced help desk. One of the metrics was the number of completed issues (aka closed tickets). After three months, the contractor numbers were in the green, with greater than 99% of all tickets closed. The onshore help desk had never managed even to graze 97%, so senior leaders were ecstatic! Unfortunately, the user satisfaction scores were in the toilet. What the KPI dashboard wasn’t showing was that the number of user problem tickets had gone through the roof. Further root cause analysis revealed that when users called in, the agents closed tickets as soon as the call was completed, rather than keeping the ticket open to make sure that the actual problem was solved. When the user called back, they generated another ticket and another “solution” as soon as the agents got the user off the phone. Lather, rinse and repeat, with one user problem generating as many as 10 tickets in less than a week’s time.
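The gap between the dashboard number and reality can be sketched with a few lines of code. The ticket log below is entirely hypothetical — the field names and values are made up for illustration — but it shows how a closure-rate KPI stays green while one user’s unresolved problem quietly generates ticket after ticket:

```python
from collections import defaultdict

# Hypothetical ticket log: (ticket_id, user, closed, problem_resolved)
tickets = [
    ("T1", "alice", True, False),   # closed on hang-up, problem persists
    ("T2", "alice", True, False),   # same user calls back -> new ticket
    ("T3", "alice", True, True),    # third attempt finally resolves it
    ("T4", "bob",   True, True),
    ("T5", "carol", False, False),  # still open
]

# The KPI on the dashboard: percentage of tickets closed.
closure_rate = sum(t[2] for t in tickets) / len(tickets)

# The invisible KPI: whether the underlying problem was solved,
# and how many tickets each user burned getting there.
tickets_per_user = defaultdict(int)
resolved_users = set()
for _, user, _, resolved in tickets:
    tickets_per_user[user] += 1
    if resolved:
        resolved_users.add(user)

print(f"closure rate: {closure_rate:.0%}")
print(f"alice's tickets for one problem: {tickets_per_user['alice']}")
```

Measured this way, the scorecard reports an 80% closure rate — green — while Alice needed three tickets for a single problem. Scale the repeat-call pattern up and the closure rate climbs even as satisfaction falls, because every callback adds another easily closed ticket to the denominator.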
It was our own fault. We weren’t measuring the actual solution and the users’ satisfaction. Aside from the obvious fact that a completed issue is a meaningless metric in the first place (all issues are not equal), the internal help desk staff members hadn’t needed an artificial construct to encourage them to satisfy the users — the members of the small, four-person team had known that if they didn’t solve the problem on the first pass, when the user called back, the help desk would pass the user through to the original agent. They worked with the product development team to deflect potential user problems proactively, and trained users as much as they helped them with problems. Why? Because we staffed four agents regardless of call volume — that bit of extra work made the agents’ lives easier in the long run. However, with the new outsourcing model, the contracted agents were staffed for call volume. Seemed like a good idea at the time, but why solve a problem if it means that your own hours are going to get cut next week?
We didn’t measure the user satisfaction KPI because it had been an invisible KPI all along. We changed the variables (the help desk agent structure) and were surprised when the same metrics no longer yielded similar results. Shame on us.
We are predicting (along with everyone else) that 2012 will be the Year of Big Data, but the devil is in the details. For some CIOs, the hardest thing they ever tackle will be their very own subset of “small” data on their very own KPI scorecard. May it be more valuable than Twain’s bemoaned statistics.