We started measuring customer satisfaction on emailed support tickets a little more than a year ago. As an enterprise software company, we consider customer support a large part of what we do day in and day out. We realize (and embrace) the impact it has on the perceived value of our products and services. Because of that, we wanted to measure customer satisfaction in near real time, with a system that worked through each customer’s preferred communication channel.
The system works like this: An email comes in from a client, at which point a support ticket is automatically generated and assigned to an appropriate resource. Once the ticket is resolved, the client is given the opportunity to rank our service on a scale of 1-6 (1 being terrible and 6 being outstanding). If they choose to rank us (here’s the template email they receive after a resolution), they are also given the opportunity to provide additional feedback via a simple web form.
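For readers who think in code, here’s a minimal sketch of that lifecycle. The class and field names are hypothetical (our real system lives inside our ticketing software, not in standalone code), but the flow is the same: a ticket is created from an inbound email, resolved, and then optionally ranked.

```python
from dataclasses import dataclass
from typing import Optional

RATING_MIN, RATING_MAX = 1, 6  # 1 = Terrible ... 6 = Outstanding

@dataclass
class SupportTicket:
    """One ticket, generated automatically from an inbound client email."""
    client_email: str
    assignee: str
    resolved: bool = False
    rating: Optional[int] = None    # set only if the client chooses to respond
    feedback: Optional[str] = None  # optional free text from the web form

    def resolve(self) -> None:
        # Marking a ticket resolved is what triggers the rating email;
        # the actual send is handled by the ticketing system, not shown here.
        self.resolved = True

    def record_rating(self, score: int, feedback: str = "") -> None:
        # Store the client's 1-6 ranking plus any optional web-form feedback.
        if not (RATING_MIN <= score <= RATING_MAX):
            raise ValueError(f"rating must be between {RATING_MIN} and {RATING_MAX}")
        self.rating = score
        self.feedback = feedback or None

# Example of the happy path: resolve, then record the client's response.
ticket = SupportTicket("client@example.com", assignee="support_engineer")
ticket.resolve()
ticket.record_rating(6, "Fast and thorough, thanks!")
```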
The case for implementing the new customer satisfaction system rested on two clear benefits:
We’d be able to capture a measurable level of customer satisfaction the instant we provided a resolution to a question or issue. This would allow us to be more aware of, and immediately address, any clients who weren’t completely satisfied. It would also give us a great way to improve future service with specific examples and measurements, and to praise staff members who were providing outstanding service.
We could eliminate some duplicate processes, namely relaying a resolution to the client (typically via email) and then filling out a time sheet with a similar explanation.
This would allow us to be more efficient with our time, thus providing even better service and perhaps a few more billable hours each year. More value for our clients, more revenue for our company. Win-win.
After more than a year of measuring our customer support, I gathered some results (a quick sketch of the arithmetic follows the list):
Approximately 15% of support tickets receive a ranking
The average ranking (on a scale of 1-6) is 5.493
62.33% of the submissions were ranked a 6 (Outstanding)
Four submissions received a score of 1 or 2 (Terrible or Pretty Bad)
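For the curious, figures like these boil down to a few lines of arithmetic. Here’s a minimal sketch; the rankings list is invented sample data for illustration, not our actual dataset.

```python
# Hypothetical sample data -- invented to show the math, not our real numbers.
rankings = [6, 5, 6, 4, 6, 6, 2, 5, 6, 1]  # one entry per ticket that got ranked
resolved_tickets = 67                       # all resolved tickets, ranked or not

response_rate = len(rankings) / resolved_tickets
average_ranking = sum(rankings) / len(rankings)
share_of_sixes = rankings.count(6) / len(rankings)
substandard = sum(1 for r in rankings if r <= 2)   # scores of 1 or 2

print(f"Response rate:   {response_rate:.0%}")     # our real figure: ~15%
print(f"Average ranking: {average_ranking:.3f}")   # our real figure: 5.493
print(f"Ranked 6:        {share_of_sixes:.2%}")    # our real figure: 62.33%
print(f"Ranked 1 or 2:   {substandard}")           # our real figure: four
```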
There’s plenty to be learned from this type of data, but one thing stood out to me:
Three of the four substandard rankings we received came from the same person.
That person submitted only three rankings all year, despite having 21 resolved support tickets (three of 21 is roughly 14%, in line with the overall 15% response rate). So what’s the obvious takeaway here?
You can’t please everyone.
And maybe that’s the truth about customer satisfaction. The question is what should be done about it. Do we focus on the complainers and try to turn them into evangelists? Some people think so, and I’m not sure they’re wrong (at times I’ve thought the same myself).
On the other hand, I’ve met people who don’t like Zappos. I know people who can’t stand Apple. I’ve talked to one or two who refuse to fly Southwest. So where does that leave us?
Maybe the answer is to spend more time learning from the customers that love you and less time trying to please the ones that don’t. What do you think?