NCSS Results: T-Mobile Keeps Getting Better, Comcast Still Trails
In This Issue
In March we published the 2015 results for the National Customer Service Survey (NCSS) of Communications companies. This ongoing survey tracks customer service quality at AT&T, CenturyLink, Comcast, DirecTV, Dish Network, Sprint, T-Mobile, Time Warner, and Verizon.
In this year's data we find that T-Mobile has extended its record of improving customer service, a trend going back to 2012. The company made substantial gains in many of our survey metrics and now leads the pack in eight of the nine scores we track.
Back in 2012, T-Mobile was performing poorly on our survey. The company had just come out of a failed attempt to merge with AT&T (abandoned in December 2011), and its scores had been sinking. Since then, however, T-Mobile has rebranded itself as the "un-carrier" and made deliberate efforts to be more consumer-friendly. Those efforts have been successful: T-Mobile's moves to abandon such hated industry practices as two-year contracts and overage charges have since become the norm in the mobile phone industry.
Our data shows that T-Mobile's efforts have extended to customer service as well, with sustained multi-year improvements across the board in our metrics. While we have no insight into T-Mobile's internal operations, the data suggests the company has been making a significant and sustained effort to improve its customer service operations.
Comcast, meanwhile, has posted small gains over the past two years (CenturyLink, Comcast, DirecTV, Dish, and Time Warner were added to our survey in 2013), but not enough to pull it out of last place in our rankings. In 2015 Comcast held the bottom slot in six of our nine metrics: an improvement over 2013, when it was last in eight of nine, but still not a stellar performance.
The complete survey data is available to NCSS subscribers, and the Executive Summary can be emailed to you through our website.
Goodhart's Law
In the field of macroeconomics, Goodhart's Law states that "when a measure becomes a target, it ceases to be a good measure."
As an economic theory this is the rough equivalent of Murphy's Law, though with a kernel of deep truth at its core. Macroeconomic measurements distill an enormously complex system down into a handful of simple numbers that take considerable effort to measure. For example, in the mid-20th century the United States had a problem with inflation. Low inflation is desirable because it tends to correlate with economic stability and predictability, and it encourages the middle class to save and invest for the future. But when policymakers initially tried to slow inflation through wage and price controls rather than by addressing underlying problems in the economy, the result was an unbalanced economy and (eventually) the stagflation of the 1970s. This is, of course, a grossly oversimplified summary of 20th-century economic history, but the point stands: by forcing inflation to hit a target, policymakers made the inflation rate stop being a good proxy for economic stability.
Goodhart's Law in Customer Experience
Goodhart's Law applies in the world of Customer Experience, too.
Most of the core metrics in any CX effort (for example, survey scores like Net Promoter or Customer Satisfaction, or internal metrics like Delivery Time) are used because they correlate strongly with customers' future behavior, positive word-of-mouth, and the long-term growth of the company.
But if you try to turn a CX metric into a target, it may no longer be useful as a measure of the customer experience. That's because the things you really want to change (customers' future purchases, positive word-of-mouth, long-term growth, etc.) are the result of many complex interactions inside the company and between the company and its customers. And it's often easier to hit a goal by gaming the system than it is to fix the underlying problems.
For example, in the case of ABRA Auto Body I blogged about in January, the company almost certainly did not set out to create a survey that would yield inflated, meaningless scores. Instead, it most likely found that high survey scores correlate strongly with repeat business and with new customers gained through recommendations.
But rather than explore the root causes of high (or low) customer satisfaction and address those, the company probably decided to simply give managers an incentive to hit a certain survey score and let them figure out how to do it.
The result is that it's much easier for a manager to print off a bunch of fliers instructing customers how to answer the survey than to think about how the customer journey might be improved. (It's possible that ABRA doesn't even give managers the authority or budget to change the things that might matter, in which case a manager may have no choice but to try to game the survey.)
The lesson should be obvious: If you want your CX metrics to be useful measurements of your customer experience, then you need to be very wary of how incentives invite manipulation.