The Customer Service Survey

Over Scripting

by Peter Leppik on Wed, 2013-07-10 16:32

According to a report from Consumerist, some Best Buy customer service reps are not happy that they are being given new scripts and aren't being allowed to deviate.

Over-scripting is a problem in the customer service business, since there's a tendency for management to think they can anticipate every customer inquiry and write the perfect answer. That's not true, and in my observation most contact centers have been moving to give CSRs more flexibility, not less.

Over-scripting causes a lot of problems:

  • It makes it much harder to establish a rapport with the customer and understand their problems.
  • It's usually obvious that the CSR is reading from a script, and customers hate it.
  • CSRs don't like being micro-managed about every word which comes out of their mouths.
  • It tends to lead the company to focus more on compliance and less on service.

Over-scripting is a really easy trap to fall into in the survey business. When performing a structured interview it really is important to be completely consistent about every word in the question. Even small word choices can shade the meaning of a question and bias the results.

As a result, many companies which do phone interviews create extremely rigid scripts. I've seen instances where the interviewer is literally not allowed to deviate by even one word during the course of the entire survey, even if the participant has questions or problems (responses to the participants' questions are also scripted, of course).

That's wrong. The whole point of doing an interview (as opposed to an automated survey) is to establish a rapport with the customer, go in-depth, and leave the customer feeling that you actually listened. Over-scripting makes it impossible to do these things.

On the other hand, we really do need to ask consistent questions in order to get consistent and useful data. It's not helpful to go to the other extreme, like one phone survey I encountered that consisted of just a single open-ended question: "How was your experience?" That format might be useful for gathering some high-level qualitative feedback for training, but it's not going to yield meaningful statistics for decision-making or tracking.

The key is to strike a balance between keeping the survey consistent and letting the interviewer be human. We clearly define which parts of the script need to be read exactly as written, and where an interviewer can go off-script as needed. We recognize that some questions don't need to be strictly scripted (for example, when we want to know the reason behind an answer), and we allow the interviewer to acknowledge that the survey questions are scripted (which the customer knows anyway).

This allows us to have a more meaningful dialogue with customers and leave them feeling that we really listened to them, while still getting consistent metrics for tracking and analysis by our clients.
