Never ever mess with your NPS

It's now nearly fifteen years since NPS was first proposed as a measure of customer advocacy, yet one of the questions we are still asked most often is why surveys can't make it more obvious that only scores of 9 or 10 count towards promoters.

What characterises NPS (and CSAT and Effort) is the drive to reduce the totality of customer experience to a single number. There is clearly benefit in a ‘one-number’ solution (easier for business-wide communication, employee engagement, board reporting) but there is also inherent risk. No matter how comprehensive the science behind the number, it is still just a number. As such it can encourage practitioners to manipulate the survey process to satisfy the demands of directors who like to see trends moving in a positive direction.

Fred Reichheld, the brains behind NPS, held that the three groups (Detractors, Passives, Promoters) reflect natural clusters of respondents, and that only the most enthusiastic should be classed as promoters. The strength of advocacy implicit in selecting 9 or 10 is the key to the NPS calculation.
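As a reminder of how those three groups feed into the score, the standard calculation (promoter percentage minus detractor percentage) can be sketched as follows. This is a minimal illustration, not any brand's production code; the function name and sample scores are mine:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters score 9 or 10, Detractors score 0 to 6; Passives (7-8)
    count in the total but cancel out of the numerator.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical sample: 5 promoters, 3 passives, 2 detractors out of 10
print(nps([10, 10, 9, 9, 9, 8, 7, 7, 5, 3]))  # 30.0
```

Note that a respondent who gives an 8 contributes nothing to the score, which is exactly why nudging mild advocates up to 9 or 10 distorts the metric.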

We've been keeping a 'rogues gallery' of surveys that bend the rules when collecting NPS: situations where a brand tries to 'lead the witness' by designing the survey in a way that prompts the respondent towards a response that improves the score.

In the Costa example (above) we see the 11-point scale being effectively reduced to a 3-point scale. Here the colours become the primary visual stimulus; what the respondent sees first is the three options of red, yellow and green (which we all recognise as negative, neutral and positive). In effect the scale is overtly explaining the NPS calculation to the respondent, driving even mild advocates to score 9 or 10. This runs counter to the core principle of NPS, where only the most enthusiastic should score 9 or 10 and qualify as promoters.

Example number 2 is just as culpable. Here, Admiral have posed the NPS question and then given respondents the option of selecting 10 or opening up a drop-down menu. (Why the scale didn't start at 0, I wonder!) Both this and the Costa poster I would class as doubtful and unnecessary practice, just as dubious as those call agents who prompt callers to leave them a good score in the survey they are about to take.

The Net Promoter Score was given its 11-point scale for good reason, and I urge you to respect the principles behind it. Here are three tips for getting the best out of this valuable metric:

  • Stick with the 0 to 10 scale. It may seem odd to discount people who have given you 8 out of 10, but it works. It also makes comparisons with other brands more reliable, since they will be using the same scale. 
  • If NPS is your key metric, place the question first in the survey. This will give you a 'cleaner' result, as NPS can be influenced by the other questions you ask in the survey. 
  • In some sectors (e.g. Financial Services) some respondents end up as detractors simply because they lack the confidence or knowledge to recommend any financial brand. Do not try to exclude these from your score; watching a consistently reported score over time will highlight trends and issues.

If you have some examples of 'less than best practice' for NPS surveys I'd love to see them. Please get in touch.  
