Contextual conversations foster a two-way dialogue between the customer and the brand, as opposed to a survey, which is one-way. Conversations allow brands to:
- Listen to their customers, as well as probe and clarify to gain greater insights.
- Identify issues and gain the opportunity to resolve them in a timely manner.
- Accurately assess experience parameters and identify root causes.
Technology has enabled brands to go beyond simple surveys and engage respondents through structured conversations designed to yield insight into customer interactions and, in turn, the experience they generate.
While different implementations do have a different impact on the final score measured, so long as the measurement is standard and consistent across the time period under consideration, the impact of different scales can be ignored. One may argue that the colour coding of scales and use of emoticons may bias the customers into giving a higher score.
However, so long as the same scale is used to measure the NPS over time, any improvements or declines in NPS will be associated with changes in the business processes and will have nothing to do with the scale. As we’ve mentioned before, NPS is less focused on the number itself and more focused on the improvement in the number.
While designing surveys, it is important to keep in mind the trade-off between comprehensiveness and speed. Surveys that have a lot of open-ended questions may provide a lot of insight, but the drawback is that most customers would end up dropping off.
On the other hand, a survey that only has multiple choice/structured questions may see a higher completion rate, but the insights provided are limited. We typically recommend a 3-screen approach, where the primary question (NPS/CSAT) is asked first, followed by up to 4 parameter-wise ratings on a 1-5 scale, and finally an open-ended text box to collect further details on the experience.
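The 3-screen approach above can be expressed as a simple configuration. The structure and question wording below are hypothetical, purely to illustrate the flow: primary metric first, then up to 4 parameter ratings, then open text.

```python
# Hypothetical configuration for the recommended 3-screen survey flow.
survey = [
    {
        "screen": 1,
        "type": "nps",  # primary question (NPS/CSAT) comes first
        "question": "How likely are you to recommend us?",
        "scale": (1, 5),
    },
    {
        "screen": 2,
        "type": "ratings",  # up to 4 parameter-wise ratings on a 1-5 scale
        "parameters": ["Speed", "Staff courtesy", "Ease of use", "Value for money"],
        "scale": (1, 5),
    },
    {
        "screen": 3,
        "type": "open_text",  # open-ended box for further details
        "question": "Tell us more about your experience.",
    },
]
```

Keeping the open-ended question last means a customer who drops off mid-survey has still answered the structured questions, preserving the metric.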
When Fred Reichheld came up with the NPS methodology, mobile devices were not as widespread as they are today. The primary method of collecting responses was on pen and paper, or through customer interviews.
With the advent of mobile devices, it became much easier to reach a wider base of customers by moving to web-based or mobile-based surveys. However, an 11-point scale is difficult to represent on a phone screen, and customers are more familiar with a 1-5 scale. Hence, the 5-point NPS scale gained popularity.
The NPS on a 5-point scale is obtained by subtracting the % of detractors (people who rate 1, 2 or 3) from the % of promoters (people who rate 5); a rating of 4 is treated as passive.
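As a minimal sketch, the 5-point calculation described above can be written as:

```python
def nps_5pt(ratings):
    """Compute NPS from ratings on a 1-5 scale.

    Promoters rate 5, detractors rate 1-3, and 4 is treated as passive.
    Returns a score between -100 and +100.
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    n = len(ratings)
    promoters = sum(1 for r in ratings if r == 5)
    detractors = sum(1 for r in ratings if r <= 3)
    return 100 * (promoters - detractors) / n
```

For example, `nps_5pt([5, 5, 4, 3, 1])` has two promoters and two detractors out of five responses, giving a score of 0.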
The following guidelines would help in designing the right conversation:
- Clarity: The language and framing of the question should be clear and understandable to remove any ambiguity and prevent drop-offs.
- Reduce bias: The questions must not be leading/biased to prevent inaccurate measurement of metrics.
- Actionability: Measure only what you can influence, so that feedback can translate into action.
- Intelligent conversations: Use transaction data and questionnaire responses to direct the flow of questions and ensure only relevant responses are captured.
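The "intelligent conversations" guideline can be sketched as a branching rule: the next question depends on transaction data and on answers already given. The helper, field names, and question wording below are hypothetical, purely illustrative.

```python
def next_question(transaction, responses):
    """Pick the next question from transaction data and answers so far.

    Hypothetical helper illustrating the 'intelligent conversations'
    guideline: only relevant questions are shown to the customer.
    """
    rating = responses.get("nps")
    if rating is None:
        # Nothing answered yet: start with the primary question.
        return "How likely are you to recommend us?"
    if rating <= 3:
        # Detractor: probe for the root cause, scoped to the channel used.
        if transaction["channel"] == "delivery":
            return "What went wrong with your delivery?"
        return "What went wrong with your in-store visit?"
    # Promoter or passive: ask what stood out instead.
    return "What did you like most about your experience?"
```

A delivery customer who rates 2 is asked only about the delivery, never about the store, which keeps the conversation short and the responses relevant.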
Most customers are right-handed and respond primarily via mobile, so the placement of rating options matters. We would want it to be slightly more difficult to provide a negative rating than a positive one, as this reduces the chance of a false negative. False negatives are more dangerous than false positives, especially in scenarios where detractors are called to resolve issues.
CSAT is more suited for measuring initial reactions to the product/service, while NPS is more of a loyalty metric indicative of future behaviour.
However, it’s less about the scale and more about what you do with it, how you monetize the customers who love you and resolve the issues of the customers who don’t.
This is an example of priming the customer. Auto-selecting a score leads to an anchoring effect, where the final response tends to be closer to the anchor.
In this context, the average response given by customers would hover around 8. While this might decrease detractor percentages (A person who would have given you a 6, may end up giving you a 7 or 8), it would also decrease promoter % (A person who would have given you a 9, may feel that 8 is also an acceptable rating).
In general, we recommend only 1 reminder sent per transaction after 1 day, as too many reminders can be seen as spam. However, this truly depends on the nature of the interaction. If the interaction is low involvement for the customer, it might not be critical to provide even 1 reminder.
However, if the involvement is high for an interaction (for example a B2B scenario) it might be useful to add an additional reminder with a different template to drive responses.
A classic method of encouraging customers to give feedback is to offer an incentive: either a chance to win a big prize or a guaranteed small reward. However, this might bias the responses, so it should be used with caution.
For brands with a presence in rural and semi-rural geographies where the regional language is more prevalent, regional-language messaging may build a personal connection. Using transaction data in the messaging (customer name, agent name, type of request, etc.) also makes the invitation more personal.
Give a time guarantee on surveys (e.g. "This survey will take 30 seconds to complete").
Most importantly, respond to feedback in real-time and act on complaints. This will reduce the “survey cynicism” prevalent in customers and build loyalty towards the brand.