Nonresponse Bias: What It Is and How to Reduce It


We talk about nonresponse bias when your survey results are skewed because not all users respond.

Read on as this article explores:

  • The concept of nonresponse bias
  • Why it’s bad for your product
  • Its most common causes
  • Proven ways to reduce nonresponse bias

Let’s get right to it.

TL;DR

  • Nonresponse bias occurs when a part of the survey audience doesn’t participate, which leads to distorted results.
  • We talk about response bias when users provide inaccurate or false responses.
  • Unit nonresponse happens when specific users or complete user groups don’t respond, while item nonresponse occurs when survey participants skip certain questions.
  • An example of nonresponse bias is an in-app survey visible only for a short time so that only highly active users can access it, resulting in a biased dataset.
  • Common causes of nonresponse bias include poor survey design, incorrect audience targeting, unwillingness to participate, failed survey delivery, and unintentional nonresponse.
  • Potential respondents may ignore survey requests if the survey UI is poor or the questions are difficult to understand.
  • The same will happen if the questions are irrelevant to their experience.
  • Inappropriate or culturally insensitive questions as well as data security concerns could make users reluctant to take part.
  • Unintentional nonresponse occurs when users forget to submit the survey or encounter bugs.
  • Keeping surveys short and focused increases the response rate.
  • Adding progress bars sets clear expectations and motivates users to finish the survey.
  • Using segmentation and triggering surveys contextually helps keep the surveys relevant.
  • To avoid confusion, break down double-barreled questions and use them in separate surveys.
  • Translating the surveys, optimizing them for mobile users, and making them compatible with assistive technologies enhances their inclusiveness and accessibility.
  • To motivate users to take part in your surveys, offer incentives such as free access to your product or features.
  • To improve your survey response and completion rates, analyze user behavior and experiment with the best timing to launch them.
  • Userpilot is a product adoption platform with advanced survey features. Book a demo to see how you can design, customize, translate, and trigger surveys.

What is meant by nonresponse bias?

Nonresponse bias occurs when survey results are distorted because a portion of your target audience didn’t participate in the survey.

In other words, if a big part of your user population doesn’t respond to your surveys, the results won’t be valid, especially when their opinions and preferences differ significantly from those of the users who do respond.

However, to count as nonresponse bias, there must be a systematic issue that prevents a whole group of users from participating, such as the choice of survey method.

The kind of nonresponse where a portion of users don’t respond to the survey at all is called unit nonresponse. A unit is simply an individual user or a specific group of users.

Item nonresponse, on the other hand, is when users don’t respond to some of the questions in the survey.

What is the difference between response bias and nonresponse bias?

Response and nonresponse bias are both kinds of biases that can result in unreliable survey results.

Response bias happens when your users provide inaccurate responses to the survey questions, either intentionally or by accident. For example, they may not understand the question wording or have a skewed view of their preferences or behaviors.

Nonresponse bias, on the other hand, occurs when lots of users don’t respond to the survey and their views are dramatically different from those who respond. This results in an incomplete picture of the situation.

Why is high nonresponse bias unfavorable?

A high nonresponse bias is a serious concern because it negatively affects the accuracy and reliability of the survey results.

When there is a large gap between survey respondents and nonrespondents, the survey results won’t reflect the true user sentiment, needs, preferences, or pain points.

This will have a negative impact on the product decisions you make. Since you don’t know about the needs of a significant group of your users, you will simply not be able to address them.

Common reasons why nonresponse bias occurs

What are the reasons for nonresponse bias? Let’s check out some of the most common culprits.

Poor survey design

A survey with unclear, irrelevant, or excessively long questions can discourage respondents from completing it.

The survey UX matters as well. If a survey has poor usability or isn’t aesthetically pleasing, it may put off potential respondents.

Incorrect survey audience

Targeting the wrong audience is another reason for nonresponse bias.

Potential respondents may ignore the survey if they aren’t interested in its scope or feel they don’t have enough knowledge about the subject matter.

Unwillingness to respond to a survey

Nonresponse bias often occurs because of the unwillingness of your users to participate in a survey.

This could happen if the questions are inappropriate or culturally insensitive, or if users have concerns about the security of their data.

The actual outcome variables also have an impact on response rates. For example, if you’re running a survey on exercise habits, users who don’t exercise regularly may ignore it.

Failed survey delivery

Nonresponse bias can also result from failed survey delivery due to technical issues or human error.

For example, if you’re collecting survey data through emails, they may end up in recipients’ spam folders or be blocked by email filters.

Unintentional survey nonresponse

Finally, nonresponse bias can occur when your users unintentionally fail to complete a survey.

This can happen if they forget to submit the completed survey, lose the survey link, or experience bugs while attempting to complete it.

What is an example of nonresponse bias?

Let’s imagine you’re running an in-app survey to collect customer feedback on your latest feature release.

You design the survey, trigger it for all your users, and keep it active for one day.

The catch is that only your most active users open the app daily.

Most of your user population doesn’t, so they won’t have a chance to see the survey. Such a limited window could also exclude users from different time zones.

Consequently, the data collected through this survey will be skewed. That’s because it will come from your power users, who are a biased sample.
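To see the skew in numbers, here’s a minimal sketch in TypeScript with made-up figures: power users are assumed to be happier with the product, so a survey that only they can see overstates satisfaction for the whole user base.

```typescript
// Hypothetical user base: 200 power users (happier) and 800 casual users.
type User = { isPowerUser: boolean; satisfaction: number }; // satisfaction on a 1-5 scale

const users: User[] = [
  ...Array.from({ length: 200 }, () => ({ isPowerUser: true, satisfaction: 4.5 })),
  ...Array.from({ length: 800 }, () => ({ isPowerUser: false, satisfaction: 3.0 })),
];

const mean = (xs: number[]) => xs.reduce((sum, x) => sum + x, 0) / xs.length;

const trueMean = mean(users.map((u) => u.satisfaction)); // 3.3 across everyone
const observedMean = mean(
  users.filter((u) => u.isPowerUser).map((u) => u.satisfaction)
); // 4.5 from the one-day window that only power users saw

console.log({ trueMean, observedMean }); // the survey overstates satisfaction by 1.2 points
```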

Best practices for reducing nonresponse bias and increasing response rates

How do you increase your survey response rates and reduce nonresponse bias? Let’s look at a few tested strategies that may help.

Keep the entire survey short and focused

When designing your survey, start with a clear goal. Decide what kind of data you are after. For example, do you want to measure satisfaction with the product overall or just with a particular feature? This will allow you to keep your survey focused.

Next, keep the survey short and relevant. Two to three questions, including a single open-ended one, seem to be the sweet spot. The longer the survey, the fewer users will get to the end.

Finally, make some of the questions optional.

Open-ended questions require more time and concentration to answer. If a user isn’t in the zone to engage with them at that moment and all the questions are compulsory, they won’t submit the survey at all.

Make some questions optional.
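As a sketch of what this looks like in practice (a hypothetical survey schema, not any specific tool’s format), you can mark only the quick rating as required and let users skip the open-ended part:

```typescript
interface Question {
  id: string;
  text: string;
  type: 'rating' | 'open-ended';
  required: boolean; // optional questions keep partial responses from being lost
}

const featureSurvey: Question[] = [
  { id: 'q1', text: 'How easy was the new feature to use?', type: 'rating', required: true },
  { id: 'q2', text: 'What would you improve about it?', type: 'open-ended', required: false },
];
```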

Add a progress bar in surveys to boost completion rates

Adding a progress bar increases survey completion rates.

Progress bars set clear expectations about how long the survey will take and motivate users to carry on once they’ve started.

Include a progress bar to reduce nonresponse bias.

Use segmentation to reach the right survey audience

Survey relevance is the key to high response rates.

To make sure your survey is relevant, target the right user segments. For example, if you’re asking for feedback about the new feature you’ve just released, ask only the users who have actually used it.

How do you segment users and identify target audiences?

You can use their responses to welcome surveys, product usage data, or demographic characteristics.

User segmentation for surveys in Userpilot.
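In code terms, segmentation boils down to filtering your user base on the attributes you track. A rough sketch, assuming hypothetical per-user fields for plan and product usage:

```typescript
interface AppUser {
  id: string;
  plan: 'free' | 'pro';
  featuresUsed: string[]; // feature keys from product usage data
  signupDate: Date;
}

// Target only users who have actually tried the new feature,
// so the survey is relevant to everyone who receives it.
function segmentForFeatureSurvey(users: AppUser[], featureKey: string): AppUser[] {
  return users.filter((u) => u.featuresUsed.includes(featureKey));
}

// Usage: narrow further, e.g. to pro users who tried the new dashboard.
// const audience = segmentForFeatureSurvey(allUsers, 'new-dashboard')
//   .filter((u) => u.plan === 'pro');
```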

Trigger surveys contextually to achieve high response rates

Another way to boost response rates, and thus reduce nonresponse bias, is to trigger your surveys contextually.

So if you’re evaluating a new feature, trigger the survey right after users have engaged with it. This way, you increase the chance that they respond. What’s more, their responses are likely to be more accurate because the experience is still fresh in their minds.

To be able to trigger surveys contextually, you need a tool that displays product usage data in real time, like Userpilot.

Trigger surveys contextually to reduce nonresponse bias.
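Under the hood, contextual triggering is event-driven: listen for the relevant usage event and show the survey right after it fires. A minimal sketch with a hypothetical event shape and a placeholder showInAppSurvey function (not Userpilot’s actual API):

```typescript
// Hypothetical event bus emitting product usage events in real time.
type UsageEvent = { userId: string; event: string; timestamp: number };

const surveyed = new Set<string>(); // don't show the same survey twice

function onUsageEvent(e: UsageEvent) {
  // Trigger the feedback survey right after the user engages with the
  // feature, while the experience is still fresh in their mind.
  if (e.event === 'new_feature_used' && !surveyed.has(e.userId)) {
    surveyed.add(e.userId);
    showInAppSurvey(e.userId, 'new-feature-feedback');
  }
}

// Placeholder for whatever display call your survey tool exposes.
declare function showInAppSurvey(userId: string, surveyId: string): void;
```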

Avoid asking double-barreled questions to reduce inaccurate or false answers

Double-barreled questions are difficult to interpret, both for the respondents and for those who analyze the results.

Let’s look at this question:

‘How easy is it to use the product, and would you recommend it to your friends?’

It asks the user for insights on two unrelated aspects. The product may be super easy to use but still not worth recommending.

If so, which answer should they choose? 1? 5? Most likely, none at all.

To avoid confusion, break the questions down and ask them in separate surveys.

Avoid double-barreled questions to reduce nonresponse bias.

Analyze where users drop off from the survey and improve accordingly

Product data analysis and tools like heatmaps or session recordings will help you identify the parts of the survey that make users give up.

With such insights, you may be able to identify UI issues that cause friction or a confusing survey question that needs tweaking.

Survey analytics in Userpilot.
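If your survey tool lets you export raw responses, you can compute the per-question drop-off yourself. A sketch, assuming a hypothetical data shape where each response records which questions the user answered:

```typescript
type SurveyResponse = { userId: string; answeredQuestionIds: string[] };

// For each question, count how many respondents answered it.
// A sharp drop between two questions points to the one causing friction.
function dropOffReport(
  questionIds: string[],
  responses: SurveyResponse[]
): Record<string, number> {
  const report: Record<string, number> = {};
  for (const qid of questionIds) {
    report[qid] = responses.filter((r) => r.answeredQuestionIds.includes(qid)).length;
  }
  return report;
}

// Example output: { q1: 500, q2: 480, q3: 190 } -> investigate q3's wording or UI.
```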

Translate surveys to avoid exclusion

To avoid systematic bias in your surveys and boost response rates, make sure not to exclude any user groups.

For example, if you’re targeting users who are not native speakers of your language, make sure they can actually understand the questions. To avoid misunderstandings, use simple language and avoid jargon.

Better yet, translate the survey into their native tongue. With machine translation tools like Google Translate, you don’t need to hire professional translators.

And some tools, like Userpilot, allow you to translate the survey questions inside the app. It takes literally a few clicks to enable the localization feature, add a locale, and auto-translate your questions.

Translate your survey questions in Userpilot.
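If you manage translations outside such a tool, the core pattern is a simple locale lookup with a fallback to your default language, so no user ever sees an untranslated gap. A minimal sketch:

```typescript
type Locale = 'en' | 'de' | 'ja';

const questionText: Record<string, Partial<Record<Locale, string>>> = {
  q1: {
    en: 'How easy was the new feature to use?',
    de: 'Wie einfach war die neue Funktion zu bedienen?',
  },
};

// Fall back to English when a locale is missing a translation.
function localized(questionId: string, locale: Locale): string {
  return questionText[questionId][locale] ?? questionText[questionId].en!;
}

console.log(localized('q1', 'ja')); // falls back to the English question
```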

Make the survey design accessible

Poor survey accessibility could be another way to unintentionally exclude certain user segments.

For example, surveys that rely on visual content may exclude users with impaired vision.

To avoid this, make sure surveys are compatible with assistive technologies, like immersive readers. What’s more, consider adapting the survey color patterns to make them more accessible, just like Asana did.

Asana changed its color patterns to increase accessibility.

Moreover, make sure your surveys are optimized for mobile devices and don’t require fast internet connections, so that users who rely on mobile phones can access them easily.
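On the web, much of this comes down to semantic markup that assistive technologies can interpret. A small sketch of an accessible rating question in TSX (the component is hypothetical):

```tsx
import * as React from 'react';

// Each option is a native radio input with a visible label, so screen
// readers announce both the question (via <legend>) and each choice.
function RatingQuestion({ question }: { question: string }) {
  return (
    <fieldset>
      <legend>{question}</legend>
      {[1, 2, 3, 4, 5].map((value) => (
        <label key={value}>
          <input type="radio" name="rating" value={value} />
          {value}
        </label>
      ))}
    </fieldset>
  );
}
```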

Give potential respondents incentives to complete surveys

No matter how loyal and active your users are, sometimes they just need a little bit of extra motivation to complete the surveys.

Choosing the right incentives may be tricky. They need to be affordable and attractive at the same time. If an incentive isn’t tempting enough, it won’t have the desired effect, and users may even feel insulted.

For a short survey, a raffle ticket that gives users a chance to win free credit will work nicely. For longer surveys, which require a lot of users’ time, a voucher or free access to a premium plan will be more appropriate.

An incentive to take part in the survey.

Test the optimal time for sharing surveys

Your survey response rates may vary depending on the day of the week or the time of the day.

How do you choose the best time to trigger your surveys?

A/B testing is a possible solution.

For example, you could trigger the same survey for half of the target audience on Friday and half on Monday to decide which day is better. Once you have the day, you could run the test in the morning and in the late afternoon. Finally, you could test specific hours.

The downside of split testing is that it’s very time-consuming. Even if you run multivariate tests, it will take a few rounds to zero in on the best time. That’s why it’s best used at the final stage, to validate your ideas.

To preselect the times you want to experiment with, leverage historical customer data and see when users were most likely to respond.
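To split the audience deterministically, a common trick is to hash each user ID into a bucket, so every user consistently lands in the same timing variant. A sketch (the variants are just examples):

```typescript
// Deterministic 50/50 split: hash the user ID into one of two timing variants.
function timingVariant(userId: string): 'friday' | 'monday' {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  }
  return hash % 2 === 0 ? 'friday' : 'monday';
}

// Usage: schedule the same survey on a different day per variant,
// then compare response rates between the two groups.
// const day = timingVariant(user.id); // 'friday' or 'monday'
```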

Conclusion

Nonresponse bias occurs when a group of users isn’t able to complete a survey, and as a result, the findings are skewed.

To avoid this problem, invest in inclusive survey design, keep your surveys short and to the point, and consider rewarding users for their participation. Track completion rates and product analytics to improve the survey UI, and experiment to find the best timing.

If you want to see how to design and contextually trigger targeted surveys with Userpilot, book a demo!
