Different Types of Bias in Surveys and How to Avoid Them
What is survey bias?
Survey bias occurs when collected feedback is skewed by influences from the surveyor, the respondents, or both.
This skew keeps you from collecting accurate data and can produce insights that don’t reflect actual user sentiment.
How can survey bias hurt your survey research?
Whenever you gather data with minor, moderate, or extreme bias, you lose valuable information that would otherwise have served your survey goals.
Here are a few issues that could result from survey bias:
- Inaccurate answers. Asking a leading question in your surveys may lead to inaccurate survey answers even if the respondents think they’re providing truthful responses (which in turn gives you a false perspective of how users feel).
- Repetitive research. Whenever survey respondents fail to answer honestly, the most likely outcome is that you’ll need to repeat the online survey in order to collect more accurate feedback the next time around — costing you more time, money, and team resources.
- Ineffective strategies. Survey responses can often inform the decisions of product developers and marketers, and responses that don’t accurately reflect customer sentiment could lead to wrong decisions.
Different types of survey bias
Survey responses may become biased due to various reasons. Familiarizing yourself with the different types of survey bias will help you be more effective when trying to avoid such biases.
Here are nine types of survey bias to be on the lookout for:
Non-response bias
Non-response bias — also known as participation bias — occurs when survey participants are unwilling or unable to respond to a survey question or an entire survey.
Having too many questions in your survey can also lead to non-response bias, as longer surveys tend to get lower response rates. Poor targeting (sending the survey to users it isn’t relevant to) can have the same effect.
How to fix non-response bias
To avoid non-response bias, consider the following strategies (a quick composition check for the bias is sketched after the list):
- Pre-survey communication: Clearly communicate the purpose of the survey, reassure participants about data confidentiality, and highlight the significance of their input to encourage participation.
- Survey design: Design a concise and easy-to-understand survey. Use simple language, and conduct pilot testing to identify any potential issues or confusion.
- Offer incentives: Provide small rewards or incentives (like gift cards) to motivate participants to take part in the survey. Ensure that the incentives are suitable for the target audience.
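If you want to check whether non-response bias is already creeping into your results, compare who answered against who was invited. Below is a minimal Python sketch of that composition check; the segment labels and lists are hypothetical.

```python
# A minimal composition check for non-response bias: compare the segment
# makeup of respondents against everyone who was invited. The segment
# labels and lists below are hypothetical.
from collections import Counter

invited = ["power_user", "power_user", "casual", "casual", "casual",
           "trial", "trial", "trial"]
responded = ["power_user", "power_user", "casual"]

def shares(labels):
    """Return each segment's share of the given list."""
    counts = Counter(labels)
    return {segment: count / len(labels) for segment, count in counts.items()}

invited_shares = shares(invited)
responded_shares = shares(responded)

for segment, invited_share in invited_shares.items():
    responded_share = responded_shares.get(segment, 0.0)
    gap = responded_share - invited_share
    print(f"{segment}: invited {invited_share:.0%}, "
          f"responded {responded_share:.0%}, gap {gap:+.0%}")
```

If a segment’s share among respondents is far below its share among invitees, that group’s voice is underrepresented, and follow-ups or incentives may be worth targeting at them.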
Sampling bias
Sampling bias occurs when surveys are deployed in such a way that some members of the user base are systematically more or less likely to be selected in a sample than others.
Here’s an example:
“How do you feel about our product’s pricing?”
Pricing-related questions are a common inclusion in user feedback surveys, but they’re prone to sampling bias.
Inactive users who churned due to their dissatisfaction with product pricing aren’t going to be represented in the survey sample, while power users who get the most value are highly likely to respond.
Customers who are unhappy with product pricing are also more likely to feel strongly enough about the question to submit a response. In contrast, those who are moderately satisfied with the cost of their subscription will feel less compelled to share their opinion on pricing.
How to fix sampling bias
Here’s one way to fix the question:
“How satisfied are you with our product’s pricing, on a scale of 1-10?”
By rephrasing the question as a neutral rating, we increase the likelihood of receiving responses from satisfied customers who can balance out the loud minority of unsatisfied customers.
Other ways to avoid sampling bias include (one approach is sketched after this list):
- Segmenting customers based on welcome survey results.
- Segmenting survey recipients based on their goal, JTBD, or the features they’ve engaged with.
- Offering multiple-choice answers for users who want to choose neutral responses.
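One practical way to act on the first two points is stratified sampling: draw the same fraction of recipients from each segment so no single group dominates the sample. Here’s a minimal Python sketch, with hypothetical segment names and sizes.

```python
# A minimal sketch of stratified sampling, assuming users are already tagged
# with a segment (e.g. from a welcome survey). Drawing the same fraction from
# every segment keeps any one group from dominating the sample.
import random

users_by_segment = {
    "power_users": [f"pw_{i}" for i in range(200)],
    "casual_users": [f"cu_{i}" for i in range(500)],
    "trial_users": [f"tr_{i}" for i in range(300)],
}

SAMPLE_FRACTION = 0.10  # survey 10% of each segment

sample = []
for segment, users in users_by_segment.items():
    k = max(1, round(len(users) * SAMPLE_FRACTION))
    sample.extend(random.sample(users, k))  # draw without replacement

print(f"Sampled {len(sample)} users across {len(users_by_segment)} segments")
```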
Confirmation bias or selection bias in surveys
Confirmation bias often starts with biased question phrasing that influences a respondent’s answer toward the surveyor’s existing belief.
Here’s an example:
“Is feature A better than feature B?”
This increases the odds that the respondent will select feature A (and confirm what the question subtly implied).
How to fix confirmation bias in surveys
Here’s one way to fix the question:
“Which feature do you prefer: feature A or feature B?”
By presenting both options as equals and rephrasing the question toward preference rather than objective superiority, the surveyor is more likely to get unbiased feedback on which feature users actually find more valuable.
Social desirability bias
Social desirability bias occurs when survey takers answer questions in a way that they (consciously or subconsciously) believe makes them look better to others — prioritizing societal approval over honest opinions and invalidating the survey altogether.
It’s similar to conformity bias except, in this case, respondents are basing their responses on what they believe others would approve of rather than existing answers that they’ve observed first-hand (such as in Solomon Asch’s conformity experiments).
Here’s an example:
“Do you think we should reduce the monthly subscription cost of our product?”
Because respondents believe that the majority of their fellow users would benefit from a discount and therefore support price reduction, they are more likely to answer yes.
How to fix social desirability bias in surveys
Here’s one way to fix the question:
“On a scale of 1-10, how satisfied are you with the cost of your current subscription?”
Asking for answers in the form of a 1-10 rating can help you avoid bias since it’s less binary than asking if you should lower prices. If quantitative responses are too vague for your pricing decisions, then you could always send a follow-up survey to learn more about why respondents chose a certain rating.
By narrowing the scope of the question to the user’s current subscription rather than the broader product pricing for all customers, you also remove the influence that peer pressure and social approval have on the idea of discounted prices.
Extreme response bias
Extreme response bias is the tendency for respondents to answer on the extreme end of a spectrum despite not holding an extreme view. For instance, an in-app customer satisfaction survey might use a five-star rating system to see how happy users are with the product.
Here’s an example:
“How much do you like our product on a scale of one to five stars?”
Because respondents are more likely to answer at either end of the spectrum than to select a middle option, users who really feel your product deserves three or four stars may still give you the full five.
Conversely, those who feel your product is a two will likely give it just one star. This happens because respondents tend to answer as though they love or hate a product, even when that extremity doesn’t reflect their true stance.
How to fix extreme response bias
Here’s one way to fix the question:
“How do you feel about our product, on a scale of one to five stars?”
Asking users how they feel about the product instead of how much they like it makes you more likely to get honest feedback from respondents who fall in the middle of the road.
Other ways to avoid extreme response bias include (a quick detection check is sketched after this list):
- Avoid polarizing language in survey questions, such as like, dislike, love, or hate.
- Ask users to explain their ratings so you can cross-reference ratings against their reasoning.
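To spot extreme response bias in data you’ve already collected, look at how heavily answers cluster at the ends of the scale. A minimal Python sketch, using made-up ratings and an arbitrary threshold:

```python
# A quick check for extreme response bias on a 1-5 star question: measure how
# heavily answers cluster at the scale's endpoints. The ratings and the 80%
# threshold are made up for illustration.
ratings = [5, 5, 1, 5, 1, 5, 5, 1, 5, 1, 3, 5, 1, 5, 5]

endpoint_share = sum(1 for r in ratings if r in (1, 5)) / len(ratings)
print(f"Share of answers at the scale endpoints: {endpoint_share:.0%}")

if endpoint_share > 0.8:  # threshold is a judgment call, not a standard
    print("Responses cluster at the extremes; consider rewording the question.")
```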
Neutral response bias
Neutral response bias is the opposite of extreme response bias. The easiest way to spot it is when responses land consistently in the middle of a range or Likert scale on every question.
This can happen for a few reasons: users may be unclear on what you’re asking, on the fence due to vague phrasing, or bored with the survey and answering neutrally on everything just to finish sooner.
Here’s an example:
“We have recently overhauled the user interface (UI) on the product to streamline navigation and reduce clutter for our end-users, do you think the new interface is better?”
In addition to the question being too long, it has a vague conclusion since users may not be sure what criteria would constitute a “better” UI.
How to fix neutral response bias in surveys
Here’s one way to fix the question:
“Do you prefer the new user interface over the old one?”
By shortening the question and clarifying the criteria as user preference, respondents are less likely to give you “false neutrals” as a result of boredom or confusion.
Other ways to avoid neutral response bias include (a straight-lining check is sketched after this list):
- Use clear phrasing so users understand each question and can provide more accurate answers.
- Consider binary yes-no questions as a last resort against neutral response bias.
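A complementary check is to flag respondents who straight-line the midpoint on every question. A minimal Python sketch with made-up data:

```python
# A quick straight-lining check for neutral response bias: flag respondents
# who pick the midpoint of a 5-point scale on every question, which often
# signals boredom or confusion rather than genuine neutrality. Data is made up.
responses = {
    "user_a": [3, 3, 3, 3, 3, 3],  # straight-lined midpoints: suspicious
    "user_b": [4, 3, 5, 2, 3, 4],  # varied answers: likely genuine
}

MIDPOINT = 3  # midpoint of a 1-5 scale

for user, answers in responses.items():
    if all(answer == MIDPOINT for answer in answers):
        print(f"{user}: answered neutral on every question (possible false neutrals)")
```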
Question order bias
Question order bias is the phenomenon in which feedback can be affected by the order in which survey questions are organized. For instance, users may be more likely to select options higher on the online survey list.
The first few answer options that respondents see can also impact their responses to subsequent questions. For instance, if you ask a user how highly they value analytics and then ask them which features they look for in a product, they’ll most likely pick analytics first.
How to fix question order bias in surveys
There are two approaches to avoiding order bias in surveys:
- Manual sorting. Carefully sorting questions in such a way that their order won’t affect the integrity of subsequent answers is one approach.
- Randomized shuffling. If you’re concerned about confirmation bias compromising the manual sorting process, you could use survey tools that display questions in a randomized order for each respondent, as sketched below.
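As a sketch of the randomized-shuffling approach, here’s how you might shuffle question order per respondent in Python. Seeding the shuffle with the respondent’s ID (an implementation choice, not a requirement) keeps each user’s order stable if they resume the survey.

```python
# A minimal sketch of randomized question ordering. Seeding the shuffle with
# the respondent's ID keeps each user's order stable across sessions while
# still varying the order across the whole sample. Questions are examples.
import random

questions = [
    "How highly do you value analytics?",
    "Which features do you look for in a product?",
    "How satisfied are you with onboarding?",
]

def ordered_for(respondent_id: str) -> list[str]:
    rng = random.Random(respondent_id)  # deterministic per respondent
    shuffled = questions.copy()
    rng.shuffle(shuffled)
    return shuffled

print(ordered_for("user_42"))
```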
Reporting bias
Reporting bias is described as the selective revealing of information, suppression of information, or a combination of the two. It’s worth noting that reporting bias can occur before, during, and after surveys by both respondents and surveyors.
Common examples include:
- Premature reporting of results
- Selective reporting of outcomes
- Under-reporting of negative results
- Not reporting conflicts of interest
Other scenarios where survey results are skewed by the way they’re reported also qualify.
How to fix reporting bias in survey questions
A few ways to avoid reporting bias include:
- Transparent outcomes. If everyone is able to readily access the outcomes, responses, and results, then selective reporting is less likely to occur.
- Multiple encoders. Having multiple data analysts independently interpret, encode, and report survey responses mitigates the risk of intentional or unintentional confirmation bias (a simple agreement check is sketched after this list).
- Follow-up interviews. Conducting interviews with customers can help you verify if the interpretation of their responses is consistent with their actual beliefs and opinions.
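For the multiple-encoders point, a simple agreement check between two coders can surface interpretations that drift apart. A minimal Python sketch with hypothetical sentiment labels:

```python
# A minimal "multiple encoders" check: two analysts independently code the
# same open-ended responses, and low agreement flags interpretations that may
# be drifting toward one analyst's expectations. Labels are hypothetical.
coder_a = ["positive", "negative", "neutral", "positive", "negative"]
coder_b = ["positive", "negative", "positive", "positive", "negative"]

pairs = list(zip(coder_a, coder_b))
agreement_rate = sum(1 for a, b in pairs if a == b) / len(pairs)
print(f"Raw agreement between coders: {agreement_rate:.0%}")

# Items the coders disagree on are good candidates for a follow-up interview.
disputed = [i for i, (a, b) in enumerate(pairs) if a != b]
print(f"Disputed response indexes: {disputed}")
```

More formal agreement statistics such as Cohen’s kappa exist, but even raw agreement plus a list of disputed items is enough to start a review.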
Survivorship bias
Survivorship bias occurs when teams focus their research on individuals or groups that have already passed specific selection criteria while ignoring those who haven’t. In SaaS, the most common scenario would be focusing exclusively on users that have reached the activation point or are paying subscribers.
Narrowing your analysis to subsets of users who have already reached a certain threshold in the user journey can lead product teams to false conclusions based on responses from this non-representative sample pool.
Here’s an example:
“Do you think our product is affordable?”
Depending on who you ask, you’ll get different answers.
If you only survey:
- Active users: most of them will say yes.
- Paying customers: all of them will say yes.
- Power users: many of them will say that the product is underpriced.
In contrast, including inactive users or churned customers in your research would bring different results.
How to fix survivorship bias
It can be hard to show the same survey to all of your users. For instance, in-app surveys won’t be seen by inactive users or churned customers. That said, you can show different surveys to different groups while asking them all the same questions.
In practice, that means surveying your power users to see how they feel about your product’s features, pricing, and overall user experience, then asking the same set of questions in churn surveys or win-back email campaigns.
This helps you balance your results between your most and least successful users.
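Concretely, a cohort comparison like the one below puts survivors and non-survivors side by side. The cohort names and 1-10 pricing scores are made up for illustration.

```python
# A minimal cohort comparison for survivorship bias: put the same 1-10
# pricing question to power, active, and churned users, then read the
# averages side by side. Cohort names and scores are made up.
pricing_scores = {
    "power_users": [9, 8, 9, 10, 8],
    "active_users": [7, 8, 6, 7, 8],
    "churned_users": [3, 4, 2, 5, 3],  # reached via churn surveys / win-back emails
}

for cohort, scores in pricing_scores.items():
    average = sum(scores) / len(scores)
    print(f"{cohort}: average pricing score {average:.1f}")
```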
How to eliminate survey biases to improve your survey data
Asking bad survey questions (or too many questions) can lead to equally bad answers. To make sure your customer feedback isn’t tainted by poor survey design, here are four ways to eliminate survey biases and collect more accurate data.
Avoid biased survey questions
The easiest way to stumble into flawed survey results is to ask the wrong questions.
Here are a few biased survey questions you should avoid:
- Leading questions. Leading questions are questions that are framed, phrased, and asked in a manner that leads respondents toward a specific answer.
- Multiple answer questions. Allowing respondents to choose more than one answer per question will make it difficult to interpret their responses without bias.
- Loaded questions. Loaded questions contain a controversial or unjustified assumption that attempts to trap users into agreeing with it.
- Jargon in survey questions. Using technical jargon, industry terms, or obscure expressions could make it difficult for respondents to understand your survey questions.
- Double-barreled questions. Double-barreled questions are questions that include more than one inquiry but only allow one answer (e.g. rate our product’s features and pricing from 1-10).
Ask fewer questions
While asking the wrong questions is an issue, asking too many questions can cause problems as well. According to data from SurveyMonkey, surveys that took more than seven or eight minutes to complete saw completion rates drop by 5% to 25%.
In summary, keep your survey as short as possible to reduce the risk of non-response bias.
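If you want a rough pre-send check on survey length, you can estimate completion time from the question count. The 30-seconds-per-question figure below is an assumption for illustration only; calibrate it with your own pilot tests.

```python
# A rough pre-send length check. The 30-seconds-per-question figure is an
# assumption for illustration; replace it with timings from your own pilots.
SECONDS_PER_QUESTION = 30
MAX_MINUTES = 8  # completion rates reportedly drop past the 7-8 minute mark

def estimated_minutes(question_count: int) -> float:
    return question_count * SECONDS_PER_QUESTION / 60

for n in (5, 10, 15, 20):
    minutes = estimated_minutes(n)
    flag = "  <- trim this survey" if minutes > MAX_MINUTES else ""
    print(f"{n} questions ~ {minutes:.1f} min{flag}")
```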
Minimize bias by adding open-ended questions to your survey
Giving users the option to answer questions in their own words minimizes bias and leaves fewer details open to interpretation. The more detailed responses are, the easier it is to avoid acquiescence bias and dissent bias (respondents’ tendencies to agree or disagree by default, close cousins of the extreme response bias described above).
Segment survey respondents
Another effective approach is to segment survey respondents to minimize bias.
Segmentation involves categorizing respondents into different groups based on specific criteria, which can help you analyze their responses more accurately and make meaningful comparisons.
Start by defining clear research objectives to guide your segmentation efforts. Choose relevant segmentation criteria that align with your research goals, such as demographics, psychographics, or behavioral factors. Ensure that these criteria are chosen carefully and are pertinent to the insights you seek.
Craft survey questions that are tailored to each segment’s characteristics, making sure that questions are unbiased, neutral, and free from leading language. Provide balanced response options for multiple-choice questions to avoid favoring any particular segment.
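Once responses come in, analyze them per segment rather than as one blended pool. A minimal Python sketch, with hypothetical segments and scores:

```python
# A minimal sketch of segment-level analysis: tag each response with the
# respondent's segment, then compare answers per segment instead of in one
# blended average. Segments and scores are hypothetical.
from collections import defaultdict

responses = [
    {"segment": "startup", "score": 9},
    {"segment": "startup", "score": 8},
    {"segment": "enterprise", "score": 6},
    {"segment": "enterprise", "score": 7},
]

scores_by_segment = defaultdict(list)
for response in responses:
    scores_by_segment[response["segment"]].append(response["score"])

for segment, scores in scores_by_segment.items():
    print(f"{segment}: average score {sum(scores) / len(scores):.1f} (n={len(scores)})")
```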
Conclusion
As you can see, there are various types of response biases that could impact the veracity of gathered data. Similarly, there are plenty of pitfalls that surveyors could fall into during the survey design process. If you follow the strategies in this guide, then you’ll be one step closer to collecting accurate customer feedback.
If you’re ready to reap the benefits of microsurveys and survey analytics, then it’s time to get your free Userpilot demo today!