Bad Survey Questions: What to Avoid When Collecting Feedback in SaaS
Ever considered how much bad survey questions can skew your customer survey responses?
Well, you should.
Not only do they kill data quality, but they can also hurt the overall customer experience with your product.
Keep reading as we explore the bad survey formats SaaS companies commonly use, the best ways to fix them, and how to structure questions for effective feedback collection.
- Bad survey questions are questions framed in a way that affects the objectivity of survey responses.
- To identify them, look out for questions that seem to have pushed the survey creator’s bias or subjective opinion.
- Other ways of identifying a bad survey question are by looking out for questions that are vague, hard to answer, or require multiple pieces of information from the respondents.
- Survey fatigue happens when respondents lose interest in taking surveys, and bad survey questions are a common cause.
- Common examples of bad survey questions are leading questions, loaded questions, biased questions, and double negatives.
- Effective techniques for framing survey questions include asking personalized questions, outlining the purpose of the survey, asking clear and easy-to-understand questions, and avoiding questions with narrow answer scales.
- Ready to start collecting meaningful feedback in-app? Get a Userpilot demo and see how we can help.
What is a bad survey question?
A bad survey question is one that prevents respondents from providing objective answers in research.
This is because it contains biased language, making it difficult for survey respondents to communicate their true thoughts, preferences, and experiences.
Bad survey questions are one of the leading causes of survey bias and limit the useful feedback you’ll get from customers.
All of which defeats the purpose of surveys as a customer feedback strategy for SaaS companies.
How to identify bad survey questions
Bad survey questions are not always easy to spot.
Some common survey mistakes are:
- Biased language that influences respondents: a good way to spot this is to look for adjectives or a tone that stems from the survey creator’s personal judgment.
- Vague, hard-to-understand questions: if your questions leave respondents confused or irritated, that’s a sign of a bad survey question.
- Requests for multiple pieces of information at once: be mindful when a single question demands several answers before respondents can complete the survey.
The consequences of bad survey questions
The common pitfalls of bad survey questions are:
- Survey fatigue: survey fatigue happens when a respondent becomes uninterested in taking your surveys. It comes in two forms: pre-survey fatigue and survey-taking fatigue.
- Low response rates: badly worded questions might not be understood, leading customers to skip the survey altogether.
- Inaccurate responses: poor survey questions can force respondents to answer inaccurately through no fault of their own.
Bottom line: you won’t be able to analyze the feedback you collect (if any) or make any informed decisions.
Examples of bad survey questions
There are different types of bad survey questions. Below you’ll find the most common ones.
Leading survey questions
A leading question opens with a subjective view or pre-frames the context. For instance, a student asks a friend, “How strict is the new math teacher?”
The friend’s response will likely quantify the teacher’s strictness, because the question assumes the teacher is strict, even though nothing about strictness has actually been said about the new teacher.
For example, in SaaS you might see the following leading questions:
- How great is our hard-working customer service team?
- What problems do you have with customer support?
This gives little room for objectivity and instead leads to a certain answer.
How to fix a leading question?
To fix a leading question, start by taking out the adjective, modifier, or context around the question.
Ensure your question is framed in a way that shows curiosity, like “How is our customer service?” rather than “How great do you think the customer service team is?”
Notice the difference between the two: the first shows genuine interest, while the second feels like you’re fishing for compliments.
Double-barreled survey questions
As the name implies, a double-barreled question is two-sided. It’s structured in a way that requires two different answers to two different issues.
You’ll often find it in surveys that measure the success of a new product feature. It’s usually a two-in-one question joined by a conjunction like “and” or “or”.
It may seem smart, like killing two birds with one stone. Only this time, it comes at the expense of the quality of your survey results.
How to fix a double-barreled question?
Ensure each question helps you achieve just one goal only.
If you want your users to answer two separate questions, make two separate microsurveys. Don’t overcomplicate the survey.
Try answering the question yourself before sending out the survey; that’s an easy way to identify combined questions. Look out for conjunctions that mark the start of a second question or issue.
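If you maintain a large question bank, you can even automate a rough first pass. Below is a minimal sketch, assuming your questions are plain strings; the conjunction list is illustrative, not exhaustive, and hits should only prompt a re-read, since “and” also appears in perfectly good questions:

```python
import re

# Conjunctions that often join two separate asks into one question.
# This is a rough heuristic, not a grammar parser.
CONJUNCTIONS = {"and", "or", "as well as", "plus"}

def may_be_double_barreled(question: str) -> bool:
    """Flag a question that might bundle two issues together."""
    text = question.lower()
    return any(re.search(rf"\b{re.escape(c)}\b", text) for c in CONJUNCTIONS)

print(may_be_double_barreled("How satisfied are you with our pricing and support?"))  # True
print(may_be_double_barreled("How satisfied are you with our support?"))              # False
```

A flagged question isn’t automatically bad; the check just surfaces candidates for a human reviewer to split into separate microsurveys.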
Below is an example of a new feature adoption in-app survey that doesn’t contain a double-barreled question.
Loaded survey questions
A loaded question usually implies a fact or detail about the respondent. It’s structured to confirm details the respondent never gave out in the first place.
It’s sometimes mistaken for leading questions that make suggestions about a third party.
Loaded questions are accusatory and assume details about the respondent in relation to your product. For example, “Why do you find the dashboard hard to use?” assumes the respondent struggles with the dashboard.
How to fix a loaded question?
To fix a loaded question, start by avoiding assumptions in your questions. Only work with the information users or respondents give you, nothing more.
You can take it a step further by asking separate questions in your survey research.
Ask about your customer’s personal details, then use these details as context for subsequent questions.
Biased survey questions
A biased question is one that pushes for a one-directional answer. It closes off responses in other directions and is framed in a way that favors the survey maker. In short, there’s no room for objectivity.
An instance of this is asking the same questions to your whole customer base without using segmentation. You’re pretty much assuming everyone is qualified to answer.
Another instance is using closed-ended questions like “Would you share our product on social media?” with yes or no options. This assumes all of your customers use social media.
How to fix a biased question?
Using micro and macro segmentation to ask the right survey questions to the right people is a good way of fixing biased questions. Another option is to ask naturally worded questions.
Ask questions that don’t have a hidden intention. Your survey should also come with answer options that are not misleading.
Survey questions with double-negatives
Double negatives happen when you use two negative words in a sentence. They don’t have to follow each other directly; as long as both appear in the sentence, you have a double negative.
Grammar rules and even mathematics show that a double negative implies a positive. But in communication, double negatives lead to confusion.
For example: “The website isn’t easy to use unless I use the search bar.” Agree/disagree.
How to fix a double-negative question?
A way to avoid double negatives is to look out for cases where words like “not,” “don’t,” “unless,” “cannot,” and “no” are paired in the same sentence.
Then eliminate one of the negative words by rewriting the question, or convert it into a positive sentence if that makes the intended question clearer.
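That word-pairing check can also be sketched in a few lines of Python as a rough screening heuristic. The negative-word list is illustrative, and the tokenizer assumes straight apostrophes in contractions:

```python
import re

# Words that read as a double negative when two of them land in one
# sentence. Illustrative list, not exhaustive.
NEGATIVES = {"not", "no", "don't", "doesn't", "isn't", "cannot", "can't", "unless", "never"}

def count_negatives(sentence: str) -> int:
    """Count negative words in a sentence (lowercased word match)."""
    words = re.findall(r"[a-z']+", sentence.lower())
    return sum(w in NEGATIVES for w in words)

def has_double_negative(sentence: str) -> bool:
    return count_negatives(sentence) >= 2

print(has_double_negative("The website isn't easy to use unless I use the search bar."))  # True
print(has_double_negative("The website is easy to use with the search bar."))             # False
```

As with any heuristic, a hit means “re-read this question,” not “this question is wrong.”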
Survey questions with poor answer scale
It’s not just the questions that can be biased or misleading in your survey, the same goes for your answer options.
Scales are used to assess someone’s level of sentiment towards something. They are mainly used in customer satisfaction surveys and come with options like:
- Numbers: 1-5, 0-10
- Wording: Very unlikely – Very likely
- Emojis: Sad face – Happy face
Consider this question:
“Do you see yourself recommending this product to a friend?”
A simple yes/no seems like the obvious option, but it won’t get you a nuanced answer, especially since the question also speaks to the quality of the product.
How to fix survey questions with poor answer scales?
To correct survey questions with poor answer scales, use balanced and relatable scales. Be sure that your respondent won’t second-guess the accompanying options to a question.
Also, cover all the use cases and provide an open-ended question if necessary.
Random survey questions
Random questions have no bearing on the overall goal of a survey, whether it’s an automated survey or not. They are usually out of context and don’t address topics relevant to the respondent at that moment.
They are clearly driven by the survey maker’s interests, trying to push an agenda where it’s not needed.
An example is asking users how satisfied they are with customer support even though they haven’t contacted support in a while.
How to fix random questions?
The right targeting will ensure you don’t come up with questions that feel out of place. So, use behavioral segmentation and target your surveys toward specific users.
Also, be relatable. Make sure you have clear goals for your survey.
For instance, before launching an in-app survey, segment only your power users when asking about feature adoption. They are your best bet for getting insights about the subject matter.
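That kind of behavioral targeting can be sketched simply. In this example the user record, the `feature_uses_last_30d` field, and the threshold of 10 are all hypothetical; real products would pull these from their analytics:

```python
from dataclasses import dataclass

@dataclass
class User:
    email: str
    feature_uses_last_30d: int  # hypothetical usage metric

def power_users(users: list[User], threshold: int = 10) -> list[User]:
    """Keep only users who used the feature enough to have an informed opinion."""
    return [u for u in users if u.feature_uses_last_30d >= threshold]

users = [
    User("a@example.com", 25),  # heavy feature user
    User("b@example.com", 2),   # barely touched the feature
]
print([u.email for u in power_users(users)])  # ['a@example.com']
```

Only the first user would be shown the feature-adoption survey; the second gets skipped, which keeps the question from feeling random to them.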
Survey questions with jargon
Jargon is confusing language or terminology that only your company uses. Relying on such insider terms leaves your audience feeling confused or left out.
For instance, marketers talk about MQLs or conversion rates, terms the average person won’t understand.
How to fix a survey question with jargon
The opposite of jargon is using clear language. Use jargon only when you’re certain the intended respondents will understand.
Also, use proper segmentation to speak to your audience.
Avoiding bad survey questions: Best practices
For better feedback data collection, follow these best practices when creating survey questions.
Ask personalized questions
To avoid a random question, segment your users to ask them contextual and personalized questions throughout their customer journey. This way the data collected will be more insightful and your users won’t be confused.
These questions can be the type you ask when crafting a user persona or when you’re following up on bad NPS survey feedback.
Clearly outline the purpose of the survey questions
Your survey should have a general goal related to your product, and each survey question should have a specific goal that yields insightful data. Every question should bring you one step closer to the overall goal.
Also, before jumping into the questions, tell your users what the survey is for. If it’s for improving customer experience, make sure the questions are about that.
Keep questions simple, straight to the point, and easy to understand
In coming up with questions, use plain and easy-to-understand language. Avoid jargon, negative wording, or assumptions.
Steer clear of biased language or heavily contextual questions which can end up becoming leading or loaded questions.
Avoid having too many questions in one survey
While this depends on the type of question you are asking, generally avoid bombarding your users with a long list of questions.
Ask 1-2 questions (a mix of close-ended and open-ended) so you can collect both qualitative and quantitative data. Don’t try to solve multiple issues with one survey.
Avoid having a narrow answer scale
If your question is not highly specific, avoid “yes/no” answers. Leave room for your users to say what they think.
Doing this will give you richer insights than a closed-ended question alone.
Behind every successful SaaS company is a product built on customer feedback. If you can constantly come up with great customer survey questions and collect granular and contextual feedback, you’ll have relevant data to work with.
Ready to start collecting meaningful feedback in-app? Get a Userpilot demo and see how we can help.