Surveys help you discover what customers think about your products and make it easier to understand the needs of various demographics. But survey research can be complicated, and biased questioning on surveys can easily lead to flawed data, misinformed decisions, and failed strategies.
In this article, we’ll show you examples of biased wording to help you avoid some of the most common survey missteps. By asking effective, non-leading survey questions, your business can gather accurate insights that lead to informed decisions. Here are seven examples of survey bias and how to avoid them.
Customers can’t accurately respond to questions if they don’t know the answer. For example, asking questions such as “Will you use our service again in the future?” leads to skewed data and response bias because customers can’t predict the future.
This type of question tends to limit answer choices to yes, no, and maybe, which might lead to an artificially high number of maybes since it’s the safest choice. Instead, use a Likert scale question that asks “How likely are you to use our service again?” with answers ranging from “very likely” to “very unlikely.”
Prevent impossible questions by providing a full range of plausible responses (e.g., include unsure if applicable) and making sure none of the answer choices overlap (e.g., 1 to 5, and 5 to 10) or can be confused with one another (e.g., mobile device, cell phone).
Social desirability bias happens when people answer questions in ways they think will be viewed as socially acceptable by others.
Any topics that touch on socioeconomics, personal habits (e.g., alcohol use), or other controversial issues are prone to social desirability bias and might elicit a politically correct response rather than an honest one. This is why behaviors perceived as good tend to be overreported relative to behaviors perceived as bad. You can minimize social desirability bias by avoiding judgmental or biased wording in your surveys.
Reduce social desirability bias by:
Strongly emphasizing the confidentiality of survey responses
Making it clear that there are no right or wrong answers
Using a third-party platform to conduct your survey
Keeping the survey’s purpose vague
Leading questions are those that suggest a desired response. Rather than gaining unbiased information, these questions typically seek to confirm the survey creator’s assumptions and often alienate the reader.
A survey question such as, “Our restaurant was rated number one by Food Eater Monthly. How much did you enjoy your meal?” uses persuasive framing and assumes the customer had a pleasant experience. Instead, simply ask, “How would you describe the quality of your meal?”
Similarly, leading answer options leave the survey taker feeling manipulated and guilted into choosing a specific one. For example: “Would you volunteer to help sick animals? Yes, in a heartbeat! No, I’m too busy to help sick animals.” Instead, consider questions such as “How often would you like to volunteer to help sick animals?” or “How likely are you to volunteer to help sick animals?” with a Likert scale, allowing people to answer honestly without feeling shamed.
Avoid leading questions and answer options by keeping opinions and personal preferences out of your survey. Take care not to influence respondents with unnecessary framing.
Loaded questions put the respondent in an awkward position by resting on a false or biased premise. If you ask “What do you like most about our service?” you’re assuming the customer likes your service, when in reality your service may simply be the most convenient option, or the customer may not have taken the initiative to switch.
To avoid loaded questions, think about whether each question applies to every respondent. You might first ask a question such as “Did you enjoy using our service?” and then branch the survey for those who did and did not enjoy using the service.
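One way to implement this kind of branching is a simple lookup table that maps each answer to a follow-up question. The sketch below is illustrative only: the question IDs and answer values are invented, and most survey platforms offer built-in skip logic that accomplishes the same thing.

```python
# Hypothetical branch map: question IDs and answers are invented for
# illustration; real platforms provide skip logic for this.
BRANCHES = {
    "enjoyed_service": {
        "yes": "what_did_you_like",   # follow-up for satisfied customers
        "no": "what_went_wrong",      # follow-up for dissatisfied customers
    },
}

def next_question(question_id, answer):
    """Return the follow-up question ID for a given answer, or None
    if no branch applies and the survey should continue normally."""
    return BRANCHES.get(question_id, {}).get(answer)
```

Because unmatched answers simply return `None`, respondents for whom a follow-up doesn’t apply never see the loaded question.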
While loaded questions are often unintentional, they can come across as manipulative and cause people to quit your survey before finishing. If you see an abnormally high drop-off rate, check carefully for loaded (or leading) questions before running the survey again.
Remember that the order in which questions are asked is often just as important to preventing survey bias as the questions themselves. The carry-over effect occurs when an earlier question colors the respondent’s thinking about a subsequent one.
For example, if you poll a customer about their satisfaction with your returns process, you force them to think about a potentially negative experience. Following this question with a general question about satisfaction might lead the respondent to focus on that negative interaction with your company and provide a lower score than they otherwise would. In this case, simply separating the question about a specific scenario from one about their general impression might be enough to prevent carry-over bias.
Carefully consider the order in which questions are asked to ensure they flow logically and that they don’t unintentionally influence one another.
Multiple-choice questions that use long lists tend to cause order bias—either primacy bias (the tendency to select one of the first choices) or recency bias (the tendency to select the last response option). If a multiple-choice question has eight response options, survey takers are generally less likely to select choices in the middle of the pack.
To address this, most survey platforms allow answer choice randomization. With this strategy, each survey respondent is presented with the same choices, but in a different order. However, if a final answer choice is “other,” it should not be randomized with the rest. In that scenario, be sure to include an open response field so respondents can clarify their answer in their own words.
Reduce order bias by limiting the number of options for multiple-choice questions. If you find it difficult to keep the number of choices down, consider consolidating similar options or splitting the question into two. Also, be sure to make use of answer choice randomization through your survey platform so that survey-takers are less likely to focus on only the top-most options.
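If you’re assembling a survey yourself rather than relying on a platform’s built-in randomization, the idea can be sketched in a few lines: shuffle the regular options while pinning “Other” to the end. The function name and option labels below are illustrative, not from any particular survey tool.

```python
import random

def randomized_choices(options, pinned=("Other",)):
    """Shuffle answer options to reduce order bias, while keeping
    pinned choices (e.g., "Other") fixed at the end of the list."""
    shuffled = [o for o in options if o not in pinned]
    random.shuffle(shuffled)
    # Preserve the original relative order of any pinned options.
    return shuffled + [o for o in options if o in pinned]
```

Each respondent would get a fresh call to this function, so the same choices appear for everyone, just in different orders.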
Sample size is the number of participants your survey requires to statistically represent the population. Your population is the demographic of people you want to learn about (e.g., male vegetarians in California from 18 to 35 years old).
Surveying too few respondents magnifies any response bias and leads to shallow analysis of your results. While you may not always be able to determine the exact size of your population, a rough estimate is usually sufficient.
Generally, a sample size of 30 is the bare minimum from which meaningful conclusions can be drawn for any population, and larger populations require larger sample sizes (up to a certain point). Online sample size calculators can also help you account for factors such as margin of error and confidence level.
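If you’d rather compute a sample size directly, a standard approach is Cochran’s formula with a finite population correction. The sketch below assumes a 95% confidence level (z = 1.96) and the most conservative response proportion (p = 0.5); adjust these for your own margin of error and confidence requirements.

```python
import math

def sample_size(population, z=1.96, margin_of_error=0.05, p=0.5):
    """Cochran's formula with finite population correction.
    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumed response proportion."""
    # Sample size for an effectively infinite population.
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    # Correct downward for a finite population of the given size.
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)
```

For example, a population of 1,000 at a 5% margin of error and 95% confidence works out to a sample of 278, and the required sample grows with the population size before leveling off around 385.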
Opting for non-leading survey questions that don’t force users to predict the future, shame them into selecting a specific response (or into terminating the survey), or influence their thought process on future questions can help you get the most out of your surveys.
Surveys are a wonderful way to gain valuable, truthful insights into your customers’ opinions of your business. Ultimately, surveys can help you improve processes, products, and services to better suit your consumers’ needs and ensure a loyal customer base.
Just always remember that the data you gather from surveys will only be as good as the questions you ask.