7.4: Designing surveys

How a survey is designed directly impacts the success of the market research project. Potential respondents must have knowledge of the survey topic, and they should have experienced the events, behaviours, or feelings they are being asked to report. If one is asking participants for second-hand knowledge—asking counsellors about clients’ feelings, asking teachers about students’ feelings, and so forth—it should be clarified that the research reports the survey participant’s perspective and their perception of what is happening in the target population. A well-planned sampling approach ensures that participants are the most knowledgeable population to complete the survey.

Most surveys are made up of closed-ended questions that are measured against a scale. Closed-ended questions require each respondent to choose one of the provided answers, which ensures consistency and eases data analysis. Open-ended questions are more qualitative in nature: respondents write their responses freely and are not forced into answers that limit the range of their responses. Open-ended questions can provide helpful insights, but they take longer to answer and can be challenging to analyze across responses, as the answers can vary widely. It is recommended to include at least one open-ended question in a survey, but they should be used sparingly. As mentioned in the previous chapter, survey testing will provide insights as to how many – or how few – open-ended questions should be included.

Regardless of how a survey is distributed, it’s crucial that the questions are written with as few errors as possible. A survey can include any number and type of questions, and more complicated questions should appear only once respondents are comfortable with the survey.

Market Research in Action: Alex

Older man in shorts and a t-shirt, wearing a bike helmet and cycling on a country road.
Image under license by Shutterstock.com

Alex would like to create a survey for the current members of their coffee loyalty program, as they have noticed that sign-ups have decreased significantly over the past few months, and email open rates from current members have also been decreasing. After going through the steps to identify a market research problem, they have identified the decision problem as: “What types of rewards do customers prefer within a loyalty program—discounts, free items, personalized offers, or experiential rewards?” Alex has decided to create a survey for current loyalty program members and email it to them with an incentive. They have created a draft survey in SurveyMonkey and would like to test it with some employees and some marketing colleagues to ensure it is free of errors and biases.

Some of Alex’s draft questions illustrate common survey design errors; these questions and some proposed solutions are listed below, to help ensure the survey that Alex sends out to their customers will have a high completion rate and be directly related to the market research decision problem.

Common survey question errors

Question that is too general

Alex suggested beginning the survey with a fairly simple question:

1.  Do you like rewards?

  • Yes
  • No
  • Not sure

This question, although related to the market research decision problem, is very general, as the term ‘rewards’ could mean many things, not only loyalty programs. Alex would do better to ask a more specific question related to the cafe’s rewards program in order to understand the benefits that are most valuable to their customers.

Demographic questions in the wrong place

Alex suggested including this question near the start of the survey:

2. State your annual household income:

  • $20,000 and under
  • $20,001 to $50,000
  • $50,001 to $75,000
  • $75,001 to $100,000
  • $100,001 and over

This question might not make everyone uncomfortable, but many people don’t like talking about how much money they make in a year, particularly if the survey is not anonymous. This is why any demographic questions – such as address, phone number, email address, gender, income, etc. – should always be placed at the end. The rationale is that respondents will have already answered the bulk of the survey questions and will be less likely to abandon the survey if only these demographic questions remain.

One additional point about demographic questions is that they should only be asked if the answer is directly related to the market research decision problem. In the case of Alex’s survey, household income may be of interest in relation to the benefits of a loyalty program, but other more personal questions, such as education level or gender, might not provide insights into the decision problem. Also, Alex may already have the contact information for the loyalty program members in a database, so it’s not necessary to ask these respondents to fill it in again. In sum, only ask the demographic questions that are vital to the market research decision problem.

 

Question that is too specific

Alex wanted to really dive into the details of the other loyalty programs that the survey respondents are part of, so they suggested this open-ended question in the survey:

3. Please list the loyalty program that you use the most, such as a grocery or gas loyalty program. Think about how many times you accumulate points every year with this program and list 3 to 5 benefits of this program.

As mentioned briefly earlier in this module, open-ended questions are challenging to code because the broader the questions are, the wider the range of answers the respondents will provide. This makes data analysis really tricky. Secondly, most people don’t really remember how many loyalty programs they belong to and definitely wouldn’t remember how many times they accumulate points every year. Instead, if Alex wants to know what types of loyalty programs their current customers participate in, a closed check-box question could give Alex insight into which programs their customers use, and those programs’ benefits could then be reviewed.

The question could be rephrased to be:

3. Of all of the loyalty programs listed below, please select those for which you think you have accumulated points over the past 12 months. This typically involves using a loyalty program card at a point of purchase, like a cash register or gas pump. Please select all that apply.

  • Gas pump program A
  • Grocery program B
  • and so on…

Question that is presumptuous

Alex has a hunch that one of the reasons their customers are part of the loyalty program is the discount, so they suggested this question as part of the survey:

4. Which discount do you prefer: 10% off or 20% off?

This question assumes that one of the benefits of the loyalty program to the respondent is a discount, and that they would be able to choose between 10% and 20%. The question is also flawed because virtually no one would answer it with a preference for a 10% discount, so the data from this question would not be helpful in finding solutions to the market research decision problem.

Question based on imagination

Alex is interested in exploring a variety of scenarios for expanding the loyalty program, so has come up with this question for the survey:

5. Imagine that you and a group of friends are coming for a coffee date in the summer. As a loyalty member, you would get 50% off your drink if your friends purchase three drinks at regular price and sit inside. How many times would you take advantage of this offer every year?

This question is about a hypothetical situation that may or may not ever occur. In surveys, it’s always better to construct questions that address past behaviour or ask about preferences, rather than asking people to put themselves in an imaginary situation.

Question with vague quantifiers

Alex wants to understand the frequency of visits for the survey respondents and will then try to match that information with the data from the loyalty card program tracking software. Alex included this question in the draft survey:

6. Over the past 4 weeks, how often have you made a purchase at the cafe?

  • Often
  • Quite frequently
  • Occasionally
  • Rarely
  • Unsure

The challenge with this question is that there is no universal indicator for ‘often’ or ‘occasionally’. For a frequent coffee drinker, ‘often’ could be every morning and twice on weekends, while someone who goes once per month might think that is ‘occasionally’. It is always best to have a scale that will be interpreted the same way by all respondents. An edit to this question could be:

6. Over the past 4 weeks, how often have you made a purchase at the cafe?

  • I didn’t make a purchase at the cafe in the past 4 weeks
  • 1-2 times
  • 3-4 times
  • More than 4 times

Question with low recall

Alex knows from the cafe loyalty program database that members visit the cafe more than once and mostly live in the local area. Alex is really interested in how often the loyalty program members purchase specialty coffees, so they included this question in the survey:

7. Over the past 18 months, how many specialty coffees have you purchased? A specialty coffee is any coffee other than a drip or filter coffee.

  • Zero
  • One to four
  • Five to nine
  • Ten to fourteen
  • Fifteen or more

The challenge with this question is that respondents will not be able to remember how many specialty coffees they purchased over such a long time frame, so their recall will not be accurate. Unless the respondent never purchases specialty coffee, this question would probably be answered with a guess by most respondents. Alex should rephrase the question to cover the past week or month, so that respondents can accurately recall how many lattes, Americanos, or cappuccinos they have purchased.

Question with jargon

Alex wrote this question to understand a general level of satisfaction with the existing loyalty program.

8. Please indicate your level of satisfaction pertaining to the gamut of rewards integrated within our loyalty program vis-à-vis your consumer experience.

The challenge with this question is that it’s overly complicated and uses terms like ‘gamut’ and ‘vis-à-vis’ that may not be understood by all survey respondents. The best practice is to keep the language as simple as possible so all respondents can understand what is being asked.

Alex could reword this question to be simpler:

8. How satisfied are you with the rewards offered in the cafe loyalty program?

  • Strongly dissatisfied
  • Somewhat dissatisfied
  • Neither satisfied nor dissatisfied
  • Somewhat satisfied
  • Strongly satisfied

Leading question

Based on Alex’s knowledge of their customers, they are pretty sure that experiential rewards are a big selling point of the cafe’s loyalty program, so Alex included this question in the survey:

9. Given that most customers prefer experiential rewards, do you agree that this should be the primary focus of our loyalty program?

This is a leading question: it primes the respondent with unproven data (that most customers prefer experiential rewards) and then asks the respondent to agree with this finding, which leads the respondent to inevitably agree. Instead, Alex should focus on probing customers about their individual interest in the cafe’s loyalty program, perhaps with a ranking question like this:

9. Please rank all of the existing benefits of the cafe’s loyalty program from top priority to lowest priority.

  • Free drink on birthday
  • Double stamp days
  • Members-only events
  • Member pre-sales of custom merchandise
  • Members-only line at checkout

 

Double-barrelled question

Alex is interested in understanding the preferences of their customers, particularly related to possible new benefits of the cafe’s loyalty program. Alex included this question in the survey:

10. Do you prefer receiving free items or discounts on your favourite menu items?

  • Yes
  • No
  • Not sure

The challenge with this question is that Alex is forcing the respondent to choose between free items and discounts, but some respondents may prefer both, or neither may be a benefit to them. A double-barrelled question essentially tries to ask two questions at once, making it impossible to answer. Alex could break it into two questions, probing free items and the level or type of discount separately, so respondents could answer each more accurately.

Question not related to the market research decision problem

Alex thought that since they are sending out a survey to the cafe loyalty members, it would be a good chance to ask the members some questions about the traffic after the Sunday morning cycling club. There have been some complaints about long lines and about cyclists parking their bikes and obstructing the patio and accessible entrance.

Alex added this question to the survey:

11. How much difficulty have you faced accessing the café due to bike parking or congestion caused by the cycling club on Sundays?

  • No difficulty at all
  • Slight difficulty
  • Moderate difficulty
  • Significant difficulty
  • Extreme difficulty

The issue with this question is that the survey is being sent to cafe loyalty members to help find answers to the market research decision problem: “What types of rewards do customers prefer within a loyalty program—discounts, free items, personalized offers, or experiential rewards?” This question is targeted specifically at cafe customers who frequent the cafe on Sundays, which may or may not include the loyalty members. Also, because this question is not related to the survey topic, it will impede the flow of the questions and inevitably reduce completion rates.

Alex should consider another type of market research to understand any challenges related to the cycling club visits to the cafe on Sundays and not include this question in the survey.

This video from the Pew Research Center shows some examples of commonly found errors in surveys.

In general, in order to pose effective survey questions, researchers should do the following:

  • Ensure the answers to the questions will help meet the objectives of the survey.
  • Keep questions clear and succinct so they will be understood the same way by all respondents.
  • Ensure that questions, answer choices, and their order are as neutral as possible, i.e., they avoid suggesting answers.
  • Ensure any key terms are clearly defined.
  • Make sure respondents have relevant lived experience to provide informed answers to the questions.
  • Use filter questions to avoid getting answers from uninformed participants.
  • Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, and pose more than one question at a time.
  • Imagine how respondents would feel responding to questions.
  • Place sensitive questions, such as demographic information, at the end of the survey, when respondents will feel more comfortable answering them.
  • Ensure the survey is short enough that respondents will be able to concentrate until the end.
  • Get feedback, especially from people who resemble those in the researcher’s sample.

References

DeCarlo, M., Cummings, C., Agnelli, K., & Laitsch, D. (2022, June 28). Graduate research methods in education (leadership): A project-based approach (Version 2.12.14.17-19.22). BC Campus. CC BY-NC-SA 4.0.

OECD (2012), “Good Practices in Survey Design Step-by-Step”, in Measuring Regulatory Performance: A Practitioner’s Guide to Perception Surveys, OECD Publishing, Paris.

License


Introduction to Market Research Copyright © by Julie Fossitt is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
