Emily-Ana Filardo
Developing an effective survey can be something of an art. I know what you’re thinking: how difficult can it be, right? You want to know how people feel about something, so you ask them a bunch of questions. Essentially, that is what survey development is. But ensuring that you get the kind of data you’re looking for (i.e., how people actually feel about the specific thing you are hoping to measure) can be a difficult task. However, developing a quality survey that taps an important issue can be a supremely satisfying endeavor.
The first hurdle that any survey developer faces is the idea that a question or item can only be interpreted in the way that they intended it. Years of experience have shown me that this is absolutely wrong. For example, if I wanted to know how happy someone was with their physical work environment (e.g., the size of their office, the temperature within the office, etc.), I might ask “Overall, how satisfied are you with your work environment?” It seems to be a pretty straightforward question, but work environment might mean different things to different people. For some it could be the physical environment that I intended, but for others it could mean the social environment or the safety of the work they do. I should have been more specific about what I was asking. If I wanted to know about the physical environment, why not say “…your physical work environment (e.g., size of your office, temperature within the office, etc.)?” The chances that I will actually measure what I set out to measure are greatly increased. One way to ensure that your item is being interpreted in the way that you intend is to pilot test your questions. This could be as simple as having a few friends and colleagues respond to your questionnaire in your presence and asking them whether they had any difficulty interpreting any of the questions.
While the interpretation hurdle is the first that a survey developer faces, it is definitely not the only one. The best advice I received as I was learning about survey development was to always follow the KISS rule. Keep it simple, stupid! Using this rule can help ensure that you avoid many of the other pitfalls encountered in survey development. Here is a list of some of the most common issues in poorly designed surveys:
- Using jargon
This is not the time to show off all of the big words that you know. Unless the jargon is absolutely necessary (i.e., you’re writing a survey for a specific population that uses those terms), use simple language. Don’t use ebullient when you mean enthusiastic or accoutrements when you mean accessories.
- Using double-barreled questions
A double-barreled question is a question that asks two questions in the same item. These questions are impossible for the respondent to answer and for the researcher to interpret. For example, “How satisfied are you with your pay and job conditions?” assumes that one’s pay and job conditions are linked. You know what they say about assuming, right? Someone could be very satisfied with their job conditions but think they should be paid more, or vice versa. However, since there’s only room for one response to this question, which aspect does the respondent choose to focus on, and how does the researcher know which one it was? Break these items out into two separate questions.
- Using double negatives
“Do you not want to not go to work?” Huh? A sure-fire way to make certain that your respondents have no idea what you’re asking is to use a double negative. However, that does not mean you should avoid negatively worded items altogether. Making sure that some of your items are negatively worded or reverse-scored allows you to pinpoint people with response tendencies or those who are not paying attention to the items. Double negatives don’t accomplish this task.
- Ignoring response biases
Since I mentioned response biases in my last point, let me elaborate. A person with an extreme response bias (responding to all items using the extreme points of the scale) or an acquiescent bias (responding to all items with fairly positive responses) generally is not paying attention to the content of the items, but simply responding on autopilot. However, when all items are keyed in the same direction, there is no way of determining whether the data reflect true feelings or a response bias. Adding reverse-scored items makes response biases easier to detect, since someone who answers positively to both a positively keyed and a negatively keyed question about the same issue is likely not reading the questions (a brief code sketch of this check appears after this list).
- Leading language
It is important to realize that how a question is worded can have a great deal of impact on how a person might respond to it. For example, asking “To what extent do you agree that a lack of gun control is ridiculous?” pretty much telegraphs your stance on gun control and what you expect the respondent’s stance will be as well. Because most respondents would like to be helpful to your research, you are likely to get the answer you’re looking for, but that may not be their true opinion on the matter.
- Survey is too long
My final point with regard to avoiding a poorly designed survey is to manage survey length. People are easily frustrated with very long, boring, repetitive surveys, and the quality of the data that you receive will likely decrease the longer the survey is. On average, people respond to about 2.5 questions per minute (assuming a mix of closed- and open-ended items), so a 5-minute survey works out to roughly 12 or 13 items. The ideal length of a survey is approximately 5 minutes. A survey can go to 10 minutes (about 25 items) and still provide useful data. However, any longer than 10 minutes and you run the risk of automatic responding to questions at the end of the survey or missing data due to people dropping out before they finish. A small number of well-written items that target the main issue are far superior to a large number of questions asking the same thing over and over again.
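To make the reverse-scoring idea concrete, here is a minimal sketch in Python of the kind of consistency check described in the response-bias point above. The item names, the 5-point scale, and the flagging threshold are hypothetical illustrations, not something from this article.

```python
# Minimal sketch: flagging possible response biases using reverse-scored items.
# Assumes a 5-point agreement scale (1 = strongly disagree, 5 = strongly agree);
# the item names, data, and threshold below are hypothetical examples.

SCALE_MAX = 5  # top of the rating scale; used to reverse-score negatively keyed items

# Positively and negatively keyed items that tap the same issue (job satisfaction).
positive_items = ["satisfied_with_job", "enjoy_coming_to_work"]
negative_items = ["dread_going_to_work", "would_quit_if_possible"]

def reverse_score(response, scale_max=SCALE_MAX):
    """Flip a response on a 1..scale_max scale (e.g., 5 -> 1, 4 -> 2)."""
    return scale_max + 1 - response

def flag_response_bias(respondent, threshold=1.0):
    """Return True if a respondent looks like they answered on autopilot.

    After reverse-scoring the negatively keyed items, an attentive respondent's
    positive and (reversed) negative items should tell roughly the same story.
    A large gap between the two averages suggests yes-saying (acquiescence) or
    inattention rather than genuine attitudes.
    """
    pos_avg = sum(respondent[i] for i in positive_items) / len(positive_items)
    neg_avg = sum(reverse_score(respondent[i]) for i in negative_items) / len(negative_items)
    return abs(pos_avg - neg_avg) > threshold

# Example: someone who agrees with everything, positive or negative ("yes-saying").
acquiescent = {
    "satisfied_with_job": 5,
    "enjoy_coming_to_work": 5,
    "dread_going_to_work": 5,       # agreeing here contradicts the items above
    "would_quit_if_possible": 5,
}

print(flag_response_bias(acquiescent))  # True -- the responses are inconsistent
```

Note that an attentive but dissatisfied respondent would answer low on the positive items and high on the negative ones, so after reverse-scoring the two averages would still line up and they would not be flagged; only inconsistent, autopilot-style responding trips the check.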
There are several other factors to consider when trying to construct a great survey or questionnaire, such as the number of response options for rating items (e.g., should there be a neutral midpoint? a couple of options or several?), the wording of the response options, and the appropriateness of ranking items. However, if you keep the six pitfalls mentioned above in mind, you will be well on your way to making a survey that collects useful information. Just remember to always KISS.