Pretesting the Questionnaire
Ultimately, designing the perfect survey questionnaire is impossible. However, researchers can still create effective surveys. To determine the effectiveness of your survey questionnaire, it is necessary to pretest it before actually using it. Pretesting can help you determine the strengths and weaknesses of your survey concerning question format, wording, and order.
There are two types of survey pretests: participating and undeclared.
- In a participating pretest, you tell respondents that the pretest is a practice run. Rather than asking respondents to simply fill out the questionnaire, participating pretests usually involve an interview setting in which respondents are asked to explain their reactions to question form, wording, and order. This kind of pretest will help you determine whether the questionnaire is understandable.
- When conducting an undeclared pretest, you do not tell respondents that it is a pretest. The survey is given just as you intend to conduct it for real. This type of pretest allows you to check your choice of analysis and the standardization of your survey. According to Converse and Presser (1986), if researchers have the resources to do more than one pretest, it might be best to use a participating pretest first, followed by an undeclared pretest.
General Applications of Pretesting:
Whether you use a participating or an undeclared pretest, pretesting should ideally also test specifically for question variation, meaning, task difficulty, and respondent interest and attention. Your pretests should also include any questions you borrowed from other similar surveys, even if those questions have already been pretested, because meaning can be affected by the particular context of your survey. Researchers can also pretest the following: flow, order, skip patterns, timing, and overall respondent well-being.
Pretesting for Reliability and Validity:
Researchers might also want to pretest the reliability and validity of the survey questions. To be reliable, a survey question must be answered by respondents the same way each time. According to Weisberg et al. (1989), researchers can assess reliability by comparing the answers respondents give in one pretest with their answers in another pretest. A survey question's validity, in turn, is determined by how well it measures the concept(s) it is intended to measure. Convergent validity is assessed by comparing a respondent's answer to another question measuring the same concept (the answers should agree), while divergent validity is assessed by comparing that answer to the respondent's response to a question asking for the exact opposite (the answers should differ).
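The test-retest comparison described above can be sketched as a simple percent-agreement calculation. This is a hypothetical illustration, not a prescribed procedure; the function name and sample answers are invented for the example, and in practice researchers often use more sophisticated reliability statistics.

```python
# Hypothetical sketch: test-retest reliability as percent agreement
# between two pretest administrations of the same question.
def percent_agreement(wave1, wave2):
    """Return the share of respondents who gave the same answer both times."""
    if len(wave1) != len(wave2):
        raise ValueError("both waves must cover the same respondents")
    matches = sum(a == b for a, b in zip(wave1, wave2))
    return matches / len(wave1)

# Example: five respondents answer the same question in two pretests.
first = ["yes", "no", "yes", "yes", "no"]
second = ["yes", "no", "no", "yes", "no"]
print(percent_agreement(first, second))  # 4 of 5 answers match -> 0.8
```

A score near 1.0 suggests the question is being answered consistently; a low score flags wording that respondents may interpret differently from one sitting to the next.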
For instance, you might include questions in your pretest that explicitly test for validity: if a respondent answers "yes" to the question, "Do you think that the next president should be a Republican?" then you might ask "What party do you think you might vote for in the next presidential election?" to check for convergent validity, then "Do you think that you will vote Democrat in the next election?" to check the answer for divergent validity.
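The Republican/Democrat example above can be expressed as a small consistency check: a "yes" to the target question should agree with the convergent item and disagree with the divergent item. This is a hedged sketch; the function and response encodings are invented for illustration only.

```python
# Hypothetical sketch: flag a pretest response as consistent when the
# convergent item agrees with the target answer and the divergent item
# gives the opposite answer, as in the Republican/Democrat example.
def validity_consistent(target, convergent, divergent):
    """target: "yes"/"no" to "Should the next president be a Republican?"
    convergent: party named for "What party might you vote for?"
    divergent: "yes"/"no" to "Do you think you will vote Democrat?"
    """
    converges = (target == "yes") == (convergent == "Republican")
    diverges = (target == "yes") == (divergent == "no")
    return converges and diverges

print(validity_consistent("yes", "Republican", "no"))  # True: answers line up
print(validity_consistent("yes", "Democrat", "no"))    # False: convergent item disagrees
```

Responses that fail such a check do not necessarily mean the respondent erred; a pattern of failures across many respondents is what suggests the question is not measuring the intended concept.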