Survey Research

Creating Questionnaire Questions

Developing well-crafted questionnaires is more difficult than it might seem. Researchers should carefully consider the type, content, wording, and order of the questions that they include. In this section, we discuss the steps involved in questionnaire development and the advantages and disadvantages of various techniques.

Open-ended vs. Closed-ended Questions

All researchers must make two basic decisions when designing a survey: 1) whether to administer it orally, in writing, or electronically, and 2) whether to use open-ended or closed-ended questions.

Closed-Ended Questions: Closed-ended questions limit respondents to a fixed set of answers. Participants choose either from a pre-existing set of responses, such as yes/no, true/false, or multiple choice with an option for "other" to be filled in, or from rating scale response options. The most common rating scale question is the Likert scale question. This kind of question asks respondents to read a statement (such as "The most important education issue facing our nation in the year 2000 is that all third graders should be able to read") and then rate it according to the degree to which they agree ("I strongly agree, I somewhat agree, I have no opinion, I somewhat disagree, I strongly disagree").

Open-Ended Questions: Open-ended questions do not give respondents answers to choose from; rather, they are phrased so that respondents are encouraged to explain their answers and reactions with a sentence, a paragraph, or even a page or more, depending on the survey. If you wish to gather information on the same topic as above (the future of elementary education), but would like to find out what respondents come up with on their own, you might choose an open-ended question like "What do you think is the most important educational issue facing our nation in the year 2000?" rather than the Likert scale question. Or, if you would like to focus on reading as the topic but still not limit participants' responses, you might pose the question this way: "Do you think that the most important issue facing education is literacy? Explain your answer below."

Note: Keep in mind that you do not have to use closed-ended or open-ended questions exclusively. Many researchers use a combination of closed and open questions; often researchers use closed-ended questions at the beginning of their survey, then allow for more expansive answers once the respondent has some background on the issue and is "warmed up."
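Once collected, closed-ended Likert responses are typically coded as numbers before analysis. A minimal sketch of that coding step is below; the variable names, the 5-to-1 coding direction, and the sample responses are hypothetical, not part of any particular survey instrument.

```python
# Hypothetical sketch: coding Likert-scale responses numerically for analysis.
# The 5 (strongly agree) to 1 (strongly disagree) direction is one common choice.
LIKERT_CODES = {
    "strongly agree": 5,
    "somewhat agree": 4,
    "no opinion": 3,
    "somewhat disagree": 2,
    "strongly disagree": 1,
}

# Made-up responses to the example statement about third-grade reading.
responses = ["strongly agree", "somewhat agree", "no opinion"]
codes = [LIKERT_CODES[r] for r in responses]
mean_agreement = sum(codes) / len(codes)
print(mean_agreement)  # -> 4.0
```

Whether "no opinion" belongs in the middle of the scale, or should be excluded from the mean entirely, is itself a design decision the researcher has to make and document.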


Rating scales: ask respondents to rate something, such as an idea, concept, individual, program, or product, on a closed-ended scale, usually a five-point scale. For example, a Likert scale presents respondents with a series of statements rather than questions, and respondents are asked to indicate the degree to which they agree or disagree.

Ranking scales: ask respondents to rank a set of ideas, items, etc. For example, a researcher can provide respondents with a list of ice cream flavors and ask them to rank the flavors in order of preference, with a rank of "one" representing their favorite. Ranking scales are more difficult to use than rating scales: they take more time, and they cannot easily be used in phone surveys since they often require visual aids. However, because they demand more of respondents, they may actually elicit more careful effort.

Magnitude estimation scales: ask respondents to provide numeric estimates as answers. For example, respondents might be asked: "Since your least favorite ice cream flavor is vanilla, we'll give it a score of 10. If you like another ice cream 20 times more than vanilla, you'll give it a score of 200, and so on. So, compared to vanilla at a score of ten, how much do you like rocky road?" These scales are obviously very difficult for respondents. However, they have been found to explain more variance than ordinal scales.

Split or unfolding questions: begin by asking respondents a general question, and then follow up with clarifying questions.

Funneling questions: guide respondents through complex issues or concepts by using a series of questions that progressively narrow to a specific question. For example, researchers can start asking general, open-ended questions, and then move to asking specific, closed-ended, forced-choice questions.

Inverted funneling questions: ask respondents a series of questions that move from specific issues to more general issues. For example, researchers can ask respondents specific, closed-ended questions first and then ask more general, open-ended questions. This technique works well when respondents are not expected to be knowledgeable about a content area or to have a well-formed opinion on an issue.

Factorial questions: use stories or vignettes to study judgment and decision-making processes. For example, a researcher could ask respondents: "You're in a dangerous, rapidly burning building. Do you exit the building immediately or go upstairs to wake up the other inhabitants?" Converse and Presser (1986) warn that little is known about how this survey question technique compares with other techniques.
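The magnitude estimation scale described above is, underneath, just ratio arithmetic: every answer is a multiple of the reference item's anchor score. A minimal sketch, using the ice cream example and hypothetical names:

```python
# Hypothetical sketch of the arithmetic behind a magnitude estimation scale:
# each answer is a ratio multiplied by the reference item's anchor score.
REFERENCE_SCORE = 10  # vanilla, the least favorite flavor, anchored at 10

def magnitude_score(times_preferred):
    """Score for an item the respondent likes `times_preferred`
    times as much as the reference item."""
    return REFERENCE_SCORE * times_preferred

# A respondent who likes rocky road 20 times more than vanilla scores it 200.
print(magnitude_score(20))  # -> 200
```

The analytic payoff is that these scores preserve ratio information ("20 times more") that an ordinal rating scale collapses into mere order.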


Wording of Questions

Wording survey questions is a tricky endeavor. It is difficult to establish shared meanings or definitions between researchers and respondents, and among respondents themselves.

In The Practice of Social Research, Keith Crew, a professor of sociology at the University of Kentucky, cites a famous example of a survey gone awry because of wording problems. An interview survey that included Likert-type questions ranging from "very much" to "very little" was given in a small rural town. Although it would seem that these items would accurately record most respondents' opinions, in the colloquial language of the region the word "very" apparently has an idiomatic usage closer to what we mean by "fairly" or even "poorly." You can just imagine what this difference in definition did to the survey results (p. 271).

This, however, is an extreme case. Even small changes in wording can shift the answers of many respondents. The best thing researchers can do to avoid problems with wording is to pretest their questions. However, researchers can also follow some suggestions to help them write more effective survey questions.

To write effective questions, researchers should keep in mind four important principles: directness, simplicity, specificity, and discretion.

  1. Questions should be written in straightforward, direct language, free of complex rhetoric or syntax and of a discipline's slang or jargon. Questions should be specifically tailored to the intended group of respondents.
  2. Questions should be kept short and simple. Respondents should not be expected to learn new, complex information in order to answer questions.
  3. Specific questions are, for the most part, better than general ones. Research shows that the more general a question is, the wider the range of interpretation among respondents. To keep specific questions brief, researchers can sometimes use a longer introduction that makes the context, background, and purpose of the survey clear, so that this information does not need to be repeated in the questions themselves.
  4. Be discreet: avoid questions that are overly personal or intrusive, especially when dealing with sensitive issues.


Content of Questions

When considering the content of your questionnaire, the most important consideration is whether the content of the questions will elicit the kinds of responses necessary to answer your initial research question. You can gauge the appropriateness of your questions by pretesting your survey, but you should also weigh each question's content carefully as you create your initial questionnaire.

Order of Questions

Although there are no general rules for ordering survey questions, there are still a few suggestions researchers can follow when setting up a questionnaire.

  1. Pretesting can help determine if the ordering of questions is effective.
  2. Consider asking yourself the following questions:
    • Which topics should start the survey off, and which should wait until the end of the survey?
    • What kind of preparation do my respondents need for each question?
    • Do the questions move logically from one to the next, and do the topics lead up to each other?


Borrowing Questions

Converse and Presser (1986) recommend that, before developing a survey questionnaire, researchers consult published compilations of survey questions, like those published by the National Opinion Research Center and the Gallup Poll. Doing so will not only give you ideas on how to develop your questionnaire; you can even borrow questions from surveys that reflect your own research. Since these questions and questionnaires have already been tested and used effectively, you will save both time and effort. However, take care to use only questions that are relevant to your study, and you will usually still have to develop some questions on your own.
