The Census Bureau plans to conduct additional research under the generic clearance for questionnaire pretesting research (OMB number 0607-0725). We will conduct a split-panel experiment with alternative versions of introductory text to evaluate whether these different versions affect the accuracy of people’s responses to factual questions.
Factual questions are often used in surveys to collect information about a population. The U.S. Census Bureau uses factual questions in a number of surveys and is seeking innovative ways to improve the accuracy of responses to these questions. One such way may be as simple as a sentence or two at the beginning of a survey that stresses the importance of accurate answers. Stressing the importance of attending carefully to the questions may motivate respondents to pay more attention, or to try harder, than they otherwise would.
As other research has indicated, instead of being self-motivated to answer survey questions thoroughly and to the best of their ability, many respondents satisfice (Krosnick, 1991) and report the first or easiest answer that comes to mind. This research examines whether introductory text presented before a participant answers the questions can change the behavior of a “satisficer” into that of an “optimizer.”
The strategy that we propose to employ in this study is to have respondents commit, verbally or in writing, to giving accurate answers to factual questions. Although this strategy has been investigated by others (e.g., Cannell, Oksenberg, & Converse, 1977; Conrad, 2011), it is not often employed when designing survey questions. Here we plan to refine the commitment text based on what we know of acquiescence response bias: research has shown that when respondents are asked to agree with a statement, they tend to agree regardless of its content. Hence we employ a “you” statement rather than an “I agree” statement (see Table 1). The “you” phrasing may reduce acquiescence bias and encourage a more conversational dialogue.
The experimental manipulations will be embedded at the end of usability studies that occur at the Census Bureau and/or out in the field. Each participant will answer ten fact-based questions once the primary study is completed. Our objective in this study is to compare the answers given by the participant with what we know to be their true answers. We will compare aggregate responses across the conditions and evaluate whether any of the conditions lead to a higher accuracy rate. In addition, we plan to conduct a short memory task at the beginning of the session to assess participants’ short-term and long-term memory. The short-term memory task will consist of reading a list of words aloud and having the participant recall as many of them as they can (the Rey Auditory-Verbal Learning Test; Lezak, 1983). This memory score will be important in the analysis when we statistically control for performance across participants.
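As a concrete illustration, the accuracy comparison with the memory covariate could be run as a logistic regression on per-question accuracy. This is only a sketch of one possible analysis; the column names (correct, condition, memory) and the data file are hypothetical, not specified in this memo.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per participant-question pair, with
# 'correct' (1 = matched the known true answer, 0 = did not),
# 'condition' (one of the five groups in Table 1), and 'memory'
# (the participant's recall score on the word-list task).
df = pd.read_csv("responses.csv")  # hypothetical file name

# Logistic regression of accuracy on condition, controlling for memory score.
model = smf.logit("correct ~ C(condition) + memory", data=df).fit()
print(model.summary())
```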
Participants in the usability studies will be randomly assigned to one of the five conditions outlined in Table 1; a sketch of one balanced assignment scheme follows the table.
Table 1. Conditions proposed for research on commitment statements.
| Condition | Description | Introduction |
| --- | --- | --- |
| 1. Control | Factual questions are asked without introductory text. | None |
| 2. Oral | Test administrator orally tells the participant about the importance of answering the questions before the participant receives them. | “When you answer the questions that you’ll see on the following screens, it’s very important that you take your time and think carefully so that you answer each question as accurately as possible.” |
| 3. Oral with commitment | Participant must agree to the statement before receiving the questions. Test administrator orally tells the participant about the importance of answering the questions and has the participant mark a paper version of the commitment box. | “When you answer the questions that you’ll see on the following screens, it’s very important that you take your time and think carefully so that you answer each question as accurately as possible.” Test administrator then asks the participant: “Please say aloud that you commit to answer each question as accurately as possible.” |
| 4. Written | Written instruction appears on the screen before the questions appear. | “When you answer the questions that you’ll see on the following screens, it’s very important that you take your time and think carefully so that you answer each question as accurately as possible.” |
| 5. Written with commitment | Participant must agree to the statement before receiving the questions. Written instruction appears on the screen with a commitment statement before the questions appear. | “When you answer the questions that you’ll see on the following screens, it’s very important that you take your time and think carefully so that you answer each question as accurately as possible. Please check this box to show that you commit to answer each question as accurately as possible.” |
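The memo does not describe the randomization mechanism itself; the following is a minimal sketch of one balanced scheme, assuming assignments are generated in advance for the full sample. The condition labels are ours, for illustration only.

```python
import random

# The five conditions from Table 1 (labels are hypothetical shorthand).
CONDITIONS = ["control", "oral", "oral_commitment",
              "written", "written_commitment"]

def assign_conditions(n_participants, seed=None):
    """Return a shuffled, near-balanced list of condition assignments."""
    rng = random.Random(seed)
    # Repeat the condition labels until the pool covers every participant.
    pool = (CONDITIONS * (n_participants // len(CONDITIONS) + 1))[:n_participants]
    rng.shuffle(pool)
    return pool

print(assign_conditions(196, seed=1)[:10])
```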
To determine the number of participants to include in our five conditions, we conducted an a priori power analysis. If the proportions of accurate answers among these groups differ by at least 5 percent, we would need 39 participants per condition to detect the difference at the 5% level of significance (see Cohen, 1992). Therefore, 196 participants in total will be needed for the proposed study.
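One plausible reconstruction of this figure, assuming the calculation follows Cohen’s (1992) convention of a one-way ANOVA across the five conditions with a medium effect size (f = 0.25); the memo does not state the effect-size metric, so treat this as a sketch rather than the authors’ actual computation.

```python
from statsmodels.stats.power import FTestAnovaPower

# Solve for the total sample size of a one-way ANOVA with five groups,
# alpha = .05, power = .80, and a medium effect (Cohen's f = 0.25).
total_n = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f, "medium" by convention
    alpha=0.05,        # 5% significance level
    power=0.80,        # conventional target power
    k_groups=5,        # the five conditions in Table 1
)
print(total_n)      # approximately 196 participants in total
print(total_n / 5)  # approximately 39 per condition
```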
The estimated time to answer the 10 factual questions is 8.5 minutes. The estimated time to complete the short-term memory task is 1.5 minutes. Thus, the total time for the experimental session is 10 minutes, and the maximum estimated burden for this research is 33.3 hours.
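A worked version of the burden arithmetic: the 33.3-hour ceiling corresponds to 200 ten-minute sessions, which we read as an allowance for over-recruitment beyond the 196 planned participants (an assumption on our part).

$$
8.5 + 1.5 = 10 \text{ minutes per session}, \qquad
\frac{200 \times 10}{60} \approx 33.3 \text{ hours}.
$$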
List of Questions:
Questions will be randomly assigned; a sketch of one possible selection scheme follows the list.
Q1. From the time you walked into the room to the time you answered the last question, how many minutes do you think have passed?
Q2. How many different Web pages did you see during the session today?
Q3. How many different Website search tasks did you work on today?
Q4. One of the first questions we asked you at the beginning of the test was:
During the last year, how many times did you complete a survey on the Internet?
What answer did you give to this question?
Q5. We also asked you at the beginning of the test:
Five years ago, about how many hours did you use the Internet during a typical week?
What answer did you give to that question?
Q6. Another of the first questions we asked you at the beginning of the test was:
How familiar are you with the Census Bureau Web site, www.census.gov (e.g., location, tools, terms, data)?
Extremely Familiar
Very Familiar
Moderately Familiar
Slightly Familiar
Not Familiar At All
What answer did you give to that question?
Q7. On what day of the week did you speak with someone on the phone to schedule your visit here today?
Q8. How many minutes did you spend answering the questionnaire that asked about your computer use and Internet experience?
Q9. How did you first hear about participating in the study with us? (Options could include, e.g., Craigslist, word of mouth, flyer, Metro Express, or “other” with a fill-in-the-blank.)
Q10. When you spoke with someone by telephone to schedule your visit here today, you were asked for what two purposes you used the Internet, besides checking your email. What two purposes did you tell the person over the telephone?
Q11. How many emails did you receive from someone at the Census Bureau about your visit here today, for this study?
Q12. How many emails did you receive about your usability session? Include all emails received from the Census Bureau about participating in this study.
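Since twelve candidate questions are listed but each participant answers ten, one possible selection scheme, an assumption on our part since the memo does not specify the mechanism, is to draw ten of the twelve at random per participant:

```python
import random

# Identifiers for the twelve candidate questions listed above.
QUESTIONS = [f"Q{i}" for i in range(1, 13)]

def draw_questions(seed=None):
    """Draw ten of the twelve questions, in random order, for one participant."""
    rng = random.Random(seed)
    return rng.sample(QUESTIONS, k=10)

print(draw_questions(seed=1))
```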
The short-term memory task consists of the following instruction read aloud to the participant: “I am going to read a list of words. Listen carefully, and when I stop, please tell me all the words you can remember. It doesn’t matter in what order you repeat them. Drum, Curtain, Bell, Coffee, School, Parent, Moon, Garden, Hat, Farmer, Nose, Turkey, Color, House, River. Go ahead.”
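For the analysis, each participant’s recall responses would be reduced to a single memory score. A minimal scoring sketch, assuming one point per unique correctly recalled word with order ignored (the memo does not describe the scoring rule):

```python
# The fifteen words read aloud to the participant.
WORD_LIST = {"drum", "curtain", "bell", "coffee", "school", "parent",
             "moon", "garden", "hat", "farmer", "nose", "turkey",
             "color", "house", "river"}

def recall_score(responses):
    """Count unique correctly recalled words, ignoring order and case."""
    recalled = {word.strip().lower() for word in responses}
    return len(recalled & WORD_LIST)

print(recall_score(["Moon", "drum", "pencil", "hat"]))  # -> 3
```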
References
Cannell, C., Oksenberg, L., & Converse, J. (1977). Experiments in interviewing techniques: Field experiments in health reporting, 1971–1977. Hyattsville, MD: National Center for Health Services Research.
Cohen, J. (1992). Statistical power analysis. Current Directions in Psychological Science, 1(3), 98–101.
Conrad, F. (2011). Interactive interventions in web surveys can increase response accuracy. Paper presented at the 66th Annual Conference of the American Association for Public Opinion Research (AAPOR), Phoenix, AZ.
Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213–236.
Lezak, M. D. (1983). Neuropsychological assessment. New York: Oxford University Press.
The contact person for questions regarding data collection and statistical aspects of the design of this research is listed below:
Erica L. Olmsted-Hawala
Center for Survey Methods Research
Room 5K104D
U.S. Census Bureau
Washington, D.C. 20233
(301) 763-4893
Erica.l.olmsted.hawala@census.gov