National Telephone Survey (cognitive pretesting)

The Study of Free Access to Computers and the Internet in Public Libraries

OMB: 3137-0078

MEMORANDUM


Date: January 4, 2009

To: Carlos Manjarrez, IMLS

From: Karen Fisher

Mike Crandall

Re: Telephone survey pre-testing


This memo describes in further detail the pretesting procedures that will be used in conjunction with the IMPACT telephone survey. Taken together with the previous expert review, these generally accepted methods should identify any remaining semantic or organizational problems with the survey (Presser & Blair 1994) while remaining sensitive to time and resource constraints. The methods described in this memo also substantively meet the pretesting standards recommended by the Census Bureau (2003). Pursuant to OMB guidelines, we request that these pretesting activities be included and approved as part of the ICR for the final survey.


The development of the telephone survey was an iterative process involving subject matter experts and experienced researchers from the project’s advisory committee and research team (Appendix 1). Throughout the survey development process, the project advisory committee was an active participant, consulting on the identification of domain areas and high-value question topics and thoroughly reviewing the survey instruments during teleconferences on two occasions. Consistent with Presser and Blair (1994), we found these expert review sessions to be highly productive in identifying and diagnosing problems with the structure of the survey (logic and flow) as well as with the questions themselves (comprehension and task-related), both of which were revised accordingly.


In addition to this development-stage review, we performed limited pretesting of the survey with staff and student volunteers at the University of Washington. Though this pretesting was intended to evaluate the survey’s logic and usability rather than comprehension specifically, it nonetheless revealed some cognitive and procedural issues that were addressed in a subsequent revision. The current telephone survey instrument also incorporates comments from the OMB.


Two further pretests will be conducted prior to the launch of the telephone survey:


  • Cognitive interviews/respondent debriefing. Cognitive interviews are especially useful for identifying semantic or comprehension problems that may result in misunderstanding or misinterpretation of survey questions (Oksenberg et al. 1991). Accordingly, face-to-face cognitive interviews with ten qualified volunteer subjects will be conducted by two senior interviewers from TCI and supervised by a member of the project research team. The subjects will be drawn from TCI employees who are also public access computer users and who are representative of the target population demographics.


Because the survey contains a great number of screening and funneling questions, the cognitive interviews will mostly be conducted after the complete administration of the survey, to avoid breaking the relationships between questions that could occur with, for example, a think-aloud protocol running throughout the survey (Fowler 1995). A limited number of think-alouds will be used during the administration of the survey for questions the respondent has particular trouble answering. To reduce participant fatigue, the retrospective interviews will focus on the qualifying questions (Q3-Q7), the general use questions (U1-U6), the domain screening questions (M1, C1, B1, H1, S1, G1, V1, D1), and the open-ended questions (E1-E5), as grouped in the sketch below.
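
For transcript coding and coverage checks, the retrospective probe targets could be collected in a simple lookup structure, as in the following sketch. The question IDs are those named in this memo; the dictionary and helper function are illustrative only and not part of the survey instrument.

```python
# Illustrative grouping of the retrospective debriefing targets named in
# this memo; the structure itself is hypothetical, not part of the survey.
RETROSPECTIVE_TARGETS = {
    "qualifying": ["Q3", "Q4", "Q5", "Q6", "Q7"],
    "general_use": ["U1", "U2", "U3", "U4", "U5", "U6"],
    "domain_screening": ["M1", "C1", "B1", "H1", "S1", "G1", "V1", "D1"],
    "open_ended": ["E1", "E2", "E3", "E4", "E5"],
}

def is_retrospective_target(question_id: str) -> bool:
    """Return True if the question is slated for retrospective probing."""
    return any(question_id in ids for ids in RETROSPECTIVE_TARGETS.values())
```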


Debriefing interviews will focus on the respondents’ understanding of the terms used in the questions, as well as the cognitive process they used to arrive at their answer. Subjects will be encouraged to discuss areas of confusion or ambiguity, with interviewers probing for details as appropriate for each question to ensure the question is understood. Generally, probing questions will ask respondents to elaborate on their interpretation of questions and their answers. A detailed outline of the protocol for these interviews is attached (Appendix 2).


TCI will prepare a separate summary report for each respondent. The interviews will also be recorded, and the transcripts will be analyzed by the research team, who will determine corrective actions for problematic survey questions. The survey instrument will be revised accordingly in advance of further field testing.


  • Behavior coding/debriefing of interviewers. Following revisions stemming from the cognitive interviews, the survey will be tested under field conditions with a modestly sized RDD sample (N=40) drawn from ZIP codes with median incomes in the lowest two national quintiles. Although there is currently no recognized method for determining sample sizes for behavior coding pretests, we agree with Zukerberg et al. (1995) that by focusing on respondents with more problematic circumstances we “raise the likelihood of rapidly identifying questionnaire flaws.” Further, this sample size falls within a generally accepted range (Sudman 1983, Sheatsley 1983, Courtenay 1978) and is sensitive to our time and resource constraints.
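
As an illustration of this frame restriction, the sketch below keeps only ZIP codes whose median household income falls at or below the second national quintile boundary. The income figures are invented for the example; an actual frame would draw on Census data.

```python
import statistics

# Hypothetical frame: ZIP code -> median household income (invented values).
zip_income = {
    "98101": 41_200, "98052": 96_500, "53206": 27_300,
    "10463": 55_900, "73034": 62_800, "35207": 31_100,
}

# statistics.quantiles(..., n=5) returns the four quintile cut points;
# the second cut point bounds the lowest two quintiles from above.
cut_points = statistics.quantiles(zip_income.values(), n=5)
ceiling = cut_points[1]

# ZIP codes eligible for the RDD pretest sample.
eligible_zips = sorted(z for z, income in zip_income.items() if income <= ceiling)
print(eligible_zips)
```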


Interviews will be recorded, and two members of the research team will code behaviors that might indicate problems with the survey, such as the interviewer not reading a question as written or the respondent asking for clarification. The behavior code categories developed by Oksenberg, Cannell & Kalton (1991) will be used for this portion of the testing (Appendix 3). In keeping with generally accepted guidelines, if a question is reworded by the interviewer more than 15% of the time, or if respondents provide adequate answers less than 85% of the time, the question will be reviewed and revised accordingly (Fowler 1989).
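
For concreteness, the following sketch shows one way the 15%/85% review thresholds could be applied to coded interview records. The record layout (question ID, reading code, answer code) is an assumption for illustration; the category labels follow Appendix 3.

```python
from collections import Counter

def flag_questions(records):
    """Flag questions that fail the Fowler (1989) thresholds.

    `records` is assumed to be an iterable of
    (question_id, reading_code, answer_code) tuples using the
    Appendix 3 category labels (e.g. "exact", "adequate answer").
    """
    total, reworded, adequate = Counter(), Counter(), Counter()
    for qid, reading_code, answer_code in records:
        total[qid] += 1
        if reading_code != "exact":            # slight or major change
            reworded[qid] += 1
        if answer_code == "adequate answer":
            adequate[qid] += 1
    return {qid for qid in total
            if reworded[qid] / total[qid] > 0.15    # reworded > 15% of readings
            or adequate[qid] / total[qid] < 0.85}   # adequate < 85% of answers
```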


In addition to behavior coding, the interviewers will be monitored in real time by a TCI supervisor, who will record any issues observed during the telephone testing and suggest changes. During this testing phase, interviewers will also be provided with a rating form similar to that developed by Fowler and Roman (1992), on which they can record problems encountered in reading the questions, as well as potential problems with respondents not understanding or having difficulty answering them (Appendix 4). Interviewers will be notified in advance of this pretesting protocol and will also be debriefed by TCI supervisors.

Bibliography


Courtenay, G. (1978). "Questionnaire Construction." In Hoinville, G., and Jowell, R., Survey Research Practice, chapter 3. Heinemann Educational Books: London.


Fowler, F. Jr. (1989). "The Significance of Unclear Questions." In Cannell, C., Oksenberg, L., & Kalton, G., New Techniques for Pretesting Survey Questions. Research Report. Survey Research Center, The University of Michigan.


Fowler, F. Jr. & Roman, A. (1992). A study of approaches to survey question evaluation. Final report for U.S. Bureau of the Census.


Fowler, F. J. (1995). Improving survey questions: Design and evaluation. Applied social research methods series, v. 38. Thousand Oaks: Sage Publications.


Leeuw, E. D. de, Hox, J. J., & Dillman, D. A. (2008). International handbook of survey methodology. New York: Lawrence Erlbaum Associates.


Oksenberg, L., Cannell, C., & Kalton, G. (1991). New strategies for pretesting survey questions. Journal of Official Statistics, 7:3, 349-356.


Presser, S. & Blair, J. (1994). Survey pretesting: Do different methods produce different results? Sociological Methodology, 24: 73-104.


Sheatsley, P.B., (1983). "Questionnaire Construction and Item Writing." In Rossi, P.H., Wright, J.D., Anderson, A.B. (eds.) Handbook of Survey Research, chapter 6. Academic Press, Inc.: San Diego, CA.


Sudman, S., (1983). "Applied Sampling." In Rossi, P.H., Wright, J.D., Anderson, A.B. (eds.) Handbook of Survey Research, chapter 5. Academic Press, Inc.: San Diego, CA.


Urban Libraries Council. (2007). Making cities stronger: Public library contributions to local economic development. Evanston, Ill: Urban Libraries Council.


U.S. Census Bureau. (2003). Census Bureau standard: pretesting questionnaires and related materials for surveys and censuses. Retrieved from http://www.census.gov/srd/pretest-standards.pdf.

Zukerberg, A. L., Von Thurn, D. R., & Moore, J. C. (1995). Practical considerations in sample size selection for behavior coding pretests. Proceedings of the Section on Survey Research Methods, American Statistical Association, 1116-1121.


Appendix 1: Expert reviewers


Advisory committee members


  • Rick Ashton, Chief Operating Officer, Urban Libraries Council

  • Michael Barndt, Data Center Analyst, Nonprofit Center of Milwaukee

  • Susan Benton, Strategic Partners Executive, International City/County Management Association (ICMA)

  • John Carlo Bertot, Professor and Associate Director, Information Use Management & Policy Institute, Florida State University

  • Cathy Burroughs, Associate Director, National Network of Libraries of Medicine

  • Sarah Earl, Acting Director, International Development Research Center Evaluation Unit

  • Wilma Goldstein, Senior Advisor for Women’s Issues, Small Business Administration

  • Jaime Greene, Program Officer, Bill & Melinda Gates Foundation

  • Carla Hayden, Executive Director, Enoch Pratt Free Library

  • Peggy Rudd, Director and Librarian, Texas State Library and Archives Commission

  • Ross Todd, Associate Professor and Director, Center for International Scholarship in School Libraries, Rutgers University

  • Bernard Vavrek, Director, Center for the Study of Rural Librarianship, Clarion University of Pennsylvania


Research team members

  • Karen Fisher, Professor, University of Washington, Information School

  • Mike Crandall, Senior Lecturer, University of Washington, Information School

  • Chic Naumer, PhD Candidate, University of Washington, Information School

  • Carol Landry, PhD Candidate, University of Washington, Information School

  • Samantha Becker, MLIS/MPA Candidate, University of Washington, Information/Evans School

  • Rob Santos and Tim Triplett, Urban Institute

  • Glen and Leslie Holt, Holt Research Consulting

Appendix 2: Cognitive interviews/respondent debriefing protocol


  1. Locate up to 10 qualified subjects (answer YES on Q3 or Q5, YES on Q6 of current draft)

  2. Schedule one-on-one interviews in a comfortable, quiet location.

  3. Introduce the survey.

    1. Thank you for agreeing to participate in this survey about how you and your family use your public library’s computers. This research is sponsored by the Institute of Museum and Library Services and conducted by the University of Washington. Your responses are confidential and will help us evaluate and improve library computer services all over the country.

During this interview, we will also be testing the survey to make sure that when we call other people on the phone they will understand and be able to answer the questions. I am going to record our conversation so that I can fully pay attention to what you say. This should take no more than 1½ hours. Do you have any questions before we get started?

  4. Explain the roles of the respondent and interviewer.

    1. First, I’m going to read the survey questions to you, and I want you to answer just as you normally would if you got a phone call or were approached at the mall for a survey. If I ask any questions that you are uncertain about or that make you stop and think, please let me know. After we get through the survey, I’m going to go back over some of the questions and ask you to talk about your answers.

  5. Read each survey question to the subject and record the answers.

    1. If respondent appears to be having a difficult time responding to specific questions, probe with think-aloud questions. For example, “You seem to be having a hard time answering this question. Can you tell me what you’re thinking about as you try to answer?”

    2. Note major difficulties for follow-up at the end of the debriefing section. Minor difficulties will be accounted for in the analysis of the interview transcripts.

  6. Introduce follow-up questions.

    1. Thanks for answering those questions. Now I’m going to go back over some of them and ask you to talk about some of your answers. I’m not ‘testing’ you about your answers; rather, I want to make sure that the survey is as clear and easy to answer as possible. So, please feel free to be critical!

  7. Probe specific target questions.

    1. Screening questions. The main purpose of the probes in this section is to ensure that the respondent can differentiate between public library computer terminals that access the Internet and those that only access the library’s catalog or other resources like electronic magazines or reference books. Respondents will be asked to rephrase questions or define terms in their own words and to provide examples of the targeted behavior from their own experience. For example, in follow-up to a positive response to Q4, “Have you used a computer in the public library to access library resources, such as looking up books or placing holds, or to use online resources available through the library’s website like digital articles or books?” respondents will be asked what kinds of resources they used the last time they accessed library resources through a computer in the library.

Questions in this section will also test information retrieval and response category selection, where appropriate. In follow-up to positive responses, interviewers will ask how the respondent recalled the frequency of their use of library resources, what timeframe they were thinking of, and how difficult it was for them to choose from among the response choices provided in the survey.

    2. General use questions (U1-U5). These questions gather general information about library computer use, with particular attention to issues important to library researchers and policy makers. Probing questions for this section will focus on the respondents’ comprehension of the survey questions and concepts. For example, in follow-up to U4, which is intended to gauge the extent of Lay Information Mediator Behavior (LIMB), respondents will be asked to rephrase the question and to provide examples of how they have used library computers on behalf of others.

Debriefing questions for U5.1.1 will also be used to assess respondent comfort with answering a potentially sensitive question about parenting behavior.

    3. Domain screening questions (M1, C1, B1, H1, S1, G1, V1, D1). Probing questions for this section will focus on the respondents’ interpretation of the scope of the domain. Each domain screening question will be revisited, with respondents being asked to rephrase the question and give examples of the types of activities they associate with key words in the survey questions.

Debriefing in follow-up to negative responses on domain screening questions will probe for respondent sensitivity and adequacy of response codes for interviewers.

    4. Open-ended questions (E1-E5). Debriefing questions for this section are to gauge the respondents’ comprehension of the survey questions and their reaction to open-ended and scale questions. For example, in follow-up to E2, respondents will be asked to describe in their own words what “regular access” means.

  8. General debriefing questions. Respondents will be asked to provide general feedback on their reactions to the survey, including identifying questions they felt uncomfortable with or had a difficult time answering, and revisiting questions they refused or had major difficulty answering.

Appendix 3: Behavior Code Categories


Interviewer question reading codes

Exact

Interviewer reads the question exactly as printed.

Slight change*

Interviewer reads the question changing a minor word that does not alter the question meaning.

Major change*

Interviewer changes the question such that the meaning is altered, or does not complete reading the question.



Respondent behavior codes

Interruption with answer*

Respondent interrupts initial question-reading with answer.

Clarification*

Respondent asks for repeat or clarification of question, or makes statement indicating uncertainty about question meaning.

Adequate answer

Respondent gives answer that meets question objective.

Qualified answer*

Respondent gives answer that meets question objective, but is qualified to indicate uncertainty about accuracy.

Inadequate answer*

Respondent gives answer that does not meet question objective.

Don’t know*

Respondent gives a “don’t know” or equivalent answer.

Refusal to answer*

Respondent refuses to answer the question.


* Indicates a potential problem with the question.
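
For coders working from the recordings, the categories above could be encoded as in the following sketch. The class, abbreviations, and boolean flags are our own shorthand rather than part of the Oksenberg, Cannell & Kalton (1991) scheme itself; the flags reproduce the asterisks in the table above.

```python
from enum import Enum

class BehaviorCode(Enum):
    """Appendix 3 categories; abbreviations are illustrative shorthand."""
    # Interviewer question-reading codes
    EXACT = ("E", False)
    SLIGHT_CHANGE = ("S", True)
    MAJOR_CHANGE = ("M", True)
    # Respondent behavior codes
    INTERRUPTION_WITH_ANSWER = ("I", True)
    CLARIFICATION = ("C", True)
    ADEQUATE_ANSWER = ("AA", False)
    QUALIFIED_ANSWER = ("QA", True)
    INADEQUATE_ANSWER = ("IA", True)
    DONT_KNOW = ("DK", True)
    REFUSAL_TO_ANSWER = ("R", True)

    def __init__(self, abbreviation, potential_problem):
        self.abbreviation = abbreviation
        self.potential_problem = potential_problem  # the * in the table above
```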


Appendix 4: Interviewer rating form

Interviewer Rating Form

Instructions:

This form is for you to record potential problems with the IMLS/PAC telephone survey. Please note the question number and check the box in the column corresponding to the type of problem you have encountered:

  • Hard to read: You had trouble reading the question as it was written.

  • R has problem understanding: The respondent did not understand the words or ideas in the question.

  • R has trouble providing answer: The respondent had trouble providing an answer to the question.

  • Comments/other problems: Use this space to record other types of problems with questions or to further explain the problem you encountered.

Question # | Hard to read | R has problem understanding | R has trouble providing answer | Comments/other problems
-----------|--------------|-----------------------------|--------------------------------|------------------------
(blank rows for interviewer entries)
