Memorandum
Date: September 4, 2018
To: Margo Schwab, Desk Officer
Office of Management and Budget
From: Emilda Rivers, Division Director
National Center for Science and Engineering Statistics
National Science Foundation
Via: Suzanne Plimpton, Reports Clearance Officer
National Science Foundation
Subject: Request for approval of an online data collection with respondents recruited from Amazon’s Mechanical Turk (MTurk) for testing the definition of individual innovation in NCSES surveys
The purpose of this memorandum is to inform you that the National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF) plans to conduct exploratory quantitative testing under the generic clearance for improving survey projects (OMB control number 3145-0174). This project will use Amazon’s Mechanical Turk (MTurk) to recruit subjects for an online survey to test a) a definition of individual innovation and b) questions about individual innovation.
Background
To control the costs of pretesting questionnaires, agencies are exploring alternative methods. One such method is pretesting questions with online convenience samples, either by adding follow-up probes to questions or by conducting split-ballot experiments. Participants are sometimes recruited through crowdsourcing platforms that pay people to perform small tasks called human intelligence tasks (HITs). Other statistical agencies, notably the Bureau of Labor Statistics (BLS) and the National Institutes of Health's National Cancer Institute (NIH-NCI), have used these crowdsourcing platforms to recruit participants for online surveys. NCSES recently conducted an exploratory study to evaluate the effectiveness and utility of one specific crowdsourcing platform, Amazon's Mechanical Turk (MTurk), together with an online survey platform, with an eye toward future use in questionnaire pretesting.
NCSES completed an assessment of online convenience sample sources such as crowdsourcing platforms and online survey sample providers in 2017 [1] and updated the assessment in 2018 [2]. From these assessments, we identified MTurk as one of the most promising crowdsourcing platforms because its available sample is larger than that of other crowdsourcing platforms. It also offers the most extensive features, giving requesters a great deal of control over who may participate in their surveys. In addition, BLS and NIH-NCI have used MTurk in past efforts to recruit participants for online surveys.
Although NCSES currently collects information about innovation that occurs in business, government, and academic settings, it does not attempt to measure innovation that individuals undertake on their own, independently of their work for pay. There is no common definition of individual innovation, but NCSES has developed a working definition based on research by other organizations and academia [3]. This project is a first attempt to identify approaches that might help close the data gap on individual innovation. This two-phase study will not be used to produce estimates of individual innovation in the United States; rather, it is a methodological study that will inform possible future questionnaire development.
This project will be conducted in two phases, detailed below. The first phase tests a set of individual innovation vignettes to determine which are most effective at helping respondents recognize individual innovation. The second phase asks respondents about their own experiences with individual innovation. Vignettes from Phase 1 may be used in the Phase 2 questionnaire to help respondents understand the definition.
Phase 1
Because individual innovation is not a widely understood concept, and because the NCSES definition is nuanced, the first goal of the project is to determine whether respondents can accurately identify what has the potential to be individual innovation. The first phase achieves this goal by presenting respondents with the working definition, followed by a series of vignettes. For each vignette, respondents are asked whether it describes innovation, how confident they are in that judgment, and why they answered as they did.
An experiment will be embedded in Phase 1. Because of concerns that people may be more likely to identify a vignette as innovation when the activity is attributed to a male, the names used in the vignettes will be systematically varied to be clearly male or clearly female. The questionnaire for this phase, including some vignettes, can be found in Attachment 1. (Additional vignettes, similar to those currently in Attachment 1, may be developed before the survey goes live.)
Phase 2
The goal of Phase 2 is to test questions about individual innovation and to evaluate their potential use. Phase 2 will not be used to derive nationally representative estimates. Breakoff rates, item nonresponse rates, and other metrics will be used to identify which questions might reasonably be asked on a nationally representative survey. The questionnaire for Phase 2 (found in Attachment 2) may or may not include vignettes from Phase 1; that determination will be made after Phase 1 is completed and we have determined which vignettes (if any) helped respondents correctly identify individual innovation.
Opening questions ask whether the respondent has engaged in individual innovation activity. If the respondent indicates that they have conducted activities that might qualify as individual innovation, the survey then gathers data about those activities. The survey also collects some demographic information.
Methodology
We will use online data collection with participants recruited from MTurk. Because MTurk participants are self-selected, NCSES does not expect them to be statistically representative. Even so, some studies using MTurk samples have obtained results similar to those of surveys using probability-based samples [4]. NCSES will not use the data collected in this study to make inferences or produce statistical estimates.
Samples obtained from MTurk will be examined for internal validity and for the demographics of the participants. All results will be interpreted with caution, given that the sample is drawn from MTurk rather than from a probability-based sample. Even so, this study allows NCSES to conduct low-cost, rapid-response questionnaire development on the topic of individual innovation.
For each of the two phases, NCSES will use MTurk to recruit participants. A draft of the MTurk post is provided in the attachments. The post will be shown only to MTurk workers who reside in the United States (including Puerto Rico and other territories). Once recruited, participants will be given a link to the online survey instrument, which will be hosted by NCSES's contractor for this research, Mathematica Policy Research, Inc.
Participants
In Phase 1, NCSES will recruit approximately 250 participants to read the vignettes and answer questions about them.
We intend to recruit no more than 10,000 participants for the Phase 2 screener in order to obtain 500 completed interviews on the Phase 2 survey (Attachment 2). Phase 2 will end when 10,000 individuals have completed the screener or 500 individuals have completed the full survey, whichever comes first. The low incidence rates of individual innovation found in recent academic studies suggest that a large number of respondents will be needed to identify 500 who meet the criteria for individual innovation. In those studies, the incidence rates have been estimated as follows [5, 6]:
In the UK, 6.1% of the population.
In the U.S., 5.2% of the population.
In Japan, 3.7% of the population.
In Finland, 5.4% of the population.
In Canada, 5.6% of the population.
In South Korea, 1.5% of the population.
The incidence rates reported above were derived from surveys conducted by different sponsors, using different modes. The incidence rate in the U.S.-based MTurk population is unknown.
Burden Hours
For Phase 1, each respondent is estimated to spend 20 minutes completing the survey. With 250 respondents, the burden estimate for Phase 1 is 84 hours (250 × 20 minutes ≈ 83.3 hours, rounded up).
For Phase 2, up to 10,000 respondents will complete the screener, but only 500 will complete the full questionnaire. The screener should take each respondent no more than three minutes (10,000 × 3 minutes = 500 hours). The full questionnaire for Phase 2 should take 20 minutes (500 respondents × 20 minutes ≈ 167 hours).
The total number of burden hours for Phases 1 and 2 is 751 hours.
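The burden-hour arithmetic above can be verified with a short sketch; the respondent counts and minutes per task are taken from this memo, and partial hours are rounded up as in the Phase 1 and Phase 2 estimates.

```python
import math

# Respondent counts and per-task minutes as stated in this memo.
phase1_hours = math.ceil(250 * 20 / 60)        # 83.3 -> 84 hours
screener_hours = math.ceil(10_000 * 3 / 60)    # exactly 500 hours
phase2_full_hours = math.ceil(500 * 20 / 60)   # 166.7 -> 167 hours

total_hours = phase1_hours + screener_hours + phase2_full_hours
print(total_hours)  # 751
```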
Payment to Participants
Phase 1 and Phase 2 participants will receive $3.00 for completing the survey, a typical rate for similar tasks. Participants who complete the Phase 2 screener but not the full survey will receive $0.30. These amounts are industry standards and have been used by NCSES in the past.
250 (Phase 1) * $3.00 = $750
500 (Phase 2 full survey) * $3.00 = $1,500
10,000 (Phase 2 screener) * $0.30 = $3,000
Total payment = $5,250
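The payment totals above can be checked in the same way; working in cents avoids floating-point rounding, and all counts and rates come from this memo.

```python
# Per-participant payments, in cents, as stated in this memo.
full_survey_cents = 300   # $3.00 per completed survey
screener_cents = 30       # $0.30 per screener-only participant

total_cents = (250 * full_survey_cents       # Phase 1
               + 500 * full_survey_cents     # Phase 2 full survey
               + 10_000 * screener_cents)    # Phase 2 screener
print(f"${total_cents / 100:,.2f}")  # $5,250.00
```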
Informed Consent
At the beginning of the survey, participants will be informed of the OMB control number, the expected survey completion time, and the voluntary nature of the study. In addition, participants will be informed that the data they provide in this study will reside on a server outside of the NCSES domain and that NCSES cannot guarantee the protection of survey responses.
Survey Schedule
The tentative schedule for the survey is as follows:
Proposed Date | Activity/Deliverable
September 4, 2018 | OMB submission for approval
September 25, 2018 | OMB clearance
October 15, 2018 | Launch survey Phase 1
November 16, 2018 | Complete Phase 1 evaluation
December 12, 2018 | Launch survey Phase 2
January 14, 2019 | Complete Phase 2 evaluation
February 18, 2019 | Final report
Contact Person
Audrey Kindlon
Project Officer
Research and Development Statistics Program
National Center for Science and Engineering Statistics
National Science Foundation
703.292.2332
Attachment 1: Phase 1 Questionnaire and Vignettes
Attachment 2: Phase 2 Questionnaire
[1] Chandler, J., Poznyak, D., Sinclair, M., and Hudson, M. (2017) "Use of Online Crowdsourcing and Online Survey Sample Providers for Survey Operations." Mathematica Policy Research report for the National Center for Science and Engineering Statistics.
[2] Chandler, J. (2018) "Selecting a Crowdsourcing Platform." Mathematica Policy Research memo for the National Center for Science and Engineering Statistics.
[3] Von Hippel, E. (2017) Free Innovation. Cambridge, MA: The MIT Press.
[4] Mullinix, K.J., Leeper, T.J., Druckman, J.N., and Freese, J. (2015) "The Generalizability of Survey Experiments." Journal of Experimental Political Science, 2(2), pp. 109–138. doi: 10.1017/XPS.2015.19.
[5] de Jong, J.P.J., von Hippel, E., Gault, F., Kuusisto, J., and Raasch, C. (2015) "Market Failure in the Diffusion of Consumer-Developed Innovations: Patterns in Finland." Available at SSRN: https://ssrn.com/abstract=2426498 or http://dx.doi.org/10.2139/ssrn.2426498, p. 4.
[6] Von Hippel, E. (2017) Free Innovation. Cambridge, MA: The MIT Press, p. 21.