


Memorandum



Date: August 25, 2017


To: Margo Schwab, Desk Officer

Office of Management and Budget


From: John R. Gawalt, Division Director

National Center for Science and Engineering Statistics

National Science Foundation


Via: Suzanne Plimpton, Reports Clearance Officer

National Science Foundation


Subject: Request for approval of an online data collection recruited from Amazon’s Mechanical Turk (MTurk) for testing the salary question format and confidentiality pledge wording in NCSES surveys


The purpose of this memorandum is to inform you that the National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF) plans to conduct exploratory quantitative testing under the Generic Clearance of Survey Improvement Projects (OMB control number 3145-0174). The project will use Amazon’s Mechanical Turk (MTurk) to recruit participants for an online survey testing the salary question format and confidentiality pledge wording used in NCSES surveys.

Background

In an effort to control the costs of pretesting questionnaires, agencies are exploring alternative methods. One such method is pretesting questions with online convenience samples, whether by adding follow-up probes to questions or by conducting split-ballot experiments. Participants are sometimes recruited through crowdsourcing platforms that pay people to perform small tasks called human intelligence tasks (HITs). Other statistical agencies, notably the Bureau of Labor Statistics (BLS) and the National Institutes of Health’s National Cancer Institute (NIH-NCI), have used these crowdsourcing platforms to recruit participants for online surveys. NCSES proposes an exploratory study to evaluate the effectiveness and utility of one specific crowdsourcing platform, Amazon’s Mechanical Turk (MTurk), together with an online survey platform, specifically with an eye toward future use in questionnaire pretesting.

NCSES recently conducted an assessment of online convenience sample sources such as crowdsourcing platforms and online survey sample providers (Chandler et al., 2017). From this assessment, we identified MTurk as one of the most promising crowdsourcing platforms: it offers a larger available sample than other crowdsourcing platforms, and its features are the most extensive, giving requesters a great deal of control over who may participate in their surveys. In addition, BLS and NIH-NCI have used MTurk in their past efforts to recruit participants for online surveys.

The questionnaire that will be tested (Attachment 1) includes two embedded experiments, detailed below. One experiment addresses legislative changes that affect NCSES’s confidentiality pledge. The second experiment examines different ways of asking respondents about their salary.

Experiment 1


The passage of the Cybersecurity Enhancement Act of 2015 required the installation of the Department of Homeland Security’s (DHS) Einstein cybersecurity protection system on all Federal civilian information technology systems by mid-December 2016. Combined with DHS’s stated policies, this requirement potentially compromises the absolute nature of the Federal statistical system’s (FSS) promise that respondents’ data will be seen only by a statistical agency’s employees or its sworn agents. Consequently, the FSS needs to develop revised confidentiality pledges that inform respondents of this change in circumstances (Kaplan, 2017).

Although NCSES has in place a revised confidentiality pledge developed through an FSS interagency effort, NCSES plans to conduct occasional research to evaluate the pledge’s effectiveness and clarity. This research effort examines respondents’ comprehension of, and reaction to, the revised pledge language. We will also explore wording options to determine which may minimize any negative impact. In addition, because many NCSES surveys are collected by other entities (e.g., the U.S. Census Bureau, private contractors), it is important to assess whether respondents understand how the pledge applies when a party other than NCSES collects the data.


The questionnaire in Attachment 1 includes three versions of the confidentiality pledge, based largely on prior studies conducted by BLS and the U.S. Census Bureau.


Experiment 2


Findings from respondent debriefing interviews for NCSES’s National Survey of College Graduates (NSCG) show that the NSCG question on basic annual salary was difficult to answer, especially for people who were paid on an hourly, weekly, or monthly basis or who were self-employed. To capture accurate salary amounts, NCSES needs to conduct research on alternative question wording for collecting these data. The findings from this research will inform NCSES’s question wording development for the NSCG and the other NCSES workforce surveys.


The questionnaire in Attachment 1 includes three salary or earned income questions: one from the current NSCG, and two based on the earnings questions in the American Time Use Survey, which is sponsored by BLS and conducted by the U.S. Census Bureau. The three questions are deliberately separated in Attachment 1 so that response consistency within respondents can be assessed.
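
To make the within-respondent comparison concrete, the following is a minimal Python sketch of how answers reported in different pay bases might be annualized and checked against one another. The pay-basis codes, the assumed 40-hour work week, and the 10% tolerance are illustrative assumptions, not specifications from Attachment 1.

```python
# Illustrative sketch: annualize earnings reported in different formats so the
# three salary questions can be compared within a respondent. Field names,
# pay-basis codes, and the 10% tolerance are hypothetical, not from Attachment 1.

HOURS_PER_WEEK = 40  # assumed full-time hours; a real instrument might ask directly
PERIODS_PER_YEAR = {
    "hourly": 52 * HOURS_PER_WEEK,
    "weekly": 52,
    "biweekly": 26,
    "monthly": 12,
    "annual": 1,
}

def annualize(amount: float, basis: str) -> float:
    """Convert a reported earnings amount to an annual figure."""
    return amount * PERIODS_PER_YEAR[basis]

def consistent(a: float, b: float, tol: float = 0.10) -> bool:
    """Flag two annualized answers as consistent if within a relative tolerance."""
    return abs(a - b) <= tol * max(a, b)

# Example: a respondent reports $25/hour on one question and $51,000/year on another.
print(consistent(annualize(25, "hourly"), annualize(51_000, "annual")))  # True
```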


Methodology


We will use online data collection with participants recruited from MTurk. As MTurk participants are self-selecting, NCSES does not expect them to be statistically representative. As a result, NCSES will not use the data collected in this study to make inferences or produce estimates.


Although the MTurk sample may not be statistically representative, studies using MTurk samples have obtained results similar to those from probability-based samples (e.g., Mullinix et al., 2015). The data will be examined for internal validity and for the demographic composition of the MTurk participants. All results will be interpreted with caution, given that the sample is drawn from MTurk rather than from a probability-based design.


The results from the survey will be used to help understand participants’ reactions, responses, and willingness to complete a demographic survey given the confidentiality pledge they read. The study will assess how participants understand and react to the confidentiality language, and it will contribute to the basic survey research literature on privacy and confidentiality in surveys. In addition, the study will provide insight into how individuals, especially those paid on an hourly, weekly, or monthly basis, interpret questionnaire items on basic annual earned income.


NCSES will use MTurk to recruit participants for this survey. Once participants are recruited, they will be given a link to the online survey instrument, which will be hosted by NCSES’s contractor for this research, Mathematica Policy Research, Inc. Participants will receive one of three confidentiality statements; all participants will receive the same questions concerning earned income. The data collected as part of this survey will be stored on Mathematica Policy Research, Inc. servers.
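
For illustration, the following is a minimal Python sketch of balanced random assignment to the three pledge conditions, assuming assignment is randomized as in a typical split-ballot design; the condition labels and seed are placeholders, not the pledge texts in Attachment 1.

```python
# Minimal sketch of balanced random assignment to the three confidentiality
# pledge conditions; labels are placeholders, not the actual pledge wording.
import random

CONDITIONS = ["pledge_A", "pledge_B", "pledge_C"]

def assign_conditions(participant_ids, seed=2017):
    """Shuffle participants, then deal them round-robin so each of the
    three conditions receives an equal share (200 of 600)."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

assignments = assign_conditions(range(600))
# Each condition receives exactly 200 participants.
```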


Participants


NCSES will use MTurk to recruit 600 participants to ensure approximately 200 participants for each confidentiality pledge condition being tested. In order to be recruited for the survey, individuals must have a bachelor’s degree or higher and live in the United States.
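
As a rough illustration of how such screening can be configured, the sketch below uses the boto3 MTurk client to restrict a HIT to U.S.-based workers via MTurk’s built-in Locale qualification. The title, reward, lifetime, and question file are hypothetical; MTurk has no built-in bachelor’s-degree qualification, so that criterion would in practice require a screener question or a custom qualification.

```python
# Illustrative sketch of posting a HIT restricted to U.S.-based workers using
# boto3. Title, lifetime, and question file are placeholders; the bachelor's-
# degree requirement would need a screener or custom qualification in practice.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

us_only = {
    "QualificationTypeId": "00000000000000000071",  # built-in Locale qualification
    "Comparator": "EqualTo",
    "LocaleValues": [{"Country": "US"}],
}

hit = mturk.create_hit(
    Title="15-minute demographic survey (hypothetical)",
    Description="Complete a short online survey hosted off-site.",
    Reward="2.00",  # matches the up-to-$2.00 incentive described in the memo
    AssignmentDurationInSeconds=3600,
    LifetimeInSeconds=7 * 24 * 3600,
    MaxAssignments=600,
    QualificationRequirements=[us_only],
    Question=open("external_question.xml").read(),  # ExternalQuestion pointing to the survey link
)
```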


The proposed sample size takes into account the anticipated loss of information due to break-offs, incomplete responses, and participants who do not follow the task instructions. Similar sample sizes have been used for studies of this nature (e.g., Shapiro et al., 2013; Edgar, 2016).


Burden Hours


With 600 participants each taking 15 minutes to complete the survey, a total of 150 burden hours are requested for this study.
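
(600 participants × 15 minutes = 9,000 minutes, or 150 hours.)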

Payment to Participants and Data Protections


Participants will receive up to $2.00 for participating in the survey, a typical rate for similar tasks.



At the beginning of the survey, participants will be informed of the OMB control number, the expected survey completion time, and the voluntary nature of the study. In addition, participants will be informed that the data they provide in this study will reside on a server outside of the NCSES domain and that NCSES cannot guarantee the protection of survey responses.





Survey Schedule


The tentative schedule for the survey is as follows:

Proposed Date		Activity/Deliverable
September 1, 2017	OMB submission for approval
September 15, 2017	OMB clearance
September 22, 2017	Launch survey
October 6, 2017		Survey due date
October 20, 2017	Final report


Contact Person


Flora Lan

Project Officer

Human Resources Statistics Program

National Center for Science and Engineering Statistics

National Science Foundation

flan@nsf.gov

703-292-4758



Attachment 1: Questionnaire

References

Chandler, J., Poznyak, D., Sinclair, M., and Hudson, M. (2017). “Use of Online Crowdsourcing and Online Survey Sample Providers for Survey Operations.” Mathematica Policy Research report for the National Center for Science and Engineering Statistics.

Edgar, J. (2016). “OMB Clearance 1220-0141: ‘Submission of Materials for Testing of Revised CIPSEA Pledge’.”

Kaplan, R. (2017). “OMB Clearance 1220-0141: ‘Cognitive and Psychological Research’.”

Mullinix, K. J., Leeper, T. J., Druckman, J. N., and Freese, J. (2015). “The Generalizability of Survey Experiments.” Journal of Experimental Political Science, 2(2), 109–138. doi:10.1017/XPS.2015.19.

Shapiro, D. N., Chandler, J., and Mueller, P. A. (2013). “Using Mechanical Turk to Study Clinical Populations.” Clinical Psychological Science, 1(2), 213–220.




