Memo

Generic Clearance for Questionnaire Pretesting Research

OMB Control Number: 0607-0725

Expiration Date: 8/31/2016


I. Introduction

The Cybersecurity Enhancement Act of 2015 requires the installation of the Department of Homeland Security's Einstein cybersecurity protection system on all Federal civilian information technology systems by mid-December 2016.  Combined with DHS's stated policies, this potentially compromises the absolute nature of the Federal statistical system's (FSS) confidentiality pledges, because statistical agencies can no longer pledge that respondents' data will be seen only by a statistical agency's employees or its sworn agents.  Consequently, the FSS needs to develop a revised confidentiality pledge that informs respondents of this change in circumstances.


To optimize the effect of a revised confidentiality pledge, we need to conduct research to determine respondents' comprehension of, and reaction to, the revised pledge language. We will also explore wording options to determine which may minimize any negative impact. The Census Bureau will focus on testing the revised confidentiality pledge with household respondents, as other agencies are testing the wording for use in establishment surveys. The Census Bureau will be working with an interagency group, sharing research designs and findings to assist other statistical agencies in conducting similar testing for other respondent groups. Agencies for which the Census Bureau regularly conducts surveys, as well as those with more limited pretesting capabilities, will use results from the Census Bureau's testing when making changes to their own confidentiality pledges. It is possible that the tested changes may have differential effects on various classes of respondents, such as households, small establishments, large establishments/enterprises, farmers and ranchers, and educational or medical institutions. The Census Bureau has three versions of the revised language that we will test (Attachment A).

II. Methodology

This study will use a multi-pronged approach to collect the necessary information in the most time-efficient way possible.

  1. Primary purpose collection.

Participants will be recruited for the sole purpose of testing the confidentiality language. We will recruit typical cognitive interview participants who can serve as proxies for household survey respondents. A standard cognitive interview approach will be used, with the additional use of eye-tracking software to monitor the reading behavior of some participants. Interviews will be conducted in person at the Center for Survey Measurement's usability lab. The interview is expected to take less than one hour; the draft protocol is enclosed, and the full study plan is in Attachment B.

  2. Add-on to other studies.

Participants in other Census Bureau cognitive research efforts will be given the confidentiality protocol at the end of the other study. The main studies, conducted independently of the confidentiality research as needed, will either use nine or fewer participants or have their own OMB clearance.

  3. Online data collection: Survey Monkey.

To obtain reactions from a larger and more diverse respondent pool, a sample of households will be selected from the Census Bureau Contact Frame to receive email invitations to complete a Survey Monkey survey. For the Survey Monkey portion of the study, we will randomly assign respondents to read one of the versions of the confidentiality pledge. Participants will be asked a series of questions to gauge their recall and comprehension of the confidentiality pledge, as well as its effect on their willingness to provide information to the Census Bureau. Participants will then be asked several demographic questions to provide background information about respondents. The study plan is included in Attachment C and the questionnaire is enclosed.



III. Participants

  1. Primary purpose collection


Up to 40 participants will be recruited for the sole purpose of testing the confidentiality language. We will recruit from the existing CSM participant database, as well as from online recruitment advertisements.


  2. Add-on to other studies


Up to 10 participants will complete the confidentiality testing as a part of other Center for Survey Measurement research.


  3. Online data collection: Survey Monkey


Up to 10,000 MAFIDs (Master Address File IDs) will be selected to receive email invitations for the study. Each participant will be randomly assigned to read one of four versions of the confidentiality pledge.
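
The paragraph above describes random assignment of each sampled case to one of the four pledge versions. The following is a minimal sketch of that assignment step, not part of the clearance package; the version labels and case IDs are placeholders, and the actual assignment procedure may differ.

```python
import random

# Placeholder labels for the four pledge versions described in this memo.
PLEDGE_VERSIONS = [
    "control (current wording)",
    "conservative revision",
    "liberal revision",
    "terms & conditions wording",
]

def assign_conditions(case_ids, seed=2016):
    """Randomly assign each case ID to one pledge version (equal probability)."""
    rng = random.Random(seed)  # fixed seed so the assignment can be reproduced
    return {case_id: rng.choice(PLEDGE_VERSIONS) for case_id in case_ids}

# Example with ten placeholder case IDs.
assignments = assign_conditions([f"case_{i:05d}" for i in range(10)])
for case_id, version in assignments.items():
    print(case_id, "->", version)
```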


IV. Burden Hours

The burden hours for all prongs of this research are shown in Table 1.







Table 1. Estimated Burden Hours (all burden figures in minutes unless otherwise noted)

| Prong | # of Participants Screened | Minutes per Participant (Screening) | Total Screening Burden | Maximum Number of Participants | Minutes per Participant (Data Collection) | Total Collection Burden | Total Maximum Burden (Screening + Collection) |
| 1. Primary purpose collection | 80 | 5 | 400 | 40 | 60 | 2,400 | 2,800 |
| 2. Add-on to other studies | 10 | 0 | 0 | 10 | 60 | 600 | 600 |
| 3. Online data collection: Survey Monkey | 10,000 | 5 | 50,000 | 200 | 20 | 4,000 | 54,000 |
| Total Burden | | | | | | | 57,400 minutes (956.7 hours) |
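
As a cross-check on Table 1, the sketch below reproduces its arithmetic (participants multiplied by minutes for screening and data collection, summed and converted to hours). It is illustrative only and uses the figures stated in the table.

```python
# Reproduce the burden arithmetic in Table 1.
prongs = {
    "Primary purpose collection":            dict(screened=80, screen_min=5, max_participants=40, collect_min=60),
    "Add-on to other studies":               dict(screened=10, screen_min=0, max_participants=10, collect_min=60),
    "Online data collection: Survey Monkey": dict(screened=10_000, screen_min=5, max_participants=200, collect_min=20),
}

total_minutes = 0
for name, p in prongs.items():
    screening = p["screened"] * p["screen_min"]            # screening burden in minutes
    collection = p["max_participants"] * p["collect_min"]  # collection burden in minutes
    total_minutes += screening + collection
    print(f"{name}: {screening:,} + {collection:,} = {screening + collection:,} minutes")

print(f"Total burden: {total_minutes:,} minutes ({total_minutes / 60:.1f} hours)")  # 57,400 minutes (956.7 hours)
```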







V. Payment to Participants

  1. Primary purpose collection


Participants will be paid an honorarium of $40.


  2. Add-on to other studies


Participants who complete this research at the end of another study in the CSM cognitive laboratory will not receive any additional payment for completing this research study.


  3. Online data collection: Survey Monkey


Participants will receive no compensation for their participation.



VI. Data Confidentiality

    1. In person collection (Primary purpose collection, add-on to other study collection, and focus groups)

Participants will be given a consent form before the start of the study. The consent form will use the current confidentiality wording, as prior research has shown that participants do not focus on the language on the form and we do not expect it to interact with the in-depth probing of the revised language.

    2. Online collection (Survey Monkey)

Participants will be recruited by email. Once participants are recruited into the study, they will be given a link to the survey, which is hosted by Survey Monkey. The data collected as part of this study will be stored on Survey Monkey servers.

Participants will be informed of the OMB number and the voluntary nature of the study.

This is a survey for the U.S. Census Bureau. This voluntary study is being collected by the Census Bureau under OMB No. 0607-0725. This survey will take approximately 10 minutes to complete. Your participation is voluntary and you have the right to stop at any time.


We are looking for information about how respondents answer our surveys. Please take your time as you answer these questions. The information you provide will contribute to valuable research at the Census Bureau.


This survey is being administered by Survey Monkey and resides on a server outside of the Census Bureau domain. The Census Bureau cannot guarantee the protection of survey responses and advises against the inclusion of sensitive personal information in any response. By proceeding, you give your consent to participate in this study.



VII. Attachments

Attachment A: Proposed revised confidentiality language

Attachment B: Study Plan for In Person Interviews

Attachment C: Study Plan for Survey Monkey web survey



Attachment A: Proposed revised confidentiality language



Note: Bolded font is for review purposes and will not be shown to participants.



Standard Confidentiality Pledge

The U.S. Census Bureau is required by U.S. law to keep your answers confidential. This means that the Census Bureau cannot give out information that identifies you or your household to anyone, including other government agencies.

We are conducting this survey under the authority of Title 13, United States Code, Sections 141 and 193. Federal law protects your privacy and keeps your answers confidential (Title 13, United States Code, Sections 9 and 214).


Conservative Confidentiality Revision

The Census Bureau is required by U.S. law to protect your information (Title 13, United States Code, Sections 9 and 214). This means that the Census Bureau will only use your responses for statistical purposes and cannot give out information that identifies you or your household. Your data are further protected by the Department of Homeland Security through security monitoring of the systems that transmit your data.


Liberal Confidentiality Revision

The Census Bureau is required by U.S. law to protect your information (Title 13, United States Code, Sections 9 and 214). This means that the Census Bureau will only use your responses for statistical purposes and cannot give out information that identifies you or your household. Further, Census Bureau information systems are protected by Federal employees and contractors through security monitoring of the systems that transmit your data.



Online Terms & Conditions

You are accessing a United States Government computer network. Any information you enter into this system is confidential and may be used by the Census Bureau for statistical purposes, as well as for other uses, such as improving the efficiency of our programs. If you want to know more about the use of this system, and how your privacy is protected, visit our online privacy webpage at http://www.census.gov/privacy/privacy-policy.html. Use of this system indicates your consent to us collecting, monitoring, recording, and using the information that you provide for any lawful government purpose.


So that our website remains accurate, safe, and available to you and all other visitors, we monitor network traffic to identify unauthorized attempts to access, upload, or change information or otherwise cause damage to the web service. Your use of this system may be monitored, recorded, and subject to audit. Using the network connection for unauthorized purposes is a violation of Federal law and can be punished with fines or imprisonment (Public Law 99-474).



Attachment B: In Person Study Plan

  1. Study Objective

Do people read / remember the part of the letter that includes information about monitoring of the data? Is this different by condition?

Do people report that they would not want to give data based on the wording? Is this different by condition?


  2. Background and rationale

The Census Bureau is part of an interagency team that is investigating respondent behavior toward revisions to the confidentiality language that appears on the various materials respondents may see, including the initial letter, the letter handed to Nonresponse Followup (NRFU) respondents, and the initial login screen for the self-response instrument.


  3. Methodology

This study will consist of in-person usability tests, using eye-tracking capabilities for half of the “paper” materials sessions and for all of the online web sessions. The usability study will follow a multi-method approach that includes free recall, a word recognition test, a forced-choice test, an “in your own words” debriefing, general debriefing probes, and satisfaction questions. This methodology was first developed by the Census Bureau usability lab for a study with the Economic Directorate (Wang, Olmsted-Hawala, Willimack, & Stack, 2016).


  4. Experimental design


There will be three paper conditions and one electronic website condition. The four conditions include: a control (current wording), a conservative wording similar to the BLS language, a liberal wording similar to the BLS language, and a modified version of the terms and conditions language that appears on the login page of the 2016 Census Test. The letter conditions will be crossed with mode: half of the participants will receive the paper materials and half will receive an online PDF version of the letter to allow data collection with eye tracking. However, the terms and conditions wording that appears for the online 2016 Census Test will not be presented on paper, as participants would never be exposed to it on paper in real life.
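
To make the resulting design cells explicit, the sketch below enumerates them, assuming the allocation described in the Participants section that follows (10 participants per condition; letter conditions split 5 paper / 5 electronic PDF; the website condition entirely electronic). It is illustrative only.

```python
# Enumerate the design cells: condition x presentation mode x planned n.
LETTER_CONDITIONS = ["control (current wording)", "conservative revision", "liberal revision"]
WEB_CONDITION = "online terms & conditions"

cells = []
for condition in LETTER_CONDITIONS:
    cells.append((condition, "paper letter", 5))
    cells.append((condition, "electronic PDF (eye tracking)", 5))
# The terms & conditions wording is only ever encountered online, so it has no paper cell.
cells.append((WEB_CONDITION, "electronic web page (eye tracking)", 10))

for condition, mode, n in cells:
    print(f"{condition:30s} | {mode:32s} | n = {n}")
print("Total participants:", sum(n for _, _, n in cells))  # 40, matching the recruitment target
```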


  5. Participants

There will be 10 participants per condition. For each letter condition, 5 will be exposed to paper and 5 to an electronic PDF so that eye tracking on a laptop can occur. For the website condition, all 10 will be electronic. We will aim to recruit participants with a range of demographic characteristics, including age, sex, household composition, race, and education. The following approaches will be used in recruitment:

    1. Word of mouth, craigslist ads, flyers posted in libraries, and the recruiting database, where appropriate

    2. Inter-agency panel assistance with recruiting by contacting their sources


  6. Testing task

For the paper letter versions, participants will be asked to read the letter through once and then will be asked a series of probes about what they have read. They will then be asked to reread a specific portion of the letter and will again be asked a few follow-up probes. See the attached draft protocol. If the lab equipment allows, we may have participants in the paper letter version wear eye-tracking glasses. These are non-invasive and should not alter reading behavior.


For the electronic task, participants will be asked to read the electronic wording on a laptop while their eyes are being tracked. Participants will be asked a set of probes about what they have read. They will then be asked to read a specific portion of the electronic web page and will again be asked a series of follow-up probes. See the attached draft protocol. The laptop eye tracking is also non-invasive, but because the letter is put into a PDF format it will not be precisely what participants would receive at home, and their interactions with the materials may differ.


  7. Measures

Eye tracking - For the web condition, we will identify differences by condition in eye fixations (instances where the eyes are relatively still, an indication of interest/attention). More fixations could indicate confusion or engagement, depending on the context. Post-session probes and debriefing comments will be used to interpret longer and shorter fixation counts.


For the eye tracking, we will also investigate the number of fixations per character. More fixations per character in a specific condition could indicate that the section was more difficult to understand, or more burdensome, than conditions with fewer fixations per character. Fewer fixations per character in a paragraph can also indicate that a passage was skimmed by the participant. Post-session probes and debriefing comments will be used in conjunction with the fixations-per-character data.
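
As an illustration of this metric, the sketch below computes fixations per character for each condition; the passage lengths and fixation counts are made-up placeholder values, not study data.

```python
# Fixations per character = total fixations on the passage / passage length in characters.
passage_chars = {   # character count of each pledge version (placeholder values)
    "control": 310,
    "conservative": 355,
    "liberal": 360,
    "terms & conditions": 620,
}
mean_fixations = {  # mean fixation count on the passage (placeholder values)
    "control": 62,
    "conservative": 88,
    "liberal": 79,
    "terms & conditions": 95,
}

for condition, chars in passage_chars.items():
    rate = mean_fixations[condition] / chars
    print(f"{condition:20s} {rate:.3f} fixations per character")
```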


We will analyze user comments by condition and identify whether any of the conditions would be more or less likely to impact response rates. We will identify whether the language in any of the conditions leads respondents to express more concerns than the other conditions. We will identify whether users report that one condition would lead them to refuse to give their data, or would increase their unwillingness to give their data.

Eye-tracking data from the first pass through the online instrument login page can be compared to eye-tracking data from the second pass through that page. We will learn whether participants notice, read, and (with the follow-up probes) recall the confidentiality information initially, without being told to read that specific section. If participants do not read the passage on the first pass (a strong possibility for the online login page), this information will speak to the degree of impact that any wording changes in the confidentiality sections will have on participant response rates. That is, if participants do not notice the passage initially on their own, it may not have as great an effect on behavior.


We will also have efficiency data to see whether any of the conditions take longer to read. An increase in reading duration could indicate that a passage is more difficult to understand.

Finally, we will collect satisfaction measures and see if there are differences by condition.


References

Wang, L., Olmsted-Hawala, E., Willimack, D., & Stack, E. (2016). “A multi-method approach to evaluate participants’ perception of the voluntary statement and burden estimate in web survey and pre-notification letters.” Presented at FEDCASIC, May 3, 2016.



Attachment C: Online Data Collection Study Plan



  1. Study Objective

Do people read / remember the part of the confidentiality pledge that includes information about monitoring of the data? Is this different by condition?


Do people report that they would not want to give data based on the wording of the pledge? Is this different by condition?


  2. Background and rationale

The Census Bureau is part of an interagency team that is investigating respondent behavior toward revisions to the confidentiality language that appears on the various materials respondents may see, including the initial letter, the letter provided to Nonresponse Followup (NRFU) respondents, and the initial login screen for the self-response instrument.


  3. Methodology

Ten thousand sampled MAFIDs from the Census Bureau Contact Frame will be contacted via email with an invitation to complete an online survey. Up to two reminder emails will be sent to email addresses not associated with a survey response. Data will be collected on the Survey Monkey platform (no PII will be requested).


  4. Experimental design


This is a between-subjects study design with four conditions. The four conditions include: a control (current wording), a conservative wording similar to the BLS language, a liberal wording similar to the BLS language, and a modified version of the terms and conditions language that was on the login page of the 2016 Census Test.


  5. Participants

There will be approximately 50 participants per condition (assuming a 2% response rate). Use of the Contact Frame to select a sample should ensure that sampled participants represent a range of demographic characteristics, including age, sex, household composition, race, and education (although sampling will be limited to individuals who have an email address). All invitations and reminders will be issued via email.
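
The expected yield behind the "approximately 50 participants per condition" figure can be shown with a short calculation; this is a sketch under the stated assumptions of a 2% response rate and an even split across the four conditions.

```python
# Expected completes per condition under the stated assumptions.
invitations = 10_000
response_rate = 0.02   # assumed response rate
conditions = 4

expected_completes = invitations * response_rate   # about 200 completed surveys
per_condition = expected_completes / conditions    # about 50 per condition
print(f"Expected completes: {expected_completes:.0f}; per condition: {per_condition:.0f}")
```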


  6. Testing task

In a Survey Monkey web survey, participants will be asked to read one of four versions of the confidentiality language and then to answer questions about the language and their reactions to it. See the attached draft protocol. Some demographic information will also be requested, using ranges and categories to avoid collection of PII.


  7. Measures

Measures include a rephrasing of the confidentiality language in the respondent's own words, willingness to provide information to the Census Bureau based on the language, and closed-ended questions about the meaning of the text. Demographic questions will ask about sex, age, education, and employment status. See the attached draft protocol.


We will analyze user responses by condition and identify whether any of the conditions would be more or less likely to impact response rates or understanding. We will identify whether the language in any of the conditions leads respondents to express more concerns than the other conditions. We will identify whether users report that one condition would lead them to refuse to give their data, or would increase their unwillingness to give their data.


Although item-level timing is not available in Survey Monkey, total time to complete the survey will be examined as an alternative possible indicator of text difficulty or respondent confusion.


