Attachment D ● Supporting evidence on study to assess strategies to
improve response rates
Impact Study of Alternative Strategies to Increase Response Rates
Objective
Given declining national trends in survey response rates (National Research Council 2013), the
National Science Foundation (NSF) is interested in rigorously testing the effect of alternative follow-up strategies with nonrespondents to increase survey response rates and minimize nonresponse bias
in online surveys. This study will examine whether harder-to-reach participants are more likely to
respond to surveys if they receive a personalized follow-up email from their principal investigator
(PI) or are contacted via social media.
Overall Approach
This study will examine the relative effectiveness of two higher-cost survey follow-up methods for
NSF participants who do not initially respond to the survey. These harder-to-reach participants will
be randomly assigned to one of three conditions:
1. Receiving a follow-up email automatically generated by the NSF Education and Training
Application (ETAP) system participants had used to apply to the program (the low-cost control
condition)
2. Receiving the automatic follow-up email (described above), plus a follow-up email from the
principal investigator of the award participants had been involved in (the first higher-cost
treatment condition)
3. Receiving the automatic follow-up email (described above), plus a follow-up message from NSF
ETAP program staff via social media (the second higher-cost treatment condition)
The study will include all ETAP Sites (that is, NSF awards pilot-testing the ETAP system) and
leverage one of the participant surveys administered through the system (see the Timing section for
additional information). Those who do not initially respond to the survey will be randomized to
receive additional follow-up. We expect about 30 percent of participants not to respond to the
survey initially.4

4 This is based on an analysis of the pilot test of the exit survey, previously tested with a sample of 2019 participants who used
the REU data system (the predecessor of the ETAP system) to apply to NSF's Research Experiences for Undergraduates
(REU) program; that pilot found that 28% of participants did not respond to the survey (Mathematica 2020).
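To make the assignment step concrete, the sketch below illustrates one way the harder-to-reach participants could be randomized to the three conditions. It is illustrative only: the field names, the fixed seed, and the blocking of the randomization within each ETAP Site are assumptions made for this example, not features specified in the study design.

    import random

    # Illustrative sketch only: assign harder-to-reach participants (initial
    # nonrespondents) to the control and two treatment conditions. Blocking the
    # randomization within each ETAP Site and the field names are assumptions
    # made for this example, not documented features of the study.
    CONDITIONS = ["control_etap_email", "treatment_pi_email", "treatment_social_media"]

    def assign_conditions(nonrespondents, seed=12345):
        """nonrespondents: list of dicts with hypothetical 'participant_id' and 'site_id' keys."""
        rng = random.Random(seed)  # fixed seed so the assignment can be reproduced and audited
        by_site = {}
        for person in nonrespondents:
            by_site.setdefault(person["site_id"], []).append(person["participant_id"])
        assignments = {}
        for site, ids in by_site.items():
            rng.shuffle(ids)
            for i, pid in enumerate(ids):
                # Cycle through the three conditions within each Site.
                assignments[pid] = CONDITIONS[i % len(CONDITIONS)]
        return assignments

Cycling through the three conditions within each Site keeps the arms roughly balanced even when a Site contributes only a handful of nonrespondents.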
Treatment and Control Conditions
The survey will be web based, since university students are more likely to respond to web surveys
than to paper surveys (Shih & Fan, 2008). All participants will receive a prenotification via SMS (if
they have consented to receive text messages when applying to the program) or via email (if they
have not consented to SMS), as well as survey invitations via email. Prior research has shown that
undergraduate and graduate students are significantly more likely to respond to a survey if they are
prenotified about it, and that they are substantially more likely to respond if they are prenotified via
SMS rather than via email. Students are also more likely to respond if they receive the survey
invitation link via email rather than via SMS (Bosnjak et al., 2008). Because these methods have
been shown to be effective in similar populations and are relatively low cost, we will apply them in
all three conditions (see Table 1 for a summary of the control and treatment conditions).
Table 1. Control and treatment conditions

Component | Control: Email from ETAP system | Treatment 1: Email from ETAP + email from PI | Treatment 2: Email from ETAP + email through social media
Prenotification via SMS or email | √ | √ | √
Survey invitation via SMS and email | √ | √ | √
Survey administered via web | √ | √ | √
Follow-up reminder for nonrespondents via: |  |  | 
  Email from NSF ETAP system (automatic system-generated weekly reminders sent to nonrespondents until the survey closes)a | √ | √ | √
  Email from principal investigatora |  | √ | 
  Email through social media |  |  | √

a Draft emails are included at the end of this document.
Other research has shown that survey participants are more likely to respond if they receive a
follow-up communication, but there is little research on the impacts of different follow-up
communication strategies (Neal, Neal & Piteo, 2020; Robbins et al., 2018). The study will test the
impact of two higher-cost follow-up strategies relative to a low-cost follow-up strategy. The two
higher-cost strategies are 1) encouraging PIs to send follow-up emails to their students and 2)
contacting students through the social media accounts they provided as part of their application to
the program to remind them to complete the survey. The low-cost follow-up strategy consists of
automatic system-generated emails reminding nonrespondents to complete the survey. Some studies
have shown a large impact of receiving a phone call, but on different samples than the one being
tested in this study (for example, hard-to-reach school principals) (Neal, Neal & Piteo, 2020). Last,
because participants and PIs involved in the study are current or past beneficiaries of NSF funding,
they will not receive monetary incentives to complete the survey or participate in the study under
any condition.
Analysis and Minimum Detectable Effects
We will conduct two types of analyses. First, the implementation analysis will measure the
proportion of PIs who sent reminders to students (among those encouraged to do so); to track
implementation, we will ask PIs to copy the study team on any reminder emails they send to their
students. We will also document the proportion of nonrespondents who could be reached through a
social media account.5 Second, the impact analysis will measure the impact of being assigned to
each of the three conditions. The study will compare the two higher-cost treatment conditions to
the control condition, and if sample sizes allow, we will also compare each of the conditions to one
another. The study will also estimate the impact of receiving a follow-up reminder from a PI (that
is, a treatment-on-the-treated analysis).

5 In a prior survey effort, NSF reached students who participated in international research experiences funded by
REU and IRES awards made in 2013 (Speroni 2020; 2021). In that survey administration, we found that 60 percent
of nonrespondents had a LinkedIn account.
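The document does not specify the estimator, so the sketch below shows only one standard way the comparisons could be computed: simple differences in response rates between each treatment arm and the control arm (intent-to-treat), with the treatment-on-the-treated effect of the PI reminder recovered by dividing the intent-to-treat estimate by the share of the PI-email arm whose PI actually sent a reminder (a Bloom-style adjustment). All variable and arm names are hypothetical.

    import numpy as np

    def itt_and_tot(responded, condition, pi_sent_reminder):
        """Intent-to-treat and treatment-on-the-treated estimates (hypothetical variable names).

        responded: 1/0 per participant, completed the survey after follow-up
        condition: one of "control", "pi_email", "social_media" per participant
        pi_sent_reminder: 1/0 per participant, whether the PI actually sent a reminder
        """
        responded = np.asarray(responded, dtype=float)
        condition = np.asarray(condition)
        pi_sent_reminder = np.asarray(pi_sent_reminder, dtype=float)

        control_rate = responded[condition == "control"].mean()

        # Intent-to-treat: difference in response rates between each treatment arm and control.
        itt_pi = responded[condition == "pi_email"].mean() - control_rate
        itt_social = responded[condition == "social_media"].mean() - control_rate

        # Treatment-on-the-treated for the PI email (Bloom adjustment): scale the ITT
        # estimate by the share of the PI-email arm whose PI actually sent a reminder.
        takeup = pi_sent_reminder[condition == "pi_email"].mean()
        tot_pi = itt_pi / takeup if takeup > 0 else float("nan")

        return {"itt_pi": itt_pi, "itt_social": itt_social, "tot_pi": tot_pi}

In practice these contrasts would likely be estimated in a regression that adjusts for the covariates assumed in the power calculations, but the underlying arithmetic of the comparisons is the same.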
With two treatment conditions to be tested jointly, the study will need at least 1,575 harder-to-reach
participants to be able to detect differences of at least 8 percentage points between each of the
treatment arms (Table 2). For the study to have 1,575 harder-to-reach participants, it would need to
include about 315 Sites. This assumes that 50% of participants will be harder to reach (that is, will
not respond initially to the survey) and that there will be 10 participants per Site. If we test one
treatment condition at a time (leveraging the multiple years over which this survey will be
administered), we will need fewer participants (1,050 harder-to-reach participants) to detect
differences between treatment and control of similar magnitude (at least 8 percentage points).
Detecting smaller impacts would require more participants.
Table 2. Minimum number of hard-to-reach participants needed to detect an effect of 8 percentage
points or greater

Minimum detectable effect of: | 8 percentage points | 12 percentage points
Two treatment conditions (treatments vs. control) | 1,575 | 675
One treatment condition (treatment vs. control) | 1,050 | 450

Note: Hard-to-reach participants are those who do not initially respond to the survey. For these power
calculations, we assume that the average response rate among nonrespondents is 40% and that the
proportion of individual-level variance in response rates explained by covariates (i.e., R-squared)
is 0.13. Both statistics are from the exit survey pilot (tested with a sample of 2019 participants
who used the ETAP system to apply to the program).
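The assumptions in the note can be combined with a standard two-arm minimum detectable effect formula to reproduce the figures in Table 2. The sketch below assumes 80 percent power and a 5 percent two-sided test (a multiplier of roughly 2.8); those conventions are assumptions of the example, not stated in this document.

    import math

    def mde(n_per_arm, p=0.40, r_squared=0.13, multiplier=2.8):
        """Minimum detectable effect for a two-arm comparison of response rates with equal arms.

        p: assumed response rate among nonrespondents (40%, per the table note)
        r_squared: variance in response explained by covariates (0.13, per the table note)
        multiplier: roughly 2.8 for 80% power and a 5% two-sided test (an assumption here)
        """
        residual_variance = p * (1 - p) * (1 - r_squared)
        std_error = math.sqrt(residual_variance * (2.0 / n_per_arm))  # SE of a difference in proportions
        return multiplier * std_error

    # 1,575 hard-to-reach participants split evenly across three arms -> 525 per arm
    print(round(mde(525), 3))  # about 0.079, i.e., roughly 8 percentage points
    # 450 participants split evenly across two arms -> 225 per arm
    print(round(mde(225), 3))  # about 0.121, i.e., roughly 12 percentage points

Because each pairwise comparison depends only on the number of participants per arm, testing one treatment at a time (two arms instead of three) reaches the same per-arm size with fewer total participants.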
Timing
Depending on the success of ETAP pilot recruitment, we will determine whether to test both
treatment conditions jointly in the 2022 exit survey administration or to implement one treatment
condition in the 2022 exit survey and the other in the 2023 exit survey. We will also consider
implementing one treatment condition in the high-stakes employment survey planned for 2023.
That survey is expected to have a lower initial response rate (because it tracks students several years
after program participation), which could allow the strategies tested to show larger impacts.
References
Bosnjak, Neubarth, Couper, Bandilla & Kaczmirek (2008). Prenotification in Web-Based Access
Panel Surveys: The Influence of Mobile Text Messaging Versus E-Mail on Response Rates and
Sample Composition. Social Science Computer Review 26(2).
Mathematica (2020). “Research Experiences for Undergraduates (REU) Data System Pilot Analyses:
REU Participant Exit Survey.” Findings submitted to NSF. Washington, DC: Mathematica.
National Research Council (2013). Nonresponse in Social Science Surveys: A Research Agenda.
Roger Tourangeau and Thomas J. Plewes, Editors. Panel on a Research Agenda for the Future of
Social Science Data Collection, Committee on National Statistics, Division of Behavioral and Social
Sciences and Education. Washington, DC: The National Academies Press.
Neal, Neal & Piteo (2020). Call Me Maybe: Using Incentives and Follow-ups to Increase Principals’
Survey Response Rates. Journal of Research on Educational Effectiveness 13(4).
Robbins, Grimm, Stecher & Opfer (2018). A Comparison of Strategies for Recruiting Teachers into
Survey Panels. SAGE Open 8(3).
Shih & Fan (2008). Comparing Response Rates from Web and Mail Surveys: A Meta-Analysis. Field
Methods 20(3): 249-271.
Speroni, Cecilia (2020). Evaluation of the National Science Foundation's International Research
Experiences for Students (IRES) Program: Findings from a Survey of Former Participants.
Washington, DC: Mathematica.
Speroni, Cecilia (2021). National Science Foundation’s International Research Experiences for
Undergraduates: A Comparative Analysis of the IRES and REU Programs. Washington, DC:
Mathematica.
Draft Emails
All conditions: Prenotification SMS
Hi, it’s the NSF Education and Training Application (ETAP) system. We will be sending you a short
survey soon to learn more about your experiences and satisfaction with the program. Please
respond! Your survey response will help improve the NSF [program name]. Go to
www.nsfetap.org/ to learn more about the upcoming survey.
All conditions: Email text for initial email with survey link
Subject line: NSF [program name]
Dear [participant name],
Professor [PI name] of [institution] identified you as a participant in the
National Science Foundation (NSF) [program name].
We hope you will complete a short survey about your experiences and satisfaction with the
program.
• You’ll be done quickly. The survey only takes about 10 minutes to complete.
• Participation is voluntary, but we need you! Your response is critical for producing valid
estimates that can help improve the NSF [program name].
• The questions are not sensitive, and you are free to skip any of them.
• Your answers will be kept confidential and used by NSF or its contractors/grantees
for research and evaluation purposes only.
• Not sure if you participated in this NSF program? Click the link to the survey below and
answer the first few questions to determine whether you are eligible for this survey.
Please complete the survey by [date].
Questions? Contact the study team at help@nsfetap.org or 1-800-232-8024.
Thanks in advance!
ETAP system administrator
help@nsfetap.org
1-800-232-8024
Click here to begin this survey
All conditions: SMS text for initial survey invitation with survey link
Hi [participant name],
Please respond to a short survey @ https:// or use the link sent to your email.
NSF ETAP system administrator
help@nsfetap.org
1-800-232-8024
STOP=TextOptOut
All conditions: Email text of reminders (for nonrespondents)
Use the same text as the initial email.
Treatment 1: Study instructions for PIs
Subject: NSF – your help contacting participants is needed
Dear Prof. [PI Name Lastname],
As part of the ETAP pilot, NSF is conducting a study to rigorously assess the effectiveness of
alternative follow-up strategies with nonrespondents to increase survey response rates and minimize
nonresponse bias in online surveys. One of these strategies is for nonrespondents to receive an
email from you encouraging them to respond to the survey. We need your help in this study!
We hope you can encourage the participants named below to complete the NSF survey.
Firstname Lastname | Email1 | Email2
Firstname Lastname | Email1 | Email2
As of [date], these participants have not yet responded to the survey. Please reach out to these and
only these participants. NSF has randomly assigned nonrespondents to different strategies, and your
adherence to these instructions is critical to the integrity of the study!
So that we can track implementation, we ask that you please copy help@nsfetap.org on your
email to participants. At the end of this email we provide some suggested language you can use to
contact participants. You may choose to contact participants separately or send one email to
everyone on the list.
If you have any questions, please do not hesitate to contact the ETAP help desk or [name] at
[email] (the NSF contracting office representative).
Thank you for your continued support of this ETAP pilot!
NSF ETAP system administrator
help@nsfetap.org
1-800-232-8024
Draft email template (informal)
Hi [participant name],
Hope all is well!
It has come to my attention that you have not yet responded to the survey NSF recently sent you
about your experiences in the [program name]. I would really appreciate it if you could take a few
minutes to respond to the survey!
You should have received it in an email from noreply@nsfetap.org to your primary email associated
with your ETAP account. If you can’t find it, please contact the ETAP helpdesk at
help@nsfetap.org or 1-800-232-8024.
Thanks!