Evaluation of Employment Coaching for TANF and Related Populations
OMB Information Collection Request
0970-0506
Supporting Statement
Part A
Submitted By:
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officers:
Hilary Bruck
Victoria Kabak
Status of study:
The revisions in this Supporting Statement reflect changes requested as part of a non-substantive change request submitted to OMB in October 2020. This request is part of an ongoing evaluation (OMB #0970-0506). Previous information collection requests (ICRs) covered data collection activities for both an impact study and an implementation study being conducted under this evaluation. Approved data collection activities for the impact study include: (1) baseline data collection and (2) the first and second follow-up surveys. Approved data collection activities for the implementation study include: (1) semi-structured management, staff, and supervisor interviews; (2) a staff survey; (3) in-depth participant interviews; (4) staff reports of participant service receipt; and (5) video recordings of coaching sessions.
This submission requests approval of non-substantive changes to two previously approved implementation study data collection instruments to systematically capture descriptive information related to the 2019 novel coronavirus disease (COVID-19) pandemic. It requests a slight increase to the incentive amount for completion of the additional in-depth participant interviews and to the estimated burden based on conducting this additional data collection. It also requests a change to the incentive structure and amount for two sites for the impact evaluation’s two follow-up surveys, along with minor revisions to the survey instruments and notifications to reflect the changes. The justification for these non-substantive change requests is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.
What is being evaluated (program and context) and measured:
The evaluation is assessing the effectiveness of employment coaching interventions in helping TANF and related populations obtain and retain jobs, advance in their careers, move toward self-sufficiency, and improve self-regulation skills and overall well-being.
Type of study:
The evaluation includes an impact study (individuals are randomly assigned to treatment and control conditions) and an implementation study.
Utility of the information collection:
Coaching may be a promising way to help low-income or at-risk people become economically secure; however, there is little evidence on the effectiveness of coaching for improving employment and self-sufficiency among TANF and other low-income populations. This evaluation will describe six coaching interventions and assess their effectiveness in helping people obtain and retain jobs, advance in their careers, move toward self-sufficiency, and improve self-regulation skills and overall well-being. This information can be used by policymakers to inform funding and policy decisions and by practitioners to improve employment programs. If this information collection does not take place, policymakers and providers of coaching programs will lack high-quality information on the effects of the interventions, as well as descriptive information that can help refine the operation of coaching interventions so they can better meet participants’ employment and self-sufficiency goals.
This non-substantive change request is to collect information regarding how coaching programs in the evaluation changed as a result of COVID-19 and what study participants’ experiences with the pandemic have been. This information is important for understanding the treatment participants received during this time, fully contextualizing the evaluation’s findings, and accounting for the pandemic in the analysis. Understanding the changes made by and the lessons learned from these programs will also help inform other programs’ policies and implementation as the country continues to respond to the pandemic and to future public health emergencies. Additionally, the response rates for the follow-up surveys are at risk of being much lower than anticipated because in-person locating of sample members has stopped due to COVID-19. Previously, OMB approved an increase in the survey incentive amount for four of the six sites in the evaluation; we are now requesting the same increase for the other two sites to help increase response rates and decrease the program-control group response rate differential, thereby avoiding bias in the estimates of the programs’ effectiveness.
A1. Necessity for the Data Collection
The Office of Planning, Research, and Evaluation (OPRE) within the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval for a non-substantive change request in order to systematically capture descriptive information related to the 2019 novel coronavirus disease (COVID-19) pandemic and to change the incentive structure and amount for two sites for the impact evaluation’s follow-up surveys. The justification for these non-substantive change requests is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020. The objective of this evaluation is to provide information on coaching interventions implemented by Temporary Assistance for Needy Families (TANF) agencies and other employment programs. The evaluation will describe up to six coaching interventions and assess their effectiveness in helping people obtain and retain jobs, advance in their careers, move toward self-sufficiency, and improve their overall well-being. The evaluation includes both an experimental impact study and an implementation study.
Previous information collection requests (ICRs; OMB #0970-0506) covered data collection activities for both an impact and an implementation study. Approved data collection activities for the impact study include: (1) baseline data collection and (2) the first and second follow-up surveys. Approved data collection activities for the implementation study include: (1) semi-structured management, staff, and supervisor interviews; (2) a staff survey; (3) in-depth participant interviews; (4) staff reports of participant service receipt; and (5) video recordings of coaching sessions. This current non-substantive change request seeks approval for changes to two previously approved implementation study data collection instruments to reframe some questions and add questions to systematically capture implementation information related to COVID-19. The requested changes are reflected in Attachment D. Semi-structured management, staff, and supervisor interviews_rev and Attachment F. In-depth participant interviews_rev. It also requests a slight increase to the incentive amount for the additional in-depth participant interviews, and to the estimated burden based on conducting additional interviews. Finally, this request seeks approval for changes to the incentive structure and amount for two sites for the impact evaluation’s follow-up surveys and minor revisions to the survey instruments (Attachment C. First follow-up survey_rev and Attachment N. Second follow-up survey_rev) and notifications (Attachment I. Notifications_rev) to reflect the requested changes.
Traditionally, TANF agencies and other employment programs build job search skills, prescribe further education and training, and address barriers to employment, such as those caused by mental health problems or lack of transportation and child care. Despite a variety of strategies implemented over several decades, assistance provided by these programs is insufficient to enable many participants to achieve self-sufficiency (Hamilton 2012). In response, some researchers have suggested that employment programs seeking to help low-income populations find and keep jobs take an alternative approach in which traditional case management is replaced with or supplemented by employment coaching strategies. Long recognized as an effective approach to helping people meet career and personal goals, coaching has drawn increasing interest as a way to help low-income people gain and maintain employment and realize career and family goals (Annie E. Casey Foundation 2007).
Coaching strategies are typically informed by behavioral science and focus on the role of self-regulation skills in finding and keeping a job. Self-regulation skills allow people to intentionally control thoughts, emotions, and behavior (Blair and Raver 2012). They include executive function (the ability to process, filter, and act upon information), attention, metacognition, emotion understanding and regulation, motivation, grit, and self-efficacy. Recent research suggests that poverty can hinder the development and use of self-regulation skills (Mullainathan and Shafir 2013). Research has shown that coaching is a promising way to help low-income or at-risk people. For example, an evaluation of two financial coaching programs for low- and moderate-income people found that the programs reduced debt and financial stress, and increased savings (Theodos et al. 2015). Similarly, coaching has been found to be effective in assisting people with disabilities to obtain employment. The Individual Placement and Support (IPS) model was designed to help clients with disabilities plan for, obtain, and keep jobs consistent with their goals, preferences, and abilities (Wittenburg et al. 2013). In experimental studies, IPS has improved employment outcomes across multiple settings and populations (Davis et al. 2012; Bond et al. 2015). However, there is little evidence on the effectiveness of coaching for improving employment and self-sufficiency among TANF and other low-income populations.
Drawing on the history of coaching in other contexts, some employment programs for low-income people—administered by TANF, other public agencies, and nonprofit organizations—have begun to provide coaches as a means of improving employment and self-sufficiency (Pavetti 2014). These coaches work with participants to set individualized goals and provide support and feedback as they pursue their goals (Ruiz de Luzuriaga 2015; Pavetti 2014). The coaches may take into account self-regulation skills in three ways. First, they may teach self-regulation skills and encourage participants to practice them. This may occur by helping the participant set goals, determining with the participant the necessary steps to reach those goals, modeling self-regulation skills, and providing rewards or incentives. Second, they may help participants accommodate areas where their self-regulation skills are less developed. For example, staff may help participants choose jobs that align well with their stronger self-regulation skills or suggest participants use a cell phone app to remind them of appointments. Third, the coaches may reduce factors that hinder the use of self-regulation skills. They may do this by teaching stress-management techniques or reducing the paperwork and other burdens placed on the participant by the program itself.
To learn more about these practices, OPRE contracted with Mathematica Policy Research and Abt Associates to evaluate the following coaching interventions: MyGoals for Employment Success in Baltimore; MyGoals for Employment Success in Houston; Family Development and Self-Sufficiency program in Iowa; LIFT in New York City, Chicago, and Los Angeles; Work Success in Utah; and Goal4 It! in Jefferson County, Colorado. The follow-up surveys will contribute to the impact study, which will address the effectiveness of each coaching intervention in improving employment, self-sufficiency, and self-regulation outcomes as well as other measures of well-being.
Legal or Administrative Requirements that Necessitate the Collection
There are no legal or administrative requirements that necessitate the data collection. The collection is being undertaken at the discretion of ACF.
A2. Purpose of Survey and Data Collection Procedures
Overview of Purpose and Approach
The information collected through the follow-up surveys will be used to learn about the effectiveness of coaching interventions at improving outcomes for participants in employment programs serving TANF and related populations. This information can be used by policymakers to inform funding and policy decisions. If the information collection does not take place, policymakers and providers of coaching programs will lack high-quality, long-term information on the effects of the interventions.
Research Questions
The follow-up studies will provide data for the impact study to answer the following research questions:
Do the coaching interventions improve participants’ employment outcomes (such as employment, earnings, job quality, job retention, job satisfaction, and career advancement); self-sufficiency (income, public assistance receipt); and other measures of well-being?
Do the coaching interventions improve measures of self-regulation? To what extent do impacts on self-regulation explain impacts on employment outcomes?
Are the coaching interventions more effective for some groups of participants than others?
How do the impacts of the coaching interventions change over time?
Study Design
The study is evaluating the following coaching interventions: MyGoals for Employment Success in Baltimore; MyGoals for Employment Success in Houston; Family Development and Self-Sufficiency program in Iowa; LIFT in New York City, Chicago, and Los Angeles; Work Success in Utah; and Goal4 It! in Jefferson County, Colorado.
MyGoals for Employment Success in Baltimore and Houston
MyGoals is targeted to unemployed or underemployed adults between the ages of 18 and 56 who are receiving housing support from the housing authority. Its objective is to improve self-regulation skills and help participants find solutions to their problems in the short term while increasing their overall economic security and decreasing their reliance on public assistance in the long term. MyGoals is a three-year program. Coaches meet with participants every three to four weeks during the first two years and are encouraged to check in between sessions. They meet with participants less frequently in the third year.
Family Development and Self-Sufficiency Program
Iowa’s Department of Human Rights implements the Family Development and Self-Sufficiency (FaDSS) program through contracts with 17 local agencies across the state. This evaluation includes a subset of these local agencies. FaDSS is funded through the TANF block grant and serves only TANF participants. The objective of the program is to help families achieve emotional and economic independence. FaDSS is targeted to TANF recipients with barriers to self-sufficiency. The coaches meet with participants in their homes at least twice in each of the first three months and then monthly starting in the fourth month, with two additional contacts with the family each month. FaDSS expects to be able to enroll 1,000 people rather than 2,000 for the evaluation.
LIFT – New York City, Chicago, and Los Angeles
LIFT is a national non-profit that provides coaching and navigation services to clients in New York City, Chicago, Los Angeles, and Washington, DC. For the purposes of our evaluation, the New York, Chicago, and Los Angeles subsites will be aggregated and considered a single LIFT site. LIFT’s goal is to help clients find a path toward goal achievement and financial security by matching them with coaches. Clients set short-term and long-term goals and the coach helps clients build an action plan to achieve those goals. The LIFT coaching approach is nondirective and allows clients to choose the goals and milestones they want to work on. LIFT clients are expected to meet with a coach on a regular basis for up to two years. During the first month of the program, clients typically have two or three in-person sessions with a coach. After the first month, clients meet with coaches monthly to discuss progress toward goals and obstacles that are impeding progress. These sessions typically last 60 to 90 minutes.
Work Success – Utah
Work Success is an employment coaching program administered by Utah’s Department of Workforce Services—an agency that oversees TANF, Supplemental Nutrition Assistance Program, Workforce Innovation and Opportunity Act, and other workforce programs. The program is offered statewide in about 30 employment centers (American Job Centers) with one or two coaches per center. The program served about 1,350 clients in 2016, largely concentrated in the greater Salt Lake City area. The objective of the program is to improve employment outcomes by focusing on job placement. Each participant is assigned a coach, who works with him/her to set goals and review progress toward goals. The Work Success coach meets one-on-one with clients daily while they are in the program to discuss their individual goals, steps they will take to achieve those goals, and any challenges they are facing. Coaching also happens in group settings where the coach engages the group in soft skills trainings, identification of skills and strengths, and other group activities.
Goal4 It! Jefferson County, Colorado
Goal4 It! is an evidence-informed, customer-centered framework for setting and achieving goals developed by Mathematica Policy Research. It was designed to be a replicable and sustainable coaching approach that can be used in a TANF, workforce, or other social service environment. Using the Goal4 It! approach, trained coaches help clients set meaningful goals, break goals down into manageable steps, develop specific plans to achieve the steps, and regularly review goal progress and revise their goals and/or plans. Coaches and case managers meet with clients who are not working at least once per month and meet with clients who are working at least once every two months. The first meeting is usually for one hour. Ongoing meetings are 30 or 45 minutes long. Each coach and case manager serves about 45 clients.
The two main criteria for selecting the coaching interventions for the evaluation were that: (1) an evaluation of the program would address ACF’s policy interests and inform the potential development of coaching interventions in the future; and (2) it was feasible to conduct a rigorous impact evaluation of the coaching intervention. To meet the first broad criterion, the program in which the intervention is embedded needed to serve a low-income population and focus on employment, and the coaching intervention should be robust and well implemented. To meet the second broad criterion, random assignment must have been feasible, the potential number of study participants must have been large enough to detect an impact expected from the intervention, and the program’s management and staff must have been supportive of an experimental evaluation.
The follow-up surveys, which are part of the overall impact study, will provide rigorous evidence on whether the coaching interventions are effective, for whom, and under what circumstances. The study is experimental. Participants eligible for the coaching services were asked to consent to participate in the study (Attachment A) and, if consent was given, were randomly assigned to two groups: a treatment group offered coaching and a control group not offered coaching. Individuals who did not consent to participate in the study were not eligible to receive coaching, were not randomly assigned, and will not participate in the data collection efforts. The control group may receive other services within the program. Both groups will remain eligible for other services offered in the community. For example, the control group may receive regular case management from staff who have not been trained in coaching. With this design, the research groups are likely to have similar characteristics, so differences in outcomes too large to be attributable to chance can be attributed to the coaching intervention. Under 0970-0506, we are collecting information at baseline (before or during random assignment) from study participants and staff and again at about 6 to 12 months after random assignment.
Universe of Data Collection Efforts
This request seeks approval of non-substantive changes to two previously approved implementation study data collection instruments to systematically capture descriptive information related to COVID-19. It also requests a slight increase to the incentive amount for the additional in-depth participant interviews, and to the estimated burden based on conducting additional interviews. Lastly, this request seeks approval to change the structure and amount of the incentive offered to respondents in the MyGoals sites who complete first and second follow-up surveys as part of the impact evaluation, and to make minor revisions to the survey instruments and notifications to reflect the requested changes.
Previously approved data collection efforts are listed below. Additionally, OMB previously approved the study consent form (Attachment A).
Impact Study
Baseline data collection (Attachment B)
First follow-up survey (Attachment C)
Notifications (Attachment I)
Second follow-up survey (Attachment N)
As part of the impact study, we request approval to change the structure and amount of the incentive offered to MyGoals respondents completing the first and second follow-up survey to match what is offered to respondents at the other four sites—a $50 incentive for completing each survey, irrespective of whether the participants complete the survey within the four-week “early bird” period. We also request minor edits to the survey instruments (Attachment C. First follow-up survey_rev and Attachment N. Second follow-up survey_rev) and notifications (Attachment I. Notifications_rev) to reflect the changes. Additional information regarding the justification for this request is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.
Implementation Study
Semi-structured management, staff, and supervisor interviews (Attachment D)
Staff survey (Attachment E)
In-depth participant interviews (Attachment F)
Staff reports of program service receipt (Attachment G)
Instructions for video recording coaching sessions (Attachment M)
As part of the implementation study, we propose conducting additional follow-up interviews with program staff and with participants to learn about how the programs have changed and how participants’ program engagement and needs have changed as a result of the COVID-19 pandemic. The previously approved interview guides have been revised to reframe some questions and add COVID-19 related questions; questions we do not intend to ask again have been deleted in order to keep the interview length the same as the previous interviews (Attachment D. Semi-structured management staff and supervisor interviews_rev and Attachment F. In-depth participant interviews_rev). We request approval to conduct these additional interviews in five of the six sites participating in the evaluation. Work Success has not continued serving participants during COVID-19, so we are not requesting approval to conduct additional interviews at that site. The justification for these additional interviews is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.
A3. Improved Information Technology to Reduce Burden
This evaluation is using multiple applications of information technology to reduce burden. For example, the follow-up surveys are hosted on the Internet via a live secure web-link. To reduce burden, the surveys employ the following: (1) secure log-ins and passwords so that respondents can save and complete the survey in multiple sessions, (2) drop-down response categories so that respondents can quickly select from a list, (3) dynamic questions and automated skip patterns so that respondents only see those questions that apply to them (including those based on answers provided previously in the survey), and (4) logical rules for responses so that respondents’ answers are restricted to those intended by the question.
Respondents also have the option to complete the follow-up surveys using computer-assisted telephone interviewing (CATI). CATI reduces respondent burden, relative to interviewing via telephone without a computer, by automating skip logic and question adaptations and by eliminating delays caused when interviewers must determine the next question to ask. CATI is programmed to accept only valid responses based on preprogrammed checks for logical consistency across answers.
The additional interviews we propose conducting, to gather information related to COVID-19, will be conducted either by video or by phone, according to each respondent’s preference. The use of technology in this case is intended to both reduce burden on respondents and eliminate the need for in-person data collection due to restrictions related to COVID-19.
A4. Efforts to Identify Duplication
Information that is already available from alternative data sources will not be collected again for this evaluation. We will be collecting information related to employment and earnings both through administrative records and directly from study participants. This information is not duplicative because the two sources cover different types of employment. Information on quarterly earnings from jobs covered by unemployment insurance will be obtained from National Directory of New Hires (NDNH) administrative records. The follow-up surveys ask for earnings across all jobs, including those not covered by unemployment insurance. A number of experimental employment evaluations have found large differences in survey- and administrative-based earnings impacts (Barnow and Greenberg 2015). Therefore, collecting information from both sources is necessary for a full understanding of impacts on earnings. To further identify and avoid duplication, we do not request baseline characteristic information in the second follow-up survey for participants who already provided this information in the first follow-up survey.
The descriptive information that this non-substantive change request proposes to collect, regarding program responses to COVID-19 and study participants’ experiences with the pandemic, is not available from any other data source. In preparation for the additional management, staff, and supervisor interviews, the study team will review the notes from the previously conducted interviews; any materials collected from the program about its response to COVID-19; and public information about the state’s and county’s infection rates, responses, and guidance. Review of these materials will inform the conduct of the interview and ensure respondents are not asked to provide information available from these other sources.
A5. Involvement of Small Organizations
The data collection does not involve small businesses or other small entities.
A6. Consequences of Less Frequent Data Collection
A first follow-up survey is administered to participants approximately six to 12 months after random assignment. The second follow-up survey is administered to participants about 21 to 24 months after random assignment. The second follow-up survey collects a set of outcome data similar to the first. This will allow an examination of whether the impacts of the program changed over time and whether changes in self-regulation skills were associated with changes in employment and self-sufficiency outcomes.
The additional implementation study data collection for which this non-substantive change request seeks approval will be a one-time data collection.
A7. Special Circumstances
There are no special circumstances for the proposed data collection efforts.
A8. Federal Register Notice and Consultation
Federal Register Notice and Comments
In accordance with the Paperwork Reduction Act of 1995 (Pub. L. 104-13) and Office of Management and Budget (OMB) regulations at 5 CFR Part 1320 (60 FR 44978, August 29, 1995), ACF published notices in the Federal Register announcing the agency’s intention to request OMB reviews of information collection activities for this evaluation. Previous ICRs included information regarding these notices, and provided copies as attachments.
The experts listed below, from OPRE, Mathematica Policy Research, Abt Associates, and the University of Chicago, were consulted in developing the design, data collection plan, and materials for this evaluation.
OPRE
Hilary Bruck, Senior Social Science Research Analyst
Victoria Kabak, Social Science Research Analyst
Gabrielle Newell, Social Science Research Analyst
Mathematica Policy Research
Dr. Sheena McConnell, Project Director
Dr. Quinn Moore, Deputy Project Director
Dr. Michelle Derr, Principal Investigator
David DesRoches, Survey Director
Abt Associates
Dr. Alan Werner, Principal Investigator
Dr. Bethany Borland, Senior Analyst
University of Chicago
Dr. James Heckman, Measurement Expert
A9. Incentives for Respondents
In March 2018, the Office of Management and Budget’s Office of Information and Regulatory Affairs (OIRA) initially approved a two-tiered incentive structure with an “early bird” incentive that provides survey respondents $35 if they complete the survey within four weeks of the initial notification and $25 if they complete it after four weeks (OMB #0970-0506). We employed this incentive structure for participants in all six programs during the administration of both the first and second follow-up surveys until early spring 2020. In March 2020, OIRA approved a non-substantive change request proposing that the two-tiered incentive structure continue only among study participants in the two MyGoals sites in Baltimore and Houston, and that participants from the other four sites (FaDSS, LIFT, Jefferson County Colorado Works, and Work Success) be offered a $50 incentive for completing each survey, irrespective of whether the participants complete the survey within the four-week “early bird” period. We proposed this change because patterns of survey response for those four sites showed a risk that our analysis would result in biased estimates of program impacts and would underrepresent participants in key analytic groups.
We are now requesting that the incentive offered to MyGoals respondents completing the impact evaluation’s follow-up surveys be changed from the “early bird” incentive structure approved in March 2018 to a $50 incentive, to align with the other study sites. The response rates for the follow-up surveys are at risk of being much lower than anticipated because in-person locating of sample members has stopped due to COVID-19. We believe a higher survey incentive offered to MyGoals participants will help increase the response rates and decrease the program-control group response rate differential, to avoid bias in the estimates of the programs’ effectiveness. Additional information regarding the justification for the change in incentive amounts is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.
In a previous ICR, OIRA also approved offering respondents who participate in the in-depth interviews for the implementation study, which are estimated to take 2.5 hours on average, a $50 gift card. As part of this non-substantive change request, we are now proposing to offer participants a $60 gift card to complete interviews related to receiving coaching services during COVID-19, $10 more than was offered for the interviews conducted earlier. It will be more difficult to recruit study participants who are still actively engaged in the coaching programs for these interviews, because fewer people are still participating in the programs. We believe the increased incentive amount will help us recruit sufficient numbers of people to be interviewed. A $60 incentive was recently approved for in-depth interviews for the Next Generation of Enhanced Employment Strategies Project (OMB #0970-0545). Additional information regarding the justification for this request is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.
Background:
Estimates of program impacts may be biased if respondents differ substantially from non-respondents and those differences are correlated with assignment to the evaluation treatment or control groups. The risk of biased impact estimates increases with lower overall survey response rates or larger differences in survey response rates between the research groups (What Works Clearinghouse 2013). Thus, if low overall response rates or large differential response rates between the research groups are observed, differences between groups on key outcomes might be the result of differences in baseline characteristics among survey respondents and cannot be attributed solely to the effect of the coaching intervention (What Works Clearinghouse 2013).
Concerns about the potential for low overall response rates are particularly relevant to this study. The longitudinal nature of the study adds to the complexity of the second follow-up survey data collection. Additionally, the coaching interventions are designed for unemployed low-income people. A number of factors could complicate tracking such participants over time. These factors include:
Unstable housing.
Less use of mortgages, leases, public utility accounts, cell phone contracts, credit reports, memberships in professional associations, licenses for specialized jobs, activity on social media, and appearances in publications such as newspapers or blogs.
Use of an alias to get utility accounts because of poor credit and prior payment issues.
Use of pay-as-you-go cell phones. These phone numbers are generally not tracked in online databases. Pay-as-you-go cell phone users also switch numbers frequently, which makes contacting them across a follow-up period more difficult.
Differential response rates between the treatment and control groups could bias this study’s impact estimates. Participants assigned to the control group may be less motivated to participate than those assigned to the treatment group because they are not receiving the intervention. They may also feel that the surveys are not relevant to them.
Evidence supporting use of incentives:
Methodological research on incentives. Evidence from prior studies shows that incentives can decrease the differential response rate between the treatment and control groups, and therefore reduce nonresponse bias on impact estimates (Singer and Kulka 2002; Singer et al. 1999; Singer and Ye 2013). For example, incentives are useful in compensating for lack of motivation to participate among control group members (Shettle and Mooney 1999; Groves et al. 2000). Incentives have also been found to induce participation among sample members for whom the topic is less salient, including members of the control group (Baumgartner and Rathbun 1997), a finding that also applies with hard-to-reach populations, similar to the target population of the current study (Martinez-Ebers 1997). Other experimental research on incentives concludes that incentives significantly increase response rates, reduce the average number of contacts required to achieve completed surveys, and reduce overall survey data collection costs (Westra et al. 2015).
Research evidence from similar studies. Evidence from an incentive experiment conducted as part of the Self-Employment Training (SET) Demonstration, approved by OMB (OMB #1205-0505), suggests that incentives are a successful strategy for improving response rates for low-income populations. This experiment assessed the effectiveness of three incentive approaches: (1) offering a standard incentive of $25; (2) offering a two-tiered incentive, with an incentive of $50 if respondents completed an 18-month follow-up survey within the first four weeks and $25 if respondents completed the survey after four weeks; or (3) no incentive.
Results from the SET incentive experiment suggest that incentives substantially reduce both overall nonresponse rates and differential response rates between the research groups. Among sample members offered an incentive, this experiment resulted in a 73 percent overall response rate for those in the two-tiered incentive group and a 64 percent response rate for those in the standard incentive group. The response rate for sample members who were not offered an incentive was 37 percent. The differential response rate between research groups for sample members offered an incentive was 12 percentage points for the two-tiered incentive group (79 percent for the treatment group versus 67 percent in the control group) and 6 percentage points for the standard incentive group (67 percent for the treatment group versus 61 percent in the control group). The differential response rate was substantially higher for the no incentive group at 36 percentage points (55 percent for the treatment group versus 19 percent in the control group).
Based on evidence from SET, we anticipate that without incentives, the survey response rate would be unacceptably low; it is likely to be less than 50 percent. Such response rates would put the study at severe risk of biased impact estimates.
Evidence supporting use of two-tiered incentives for the follow-up survey:
In addition to determining whether the study requires use of incentives, we must determine the structure that the incentives will take. As described in Section A.9 above, in March 2020 OIRA approved our request to change the incentive structure and amount for the FaDSS, LIFT, Jefferson County Colorado Works, and Work Success sites (OMB #0970-0506). In this request, we now propose discontinuing the OMB-approved two-tiered incentive approach still in place for the two MyGoals sites, and offering respondents from those sites who complete the first or second follow-up survey a $50 gift card irrespective of whether they complete the survey within the four-week “early bird” period (consistent with the change approved for the other study sites in March 2020). The justification for this change is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.
Research evidence from similar studies. Two impact evaluations conducted incentive experiments that informed our proposed two-tiered incentive structure: SET and YouthBuild.
The results of the SET incentive experiment described above showed that relative to standard incentives, the two-tiered incentive led to somewhat higher overall response rates (73 versus 64 percent) but somewhat greater differential nonresponse rates between the research groups (12 versus 6 percentage points). Thus, findings related to response rate patterns do not strongly favor one incentive approach over the other.
However, the SET incentive experiment also concluded that two-tiered incentives led to shorter response times, lower average costs, and lower total fielding costs (including for the cost of the incentive payments). Specifically, the incentive experiment found that 98 percent of survey completes in the two-tiered incentive group came within four weeks of release, compared to 86 percent for the standard incentive group. Faster response time has implications for data quality because it ensures that the reference period for the one-year follow-up survey is as close to one year after study enrollment as possible. Faster response times also have important implications for data collection cost. In the SET incentive experiment, the average cost per complete was approximately 10 percent higher for the standard incentive group than for the two-tiered incentive group, despite the fact that the incentives offered under the two-tiered model were larger than those offered under the standard model.
Please note that the SET incentive experiment cannot disentangle which aspect of the two-tiered incentive structure (two tiers or a higher overall incentive amount) led to higher overall response rates, faster response times, and lower overall costs. Thus, we do not know what the response and cost patterns would have been with a two-tiered structure that used a lower initial incentive amount. The initial incentive amount approved for this study ($35 for response within the first four weeks) is lower than the one used in SET ($50 for response within the first four weeks). The final incentive amount is the same ($25 for response after four weeks). The current request of $50 for completion of both the first and second follow-up surveys at two sites aligns with the SET early response incentive, and reflects our experience that conducting these surveys among this population requires longer field periods to locate nonrespondents.
The YouthBuild evaluation (OMB #1205-0503), sponsored by the Department of Labor, also incorporated an incentive experiment. This experiment assessed the effectiveness of two incentive approaches: (1) offering a standard incentive of $25; or (2) offering a two-tiered incentive, with an incentive of $40 if respondents completed a 12-month follow-up survey within the first four weeks and $25 if respondents completed the survey after four weeks.
Results from the YouthBuild incentive experiment are consistent with those of the SET incentive experiment in terms of effects on response rate, response time and cost. The tiered incentive structure slightly increased the overall response rate; sample members in the two-tiered incentive group had an overall response rate of 72 percent compared to 68 percent for the standard incentive group. We do not have data from the YouthBuild incentive experiment on the effect of incentive structure on differential response rates between the research groups.
YouthBuild sample members in the two-tiered incentive group were 38 percent more likely to respond to the survey within four weeks than those assigned to receive a standard incentive. As a result, sample members in the two-tiered incentive group were less likely to be subject to more labor-intensive and costly data collection efforts, such as contacts from telephone interviewers, extensive in-house locating, or ultimately field locating. Results from the YouthBuild incentive experiment indicate that final data collection cost estimates were approximately 17 percent lower with two-tiered incentives than with standard incentives, despite the fact that the incentives offered under the two-tiered model were larger than those offered under the standard model. As with the SET incentive experiment, we cannot disentangle which aspect of the two-tiered incentive structure (incentive value or incentive structure) is responsible for the reported effects of the incentive.
The two-tiered incentive structure for the first and second follow-up surveys was approved and implemented across all sites until early spring 2020. In March 2020, OIRA approved our request to change this structure and amount for the FaDSS, LIFT, Jefferson County Colorado Works, and Work Success sites because low survey response rates put our analysis at risk of producing biased estimates of program impacts and underrepresenting participants in key analytic groups. We are now proposing to offer sample members from the MyGoals sites the same $50 incentive for completing each follow-up survey. The justification for requesting this change is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020, including details of the early results of the change to the $50 incentive for the four sites approved in March 2020.
Table A.1 below presents findings from the incentive experiments described above.
Table A.1 Incentive type and response rates obtained in similar studies with incentive experiments
Study | Instrument | Duration (minutes) | Response Rate
Self-Employment Training Demonstration, incentive experiment sample (OMB control #1205-0505) | 18-month follow-up | 20 | Two-tiered incentive ($50 first four weeks, $25 after four weeks): 73 percent; standard incentive ($25): 64 percent; no incentive: 37 percent
YouthBuild, incentive experiment sample (OMB control #1205-0503) | 12-month follow-up | 60 | Two-tiered incentive ($40 first four weeks, $25 after four weeks): 72 percent; standard incentive ($25): 68 percent
Note: Response rates separate by research group are not available for the YouthBuild incentive experiment.
Incentive for the in-depth interview:
In a previous ICR, OIRA approved giving respondents who participate in the in-depth interviews, which are estimated to take 2.5 hours on average, a $50 gift card. This incentive was modeled on another ACF study entitled Parents and Children Together (PACT). Respondents (who were low-income couples and fathers) received a $60 gift card for an in-depth interview (OMB #0970-0403). The PACT study observed overall response rates of 88 and 72 percent for their healthy marriage and responsible fatherhood programs, respectively. As with the current study, PACT targeted low-income populations; thus respondents had similar demands on their time and constraints as the target population in this study. Incentives can make it easier for respondents to participate in the in-depth interviews by helping offset costs of transportation, child care, and cell phone data and minute plans. The first round of in-depth interviews took place in person during scheduled visits to the coaching programs. Because the timing of the in-depth interviews cannot vary, a two-tiered structure was not considered for this incentive.
This non-substantive change request seeks approval to conduct a second round of in-depth participant interviews, to collect information regarding study participants’ experience with the COVID-19 pandemic. These interviews will take place by phone or video, depending on participant preference. As described in Section A.9, we propose to offer participants a $60 gift card to complete these interviews—$10 more than what was offered for the first-round interviews. It will be more difficult to recruit study participants who are still actively engaged in the coaching programs for these interviews, because fewer people are still participating in the programs. We believe the increased incentive amount will help us recruit sufficient numbers of people to be interviewed. As noted above, a $60 incentive was approved for the PACT study’s in-depth interviews (OMB #0970-0403); a $60 incentive was also recently approved for in-depth interviews for the Next Generation of Enhanced Employment Strategies Project (OMB #0970-0545). Additional information regarding the justification for the change in incentive amount is included in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.
Response rates for similar studies:
Table A.2 presents the type of data collection, incentive offered, and response rates obtained for similar studies cited in this section. Table A.2 includes information on the SET and YouthBuild studies. The information on these studies in Table A.1, discussed above, relates to results from the incentive experiments, which were conducted on early cohorts of the sample released for data collection. Based on the results of these experiments, the SET and YouthBuild studies both implemented two-tiered incentives study wide. Table A.2 presents results for the full data collection, covering the periods both before and after the conclusion of the incentive experiments.
Table A.2 Incentives and response rates obtained in similar studies
Study | Instrument | Duration (minutes) | Incentive Amount | Response Rate
Self-Employment Training Demonstration, full sample (OMB control #1205-0505) | 18-month follow-up | 20 | $50 first four weeks; $25 after four weeks | 80 percent overall; 83 percent treatment; 78 percent control
YouthBuild, full sample (OMB control #1205-0503) | 12-month follow-up | 60 | $40 first four weeks; $25 after four weeks | 81 percent overall; 82 percent treatment; 79 percent control
Parents and Children Together (OMB control #0970-0403) | In-depth interview of treatment group members | 120 | $60 | 88 percent healthy marriage overall; 72 percent responsible fatherhood overall
Note: Treatment and control groups in this table refer to the overall evaluation (that is, the original conditions to which sample members were assigned upon enrollment) and not the incentive experiment. The SET and YouthBuild samples include the survey sample, including the time before and after the conclusion of the incentive experiments described in Table A.1.
A10. Privacy of Respondents
Information collected will be kept private to the extent permitted by law. As part of the consent process (Attachment A), respondents were informed of all planned uses of data, that their participation is voluntary, and that their information will be kept private to the extent permitted by law. Due to the sensitive nature of this research (see A11 for more information), the evaluation obtained a Certificate of Confidentiality. The Certificate of Confidentiality helps assure participants that their information will be kept private to the fullest extent permitted by law.
As specified in the contract, Mathematica and Abt will protect respondent privacy to the extent permitted by law and will comply with all Federal and departmental regulations for private information. Mathematica has developed a Data Safety and Monitoring Plan that assesses all protections of respondents’ personally identifiable information (PII). Mathematica and Abt will ensure that all of their employees, subcontractors (at all tiers), and employees of each subcontractor who perform work under this contract/subcontract are trained on data privacy issues and comply with the above requirements. All study staff with access to PII will receive study-specific training on (1) limitations on disclosure; (2) safeguarding the physical work environment; and (3) storing, transmitting, and destroying data securely. These procedures will be documented in training manuals. Refresher training will occur annually.
As specified in the evaluator’s contract, Mathematica and Abt will use Federal Information Processing Standard compliant encryption (Security Requirements for Cryptographic Module, as amended) to protect all instances of sensitive information during storage and transmission. Mathematica and Abt will securely generate and manage encryption keys to prevent unauthorized decryption of information, in accordance with the Federal Information Processing Standard. Mathematica and Abt will ensure that they incorporate this standard into their property management/control system, and establish a procedure to account for all laptop computers, desktop computers, and other mobile devices and portable media that store or process sensitive information. Any data stored electronically will be secured in accordance with the most current National Institute of Standards and Technology requirements and other applicable Federal and departmental regulations.
Information will not be maintained in a paper or electronic system from which it is actually or directly retrieved by an individual’s personal identifier.
A11. Sensitive Questions
Some sensitive questions are necessary in an evaluation of programs designed to affect employment. Before starting the baseline and follow-up surveys and the in-depth interviews, all respondents have been or will be informed that their identities will be kept private and that they do not have to answer any question that makes them uncomfortable. Although such questions may be sensitive for many respondents, they have been successfully asked of similar respondents in other data collection efforts, such as the follow-up surveys and first round of management, staff, and supervisor interviews and participant in-depth interviews already conducted for the Evaluation of Employment Coaching for TANF and Related Populations (OMB #0970-0506), Parents and Children Together (OMB #0970-0403), and the Workforce Investment Act Gold Standard Evaluation (OMB #1205-0504).
Specific to this non-substantive change, some sensitive questions are necessary in order to examine the effects of COVID-19. The sensitive questions relevant for this non-substantive change request include:
Wage rates and earnings. It is necessary to ask about earnings because increasing participants’ earnings is a key goal of coaching interventions. The second follow-up survey asks about each job worked since random assignment, the wage rate, and the number of hours worked per week.
Challenges to employment. It is important to ask about challenges to employment due to COVID-19, such as whether the respondent lost a job or was furloughed, or has concerns about returning to work, to understand how programs might be supporting participants and their potentially unmet needs.
Convictions. Prior involvement in the criminal justice system makes it harder to find employment. Hence, it is important to ask about convictions that occurred before random assignment as baseline information (if participants did not already provide this information in the first follow-up) and convictions that occurred after random assignment or since the first follow-up survey as an outcome that may be affected by coaching.
Economic hardships. The follow-up survey asks about economic hardships, such as missing meals or needing to borrow money from friends. These outcomes reflect a lack of self-sufficiency and may be affected by coaching. The revised in-depth participant interview guide includes questions about economic hardships due to COVID-19, such as financial stress and receipt of public benefits. It is important to ask about these issues to understand program participants’ potentially unmet needs and how programs might address them.
Health. The revised in-depth participant interview guide includes questions about challenges related to personal and family well-being, stress, and mental and physical health. Health factors—both related and unrelated to COVID-19—could play a major role in participants’ engagement in the coaching program and ability to obtain or maintain employment. The revised management, staff, and supervisor interview guide includes questions about challenges staff have personally faced implementing coaching during the pandemic, such as those related to stress, wellness, and health. It is important to ask about these issues to understand how such challenges may have influenced the implementation of the program or delivery of services to participants.
A12. Estimation of Information Collection Burden
Previously Approved Information Collections
Total Burden Previously Approved
As of the last change to 0970-0506, which was a non-substantive change request approved by OIRA in March 2020, 6,910 annual burden hours were approved. This includes burden for data collection at six sites and covers the following information collections:
Baseline data collection (Attachment B)
First follow-up survey (Attachment C)
Semi-structured management, staff, and supervisor interviews (Attachment D)
Staff survey (Attachment E)
In-depth participant interviews (Attachment F)
Staff reports of program service receipt (Attachment G)
Video recordings of coaching sessions (Attachment M)
Second follow-up survey (Attachment N)
Table A.3 presents the annual burden hours remaining from the previously approved information collections at the time of this request, updated to reflect the additional interviews proposed here. Under the current non-substantive change request, we propose to interview 37 staff respondents and 10 study participants about the effects of COVID-19. Because burden remains under the previously approved associated instruments, conducting these interviews requires adding burden for only eight additional staff respondents and five additional participant respondents (9.5 additional annual burden hours). Additional information is provided in Attachment P. Nonsub change request_Coaching Evaluation_Oct 2020.
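For reference, the 9.5 additional annual hours are consistent with the per-response burdens of the two interview instruments, assuming the eight staff and five participant respondents are annualized over the three-year clearance period (about three and two respondents per year, respectively):
    3 staff respondents per year × 1.5 hours per interview = 4.5 annual hours
    2 participant respondents per year × 2.5 hours per interview = 5.0 annual hours
    Total additional annual burden = 9.5 annual hours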
Table A.3 Burden remaining from previously approved information collections
Instrument | Total number of respondents remaining | Annual number of respondents remaining | Number of responses per respondent | Average burden hours per response | Annual burden hours | Average hourly wage | Total annual cost
Baseline data collection – study participants | 5,931 | 1,977 | 1 | 0.33 | 653 | $7.25 | $4,734.25
Baseline data collection – staff | 60 | 20 | 99 | 0.33 | 653 | $33.38 | $21,797.14
First follow-up survey | 4,544 | 1,515 | 1 | 0.75 | 1,136 | $7.25 | $8,236
Semi-structured management, staff, and supervisor interviews | 140 | 47 | 1 | 1.5 | 70.5 | $33.38 | $2,353.29
Staff survey | 96 | 32 | 1 | 0.75 | 24 | $33.38 | $801.12
In-depth participant interviews | 53 | 18 | 1 | 2.5 | 45 | $7.25 | $326.25
Staff reports of program service receipt | 60 | 20 | 5,200 | 0.03 | 3,120 | $33.38 | $104,145.60
Video recordings of coaching sessions | 54 | 18 | 10 | 0.1 | 18 | $33.38 | $600.84
Second follow-up survey | 4,800 | 1,600 | 1 | 0.75 | 1,200 | $7.25 | $8,700
Estimated annual burden total | | | | | 6,919.5 | | $151,694.49
The total annual burden, including the previously approved and remaining hours (6,910) and the 9.5 hours added by this request, is 6,919.5 annual hours.
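For reference, each row of Table A.3 follows the standard burden arithmetic: annual burden hours equal the annual number of respondents times the number of responses per respondent times the average burden hours per response, and total annual cost equals annual burden hours times the average hourly wage. The sketch below is illustrative only and not part of the approved submission; it reproduces two rows of the table, and results for some other rows may differ slightly because of rounding in the originally approved estimates.

```python
# Illustrative check of the burden arithmetic in Table A.3 (not part of the
# approved submission). Inputs are taken directly from the table.
rows = [
    # (instrument, annual respondents, responses per respondent,
    #  avg. burden hours per response, avg. hourly wage)
    ("Second follow-up survey", 1_600, 1, 0.75, 7.25),
    ("Staff reports of program service receipt", 20, 5_200, 0.03, 33.38),
]

for name, respondents, responses, hours_per_response, wage in rows:
    annual_hours = respondents * responses * hours_per_response
    annual_cost = annual_hours * wage
    print(f"{name}: {annual_hours:,.1f} annual hours, ${annual_cost:,.2f}")

# Second follow-up survey: 1,200.0 annual hours, $8,700.00
# Staff reports of program service receipt: 3,120.0 annual hours, $104,145.60
```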
A13. Cost Burden to Respondents or Record Keepers
There are no additional costs to respondents or record keepers.
A14. Estimate of Cost to the Federal Government
The total cost to the federal government for the COVID-19-related implementation data collection under this request will be $60,700. Annual costs to the federal government will be $20,233.
A15. Change in Burden
This non-substantive change request increases the annual burden hours associated with the semi-structured management, staff, and supervisor interviews by 4.5 hours, and the annual burden hours associated with the in-depth participant interviews by 5 hours. This increased burden is reflected in Table A.3.
A16. Plan and Time Schedule for Information Collection, Tabulation and Publication
Plans for Tabulation
The impact analysis will estimate the effectiveness of each coaching intervention in the evaluation. The goal of the impact analysis is to compare observed outcomes for study participants who were offered the coaching intervention with outcomes for members of a control group who were not offered coaching. We will use the experience of the control group as a measure of what would have happened to the treatment group participants in the absence of the intervention. Random assignment makes it likely that the two groups of study participants do not initially differ in any systematic way on any characteristic. Any observed differences in outcomes between the treatment and control group members can therefore be attributed to the intervention.
We will use the baseline data collected under 0970-0506 to describe the study participants in each coaching intervention. We will use t-tests to assess whether random assignment successfully generated treatment and control groups with similar baseline characteristics and whether survey respondents in the two groups are similar.
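As an illustration only, the baseline-equivalence check could be implemented along the lines of the sketch below, which assumes the baseline data are stored in a pandas DataFrame with a 0/1 treatment indicator; the column names are hypothetical placeholders rather than the actual baseline instrument items.

```python
# Illustrative sketch of the baseline-equivalence t-tests described above.
# Assumes a pandas DataFrame with a 0/1 "treatment" column; characteristic
# names are hypothetical placeholders.
import pandas as pd
from scipy import stats

def baseline_equivalence(df: pd.DataFrame, characteristics: list[str]) -> pd.DataFrame:
    """Compare treatment and control means at baseline with two-sample t-tests."""
    treat = df[df["treatment"] == 1]
    control = df[df["treatment"] == 0]
    rows = []
    for col in characteristics:
        t_stat, p_value = stats.ttest_ind(
            treat[col].dropna(), control[col].dropna(), equal_var=False
        )
        rows.append({
            "characteristic": col,
            "treatment_mean": treat[col].mean(),
            "control_mean": control[col].mean(),
            "p_value": p_value,
        })
    return pd.DataFrame(rows)

# Example call with hypothetical baseline characteristics:
# baseline_equivalence(baseline_df, ["age", "female", "employed_at_baseline"])
```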
Differences in means or proportions of follow-up outcomes between the treatment and control group will provide unbiased estimates of the impacts of the intervention. More precise estimates will be obtained using regression models to control for random differences in the baseline characteristics of treatment and control group members. In their simplest form, these models can be expressed by the following equation: y_i = α + X_i′β + δT_i + ε_i, where y_i is an outcome for person i (such as earnings); α is a constant; X_i is a vector of baseline characteristics (such as gender, age, and race/ethnicity); β is a vector representing the relationship between each baseline characteristic and the outcome; T_i is an indicator for whether person i received treatment; and ε_i is an error term. δ represents the estimated impact of the intervention. We will estimate these models separately for each coaching intervention.
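As a minimal sketch only, and assuming the analysis file is a pandas DataFrame with hypothetical column names, the impact model above could be estimated by ordinary least squares as follows; the actual specification will follow the analysis plan for each coaching intervention.

```python
# Minimal sketch of the impact regression y_i = α + X_i′β + δT_i + ε_i
# described above. Column names ("earnings", "female", "age",
# "race_ethnicity", "treatment") are hypothetical placeholders.
import statsmodels.formula.api as smf

def estimate_impact(df):
    """Return the estimated impact (δ) and its standard error."""
    model = smf.ols(
        "earnings ~ female + age + C(race_ethnicity) + treatment", data=df
    )
    fit = model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    return fit.params["treatment"], fit.bse["treatment"]
```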
If the sample is large enough, we will conduct a subgroup analysis to examine who benefits most from the intervention. We will estimate subgroup effects using the following equation: y_i = α + X_i′β + δT_i + γS_i + λ(T_i × S_i) + ε_i, where S_i is an indicator for whether person i is part of a subgroup; γ represents the relationship between subgroup status and the outcome; and λ represents the additional effect of treatment for those in the subgroup. We will consider subgroups that are appropriate for the intervention’s target population, such as those defined by work readiness, employment challenges, or TANF history.
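The corresponding estimation sketch below is again illustrative only; the subgroup flag ("work_ready") is a hypothetical placeholder for whichever subgroup definitions prove appropriate for a given intervention.

```python
# Illustrative sketch of the subgroup model above. The coefficient on the
# treatment-by-subgroup interaction (λ) is the additional effect of treatment
# for subgroup members; "work_ready" is a hypothetical subgroup indicator.
import statsmodels.formula.api as smf

def estimate_subgroup_effect(df):
    """Return the estimated interaction effect (λ) and its standard error."""
    model = smf.ols(
        "earnings ~ female + age + treatment + work_ready + treatment:work_ready",
        data=df,
    )
    fit = model.fit(cov_type="HC1")
    return fit.params["treatment:work_ready"], fit.bse["treatment:work_ready"]
```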
Time Schedule and Publication
Study enrollment and baseline data collection began in summer 2018 under a previous ICR approved by OMB (OMB #0970-0506). Over the duration of the evaluation, a series of reports will be generated, the timing for which is highlighted in Table A.5. Two reports will be produced on the impact findings, based on the first and second follow-up surveys, respectively. Reports on the implementation study include a detailed report describing each program and a report examining the implementation findings across all six programs (a cross-site implementation study report). In addition to these reports, this evaluation may provide opportunities for analyzing and disseminating additional information through special topics reports and research or issue briefs. We will also provide a public or restricted-use data file for others to replicate and extend our analyses. Findings from this non-substantive change request to systematically capture descriptive information related to COVID-19 will be incorporated into the evaluation’s implementation reports and impact analyses and reporting.
Table A.5. Study schedule
Activity | Timing*
Data collection |
Sample enrollment and baseline data collection | Spring 2018 through Fall 2019 for FaDSS; Summer 2018 through Fall 2019 for LIFT and Goal4 It!; Spring 2019 through Spring 2020 for Work Success; Not applicable for the two MyGoals sites
Implementation study data collection | Summer 2018 through Fall 2020
First follow-up survey | Spring 2018 through Spring 2021
Second follow-up survey | Spring 2019 through Summer 2022
Reporting |
Implementation study reports | Fall 2020
First follow-up findings report | Fall 2021
Second follow-up findings report | Fall 2022
Special topics reports | To be determined
*All dates dependent on date of OMB approval of this information collection request.
A17. Reasons Not to Display OMB Expiration Date
All instruments will display the expiration date for OMB approval.
A18. Exceptions to Certification for Paperwork Reduction Act Submissions
No exceptions are necessary for this information collection.