Assessing Options to Evaluate Long-Term Outcomes Using Administrative Data: Identifying Targets of Opportunity
OMB Information Collection Request
0970 - 0356
Supporting Statement
Part B
January 2018
(Updated July 2018)
Submitted By:
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officer:
Brett Brown
The proposed data collection effort will fill gaps in information about employment and youth/child development evaluations and help assess the feasibility of matching those evaluations to administrative data sources for analyses of long-term outcomes.
B1. Respondent Universe and Sampling Methods
The study team is currently narrowing the full list of evaluations down to a subset of 16 to 25 "major evaluations."1 Once the major evaluations are identified, the study team will begin collecting the information outlined in the evaluation template for each evaluation. The study team will first enter any publicly available information into the template and then reach out to the original evaluation research teams and other relevant individuals to fill in any remaining information. This is the only data collection activity being requested under this generic clearance.
The universe of respondents who will be contacted to help complete the evaluation templates includes:
Project directors, principal investigators, and other members of the original evaluation research teams of the major evaluations selected.
Data archiving staff and evaluation contractors familiar with the major evaluations selected.
We do not plan to select a sample from this set of respondents; instead, we will reach out to one key informant per evaluation.
B2. Procedures for Collection of Information
To gather information for the evaluation template, a member of the study team will reach out by email to a member of the evaluation research team or another individual who may have firsthand knowledge of the major evaluation. As a starting point, we plan to contact one key informant for each evaluation. We anticipate that these key informants may put us in touch with other individuals who are more familiar with the evaluation or with parts of it.
The study team will use an email script that explains the project and its goals, and what specifically we are asking individuals to help us with (see Appendix B). A copy of the evaluation template (see Appendix A) will also be included in the email.
We will make the data collection process as streamlined and easy to comply with as possible. Before the templates are distributed, we will pre-populate them with information we have already gathered so that respondents need only review and edit those entries.
B3. Methods to Maximize Response Rates and Deal with Nonresponse
Expected Response Rates
Based on similar data collection efforts (such as a recent survey of staff at community-based organizations in Chicago conducted as part of the New Communities Project), we anticipate a response rate of approximately 80 percent. Our first round of data collection yielded a response rate of 81.2 percent.
Dealing with Nonresponse
Because this study is purely descriptive, nonresponse bias is a less serious concern than it would be in a causal inference study. Still, we want to minimize nonresponse so that top-tier studies are not ruled out of longer-term follow-up. We will therefore use several strategies to increase response rates (described below).
Maximizing Response Rates
We plan to use several strategies to maximize response rates:
If individuals do not respond to the initial emails, we will follow up with them by phone.
ACF staff who are involved with the project will help the study team identify individuals to contact for evaluations that were funded by ACF. The study team will draw on ACF's existing relationships with these individuals when reaching out.
The study team also has numerous relationships with individuals in the federal government and at other research organizations. We will ask our current contacts to help identify, and put us in touch with, other individuals who may be familiar with the major evaluations.
The study team will take advantage of conferences, such as those of the Association for Public Policy Analysis and Management, the American Public Human Services Association, and the National Association for Welfare Research and Statistics, to speak with contractors and project directors.
B4. Tests of Procedures or Methods to be Undertaken
The evaluation template was pre-tested with fewer than 10 people and refined based on the results. We do not think further pre-testing is feasible in the context of this study, given the small sample size and the limited timeframe for data collection.
B5. Individual(s) Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
The following people from the study team will be responsible for collecting and/or analyzing data:
Dr. Richard Hendra (212) 340-8623
Ms. Alexandra Pennington (212) 340-8847
Ms. Kelsey Schaberg (212) 340-7581
1 Target evaluations for data collection are studies that were completed in 1995 or later (including ongoing studies) and that have a compelling theoretical or empirical basis for a long-term follow-up analysis. These evaluations are being selected on the basis of the following key factors: study design, strength of evidence, study quality, and treatment contrast.