SUPPORTING STATEMENT FOR THE PAPERWORK REDUCTION ACT OF 1995
SUBMISSION FOR DATA COLLECTION FOR THE EVALUATION OF
THE YOUNG PARENTS DEMONSTRATION
PART B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
1. Respondent Universe and Sampling Methods
Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.
We will attempt to gather data on the entire universe of the target population in this study, with the goal of generalizing to this population. We will therefore follow rigorous procedures to obtain high response rates and will assess potential nonresponse bias in the achieved sample, to help determine how well the universe is represented and how reliable the results from the achieved sample are with respect to the universe. The universe for the implementation site visits is the four organizations awarded grants under the Young Parents Demonstration Program (YPDP). The universe for the participant tracking system (PTS) is the 1,633 applicants eligible for YPDP services at those grantees. The universe for the telephone follow-up surveys is the same 1,633 individuals entered into the PTS. No sampling methods are being used. The universe for the final analysis is the estimated 1,306 respondents to the follow-up surveys.
| | Number in Potential Universe | Estimated Number in Final Universe | Percent |
| Grantees | 4 | 4 | 100% |
| Eligible Applicants | 1,633 | 1,633 | 100% |
| PTS -- Randomly Assigned | 1,633 | 1,633 | 100% |
| 18- and 36-Month Follow-Up Surveys | 1,633 | 1,306 | 80% |
2. Describe the procedures for the collection of information including:
Statistical methodology for stratification and sample selection,
Estimation procedure,
Degree of accuracy needed for the purpose described in the justification,
Unusual problems requiring specialized sampling procedures, and
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
a. Statistical methodology for stratification and sample selection
No sampling or stratification is being used in any of the three data collection activities that are the focus of this submission.
b. Estimation procedures
Because the evaluation includes the universe of grantees (i.e., during site visits) and survey respondents, estimation procedures will be straightforward regressions and t-tests of the difference in mean outcomes between the treatment and control groups. The primary statistical approach that will be used for the impact analysis is regression analysis.1 Regression analysis allows us to estimate the impact of the treatment (i.e., mentoring services provided by YPD grantees) on the outcomes of interest while holding constant all other relevant observed variables. In mathematical terms, regression analysis permits us to estimate the equation
Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \beta_3 X_{3i} + \cdots + \beta_z Z_i + \varepsilon_i
where Y_i is the outcome of interest for person i (for example, post-program employment or earnings); the X variables represent personal characteristics thought to potentially influence the outcome, such as gender, race/ethnicity, and education; the β terms (the regression coefficients) indicate the effect that the explanatory variables have on the outcome; and ε_i is a random error term with mean zero for observation i. The variable of primary interest is Z, a dichotomous variable set equal to 1 for those who participate in the program (i.e., treatment group members) and 0 for others (i.e., control group members).
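To illustrate the estimation approach, the following is a minimal sketch in Python using the statsmodels package. The DataFrame and all variable names (earnings_18mo, treatment, female, age, educ_years) are hypothetical placeholders, not the actual analysis file:

```python
# Minimal sketch of the impact regression described above.
# Assumes a pandas DataFrame with hypothetical column names.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_impact(df: pd.DataFrame):
    """Regress the outcome on the treatment indicator plus covariates."""
    model = smf.ols(
        "earnings_18mo ~ treatment + female + age + educ_years", data=df
    )
    results = model.fit(cov_type="HC1")  # heteroskedasticity-robust SEs
    # The coefficient on `treatment` is the estimated impact (the beta on Z);
    # its significance test corresponds to the regression-adjusted
    # treatment-control difference in mean outcomes.
    return results.params["treatment"], results.pvalues["treatment"]
```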
The main focus of the analysis will be on determining net impacts of the treatment on the employment and earnings of YPD participants at the individual site level and, if appropriate, for the pooled sample across the four YPD sites. The treatment provided under the experimental design (i.e., mentoring) is incremental; that is, it is an additional service provided on top of the existing employment and training services that both the treatment and control groups receive under the demonstration.
In addition to the employment and earnings outcomes that are the central focus of the impact study, additional outcomes collected through the survey and the PTS will be explored (e.g., education outcomes and parenting outcomes). To the extent possible, exploratory analyses of these additional outcome variables will be conducted using the methods described above, though employment and earnings will remain the key outcomes of interest. To the extent feasible, subgroup analyses will be conducted using participant characteristics collected at the time of random assignment. The ability to conduct such subgroup analyses (for example, estimating net impacts of the intervention on earnings for females versus males) will depend on the ability to pool the sample across the four sites; with sample sizes of roughly 400 at individual sites, we do not anticipate being able to detect statistically significant subgroup effects at the individual site level, particularly given the incremental nature of the intervention.
c. Degree of accuracy needed for the purpose described in the justification
Although inferences will be made to the respondent population only, it is useful to consider the statistical power of the estimates as if they were based on sampling. The concept of the minimum detectable effect (MDE) is a practical way to summarize the statistical power of a particular evaluation design.2 Orr (1999) describes the MDE as "the smallest true impact that would be found to be statistically significantly different from zero at a specified level of significance with specified power." For a binary outcome, such as employment in the post-program period, the formula for the MDE is:
\mathrm{MDE} = Z\,[\pi(1-\pi)]^{0.5}\,\left[\frac{1-R^2}{nP(1-P)}\right]^{0.5}, where
Z = a multiplier that converts the standard error of an impact estimator to its corresponding minimum detectable effect,
\pi = the proportion of the study population with a successful outcome,
R^2 = the explanatory power of the impact regression,
P = the proportion of sample members randomly assigned to the program group, and
n = the total number of sample members.
The formula is similar for a continuous outcome such as earnings:
\mathrm{MDE} = Z\sigma\left[\frac{1-R^2}{nP(1-P)}\right]^{0.5}, where \sigma is the standard deviation of the outcome (e.g., earnings).
For the YPDP evaluation, if we pool all four sites, we assume that we will have 1,306 observations (80% of 1,633 total possible observations), or 320 observations (80% of 400 total possible) for individual sites.3 Additionally, we assume that we want to calculate the minimum detectable effect for a two-sided test with 80 percent power and a .05 significance level. Further, we compute the MDE for earnings, a continuous variable, and employment, a dichotomous variable. We assume a standard deviation for earnings of $4,899 based on data from the National Job Corps evaluation. For employment, we conservatively estimate that the mean outcome is .50. For earnings, we further assume that the R2 for the regression of earnings on individual characteristics is .20, which is consistent with the estimates from earnings regressions from the National Job Corps study. Finally, we assume that 50 percent of the sample is assigned to the treatment group. The table below shows the MDE under these assumptions:
| | Sample Size (adjusted for non-response) | Earnings MDE | Employment MDE |
| Per Site | 320 | $1,372 | .16 |
| Pooled Sites | 1,306 | $679 | .08 |
For the pooled analyses, the minimum detectable effects are small enough to provide statistically significant impact estimates if the programs have a reasonable impact on earnings and employment.
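As a cross-check, the table values can be reproduced directly from the formulas above. This sketch assumes the standard multiplier Z ≈ 2.80 for a two-sided test at the .05 significance level with 80 percent power and, consistent with the table values, applies the R² of .20 to earnings only (no regression adjustment, R² = 0, is assumed for employment):

```python
import math

Z = 2.80  # two-sided test, alpha = .05, power = .80

def mde_continuous(sigma, n, p=0.5, r2=0.0):
    """MDE = Z * sigma * [(1 - R^2) / (n * P * (1 - P))]^0.5"""
    return Z * sigma * math.sqrt((1 - r2) / (n * p * (1 - p)))

def mde_binary(pi, n, p=0.5, r2=0.0):
    """MDE = Z * [pi * (1 - pi)]^0.5 * [(1 - R^2) / (n * P * (1 - P))]^0.5"""
    return Z * math.sqrt(pi * (1 - pi)) * math.sqrt((1 - r2) / (n * p * (1 - p)))

for label, n in [("Per site", 320), ("Pooled sites", 1306)]:
    earnings = mde_continuous(sigma=4899, n=n, r2=0.20)
    employment = mde_binary(pi=0.50, n=n)
    print(f"{label}: earnings MDE = ${earnings:,.0f}, "
          f"employment MDE = {employment:.2f}")
# Per site: earnings MDE = $1,372, employment MDE = 0.16
# Pooled sites: earnings MDE = $679, employment MDE = 0.08
```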
d. Unusual problems requiring specialized sampling procedures
There are no unusual problems requiring specialized sampling procedures. The three data collection efforts will collect data on the universe of grantees (i.e., site visits) or YPDP participants (i.e., PTS and follow-up surveys).
e. Any use of periodic (less frequent than annual) data collection cycles to reduce burden
The implementation/process study site visits will occur only once, approximately one year after each YPDP project begins random assignment. Staff and administrators in each of the four grantee programs will enter data into the PTS beginning at the start of random assignment and continuing for approximately 18 months. All YPDP sites will enter the initial information needed to conduct the random assignment, and grantees will use the PTS for ongoing program data entry to record service receipt and employment outcomes at 6, 12, and 18 months after random assignment. One purpose of the PTS is to reduce the burden of executing the grant provisions, including random assignment and service documentation, by providing grantees with a Web-based electronic participant information system, which is particularly valuable for those without a pre-existing management information system. The 18-month and 36-month telephone follow-up surveys will each be administered only once.
3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.
All four YPDP sites must agree to participate fully in the evaluation; this agreement ensures a 100-percent response rate for the field-based implementation/process interviews during the site visits. Likewise, all grantees have agreed to enter participant data into the PTS to facilitate random assignment of all YPDP participants. Care has been taken, with the approach approved by the Urban Institute IRB, to explain the study accurately and simply to potential participants, which should minimize refusal rates and maximize voluntary participation in the YPDP. Those who refuse, however, will not be included in the study universe.
Techniques to Maximize Response Rate to the Follow-up Survey. To achieve a high response rate on the follow-up telephone survey, the evaluation contractor anticipates needing multiple modes to contact respondents: telephone, mail, and in-person. In addition, participants will receive an incentive payment of $15 if they complete the survey. The contractor has developed a strategy that will maximize telephone interviews, minimize field data collection costs, and avoid issues that can arise with mixed-mode data collection. An overview of the procedures to be followed in conducting the follow-up survey is provided below.
Collect contact information as part of the enrollment process for each YPDP participant. At the time of intake into YPDP, participants will be asked to provide contact information (i.e., telephone, e-mail, and address) for themselves, as well as three additional contacts. This information will be entered into the PTS. YPDP grantee staff will be able to update this contact information at any time.
Mail an advance pre-notification letter to the updated or last known address for the study subjects. The advance pre-notification letter for the follow-up survey will remind the individual about the YPDP study and provide background information about the survey effort. This letter will also make YPDP participants aware of the $15 incentive payment for completing the survey. Envelopes will be marked "Address Service Requested," which ensures that mail is forwarded to an updated address while notifying the sender, Abt Associates, Inc., of that address. When letters are returned as undeliverable, Abt Associates staff will contact YPDP grantees to determine whether there has been a recent change of address that has not been recorded in the PTS and/or to obtain additional guidance from grantee staff on the best way to contact the individual.
Commence up to 15 calls per sample individual and conduct telephone interviews using CATI. Survey staff will initiate telephone calls to YPDP participants during the week of the 18-month and 36-month anniversaries of each participant's random assignment. Both treatment and control group members will be surveyed by telephone. Using the most recent contact information available, the survey team will begin telephone data collection with a design of up to 15 calls per participant. Even with recent contact information in the PTS, it is anticipated that some respondents who have moved or changed phones since their last contact will need to be located. If the most recent telephone number in the PTS is incorrect, or the survey team cannot reach the individual through the contact information provided, a team member will contact the program for any updates not yet entered into the PTS and for any other information YPDP staff might have to help locate the participant. The survey team will also call the alternative contacts listed in the PTS (which the participant provided as part of the intake process). As each participant is successfully contacted, the survey will be conducted using Computer Assisted Telephone Interviewing (CATI), which captures responses entirely electronically.
In-Person Survey Recruitment and Administration. The survey team will conduct in-person field follow-up in the YPDP grantee sites for cases in which telephone numbers have been finally determined to be incorrect, or in which the survey team has been unable to reach the respondent even though the number is confirmed correct. An Abt Associates field team will use Field Management System (FMS) software to manage the field tracking and follow-up effort. Field staff will locate the respondent in the field; once a respondent is located and agrees to participate, the respondent will use a cell phone, provided by survey field staff, to call into the Abt Associates telephone center to take the survey with a trained interviewer. Once the interview is completed, the locator will hand the $15 incentive payment to the respondent.4
The CATI program will record all refusals and interview terminations in a permanent file, including the nature, reason, time, circumstances, and interviewer involved. This information will be reviewed on an ongoing basis to identify any problems with the contact script, interviewing procedures, questionnaire items, and related matters. The refusal rate by interviewer will also be closely monitored. Based on these analyses, a "Conversion Script" will be developed and submitted later as a non-substantive change. This script will provide interviewers with responses to the more common reasons people give for not wanting to participate in the survey; the responses are designed to allay concerns or problems expressed by the telephone contacts.
Abt Associates will implement a refusal conversion plan in which each person selected for the sample who refuses to participate will be re-contacted by the contractor approximately one to two weeks after the refusal. The contractor will use the Conversion Script in an attempt to convince the individual to reconsider and participate in the survey. Only the most experienced and skilled interviewers will conduct refusal conversions. Exceptions will be allowed on an individual basis if the refusal conversion effort is deemed inappropriate for some reason.
Outcome data in the reporting file will be maintained and reviewed regularly so that patterns and problems in both response rates and production rates can be detected and analyzed. Meetings will be held with the interviewing supervisory staff and the study management staff to discuss problems with contact and interviewing procedures and to share methods of successful persuasion and conversion.
Non-Response Analysis for the Follow-up Survey. The actual difference between respondents and nonrespondents on the estimates of interest will not be known; in this situation, nonresponse bias is typically explored using indirect measures. Should this study not reach the expected 80% response rate, we will conduct a nonresponse analysis using demographic characteristics collected in the PTS. A comparison of the characteristics of participants completing the follow-up surveys with those of nonrespondents will be conducted to determine whether there is evidence of significant nonresponse bias among survey completers for key characteristics that could affect outcomes: age, gender, ethnicity, race, employment status prior to program participation, highest school grade completed, and services received. This analysis will indicate whether weighting or another statistical adjustment is needed to correct for nonresponse bias in the completed sample.5
We will track response rates for the two follow-up surveys by site, as each survey is administered on a rolling basis (at 18 and 36 months after random assignment). During data collection, if response rates are low at some sites, every effort will be made to increase them in order to reduce nonresponse bias in the overall estimates. We also plan to track types of nonresponse (e.g., refusal, unable to contact, unable to respond) along with reasons for refusal; the types of and reasons for nonresponse can inform the nonresponse bias analysis. The analysis will be conducted according to OMB guidelines.
The size of the nonresponse bias in the sample respondent mean of a characteristic of interest is a function of the nonresponse rate and the difference between the respondent and nonrespondent population means. An estimate of the bias in a sample mean based only on the respondents is given by

\mathrm{bias}(\bar{y}_r) = \frac{n_{nr}}{n}\,(\bar{y}_r - \bar{y}_{nr})

where \bar{y}_r is the mean based only on respondents, \bar{y}_{nr} is the mean based on nonrespondents, n is the sample size, and n_{nr} is the number of nonrespondents.
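As an illustration of this formula only, here is a short sketch with hypothetical values; the respondent and nonrespondent means shown are placeholders, not study estimates:

```python
def nonresponse_bias(mean_resp, mean_nonresp, n_total, n_nonresp):
    """Estimated bias of the respondent mean: (n_nr / n) * (ybar_r - ybar_nr)."""
    return (n_nonresp / n_total) * (mean_resp - mean_nonresp)

# Hypothetical illustration: 1,306 respondents out of 1,633 sample members
# (327 nonrespondents), a respondent mean employment rate of .55, and a
# nonrespondent mean of .45.
bias = nonresponse_bias(0.55, 0.45, n_total=1633, n_nonresp=327)
print(f"Estimated bias: {bias:.3f}")  # 0.020, i.e., about 2 percentage points
```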
We will make comparisons between respondents and nonrespondents for each of the four YPD sites. For example, a comparison of respondents and nonrespondents by gender will show whether proportionately more male than female YPD participants are responding to the follow-up survey. We will also examine characteristics such as age and race for both respondents and nonrespondents. If there are substantial differences in response rates across race/ethnicity or age groups, we will examine the survey data to see whether survey responses differ across those groups. If they do, a poststratification adjustment by race or age within each stratum may reduce nonresponse bias, assuming that within these groups respondents are similar to nonrespondents. Depending on the sample sizes in these groups, we may apply the poststratification adjustment within strata, and variance estimation will then be done using the poststratification option.
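A sketch of how such a poststratification adjustment could be computed, assuming hypothetical pandas DataFrames and adjustment cells defined by site and race columns; the actual cells and software will be determined during the analysis:

```python
import pandas as pd

def poststrat_weights(full_sample: pd.DataFrame,
                      respondents: pd.DataFrame,
                      cell_vars=("site", "race")):
    """Weight each responding case by (full-sample cell count) /
    (respondent cell count), so that respondents reproduce the known
    distribution of the full randomized sample within each cell."""
    cell_vars = list(cell_vars)
    full_counts = full_sample.groupby(cell_vars).size()
    resp_counts = respondents.groupby(cell_vars).size()
    # Cells with no respondents yield missing weights and would need to be
    # collapsed with neighboring cells in practice.
    weights = (full_counts / resp_counts).rename("ps_weight")
    return respondents.join(weights, on=cell_vars)
```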
3. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.
A pretest of the follow-up survey will be conducted with nine YPDP participants to ensure that the CATI script and online version function properly and that the data are collected accurately. The pretest will cover the entire survey process, from sample management to tabulation of results. Any problems encountered during the pretest of the questionnaire will be resolved before the survey is fielded.
Name and telephone number of individuals consulted on statistical aspects of the design:
Stephen Bell, Ph.D.
Abt Fellow and Principal Associate/Scientist
Social and Economic Policy Division
Abt Associates
4550 Montgomery Avenue, Suite 800
Bethesda, MD 20815
Stephen_Bell@abtassoc.com
Carolyn J. Heinrich
Sid Richardson Professor of Public Affairs and affiliated Professor of Economics
Director, Center for Health and Social Policy
Lyndon B. Johnson School of Public Affairs
The University of Texas at Austin, P.O. Box Y
Austin, TX 78713-8925
Renée Spencer, Ed.D., LICSW
Associate Professor
Chair, Human Behavior Department
Coordinator, SWRnet
Boston University School of Social Work
264 Bay State Road
Boston, MA 02215
rspenc@bu.edu
The agency responsible for receiving and approving contract deliverables is:
Employment and Training Administration
U.S. Department of Labor
Frances Perkins Building
200 Constitution Avenue, NW
Washington, DC 20210
Person Responsible: Savi Swick, Contracting Officer’s Technical Representative
(202) 693-3382
All data collection and analysis will be conducted jointly by:
Capital Research Corporation, Inc.
1910 N. Stafford Street
Arlington, VA 22207
Person Responsible: John Trutko, Project Director
(703) 522-0885
The Urban Institute
2100 M Street, NW
Washington, DC 20037
Person Responsible: Erin McDonald, Co-Principal Investigator
(202) 261-5624
emcdonald@urban.org
Abt Associates, Inc.
4550 Montgomery Ave # 800N
Bethesda, MD 20814-3343
Person Responsible: Karin Martinson, Co-Principal Investigator
(301)347-5726
George Washington University
Trachtenberg School of Public Policy and Public Administration
805 21st St NW, Washington, DC 20052
Person Responsible: Burt Barnow, Co-Principal Investigator
(202) 994-6379
1 When the outcome variable is not continuous, other statistical techniques, such as logit and probit analysis, can be used to provide estimates of the relationship. Although these approaches often provide better estimates of relationships, the equations are more difficult to interpret. When appropriate, we will use these more sophisticated techniques as well as the easier-to-interpret ordinary least squares regression analyses.
2 Howard S. Bloom (1995). "Minimum Detectable Effects: A Simple Way to Report the Statistical Power of Experimental Designs." Evaluation Review, 19(5), 547-556. See also Larry L. Orr (1999). Social Experiments: Evaluating Public Programs with Experimental Methods. Thousand Oaks, CA: SAGE Publications.
3 The estimate for individual sites is based on 400 planned enrollments in three of the four YPDP sites; the fourth site has a slightly higher enrollment goal (433 enrollments), which yields only a marginally different MDE.
4 Abt Associates has used this method successfully with similar populations. For instance, Abt Associates is currently using this method to maximize response rates in Cook County, Illinois, as part of a survey being conducted in a study to measure the effects of child care subsidies for low-income families.
5 The use of indirect measures such as demographics to conduct nonresponse analysis is supported in the literature. See O'Neil, G. and Dixon, J. (2005). Nonresponse Bias in the American Time Use Survey. ASA Section on Survey Research Methods, 2958-2966 [www.bls.gov/tus/papersandpubs.htm]; Groves, R.M. (2006). Nonresponse Rates and Nonresponse Bias in Household Surveys. Public Opinion Quarterly, 70, 646-675; and Kasprzyk, D. and Giesbrecht, L. (2003). Reporting Sources of Error in U.S. Government Surveys. Journal of Official Statistics, 19(4), 343-363.