


Supporting Statement for OMB Clearance Request


Part B


Health Profession Opportunity Grants Impact Study (HPOG-Impact)

0970-0394








Original: June 7, 2012

Revised: October 2012



Submitted by:

Office of Planning,
Research & Evaluation

Administration for Children & Families

U.S. Department of Health
and Human Services




Federal Project Officers:

Molly Irwin and Hilary Forster





Attachments:

Instrument 1: Supplemental Baseline Questions

Attachment A: References

Attachment B: Informed Consent Form

Attachment C: 60 Day Federal Register Notice

Attachment D: Sources and Justification for the Supplemental Baseline Questions

Attachment E: Logic Models Justifying the Inclusion of the Child Outcomes Roster

Attachment F: HPOG Impact Analysis Plan: Estimating 15 and 30-36 Month Impacts

Attachment G: Constructs for HPOG-Impact Data Collection Efforts

Attachment H: Screen Shots of the PRS Data Collection System





Part B: Statistical Methods

Part B of the Supporting Statement for the Health Profession Opportunity Grant Impact Study (HPOG-Impact) – sponsored by the Administration for Children and Families (ACF) in the U.S. Department of Health and Human Services (HHS) – addresses the issues pertaining to Collections of Information Employing Statistical Methods. Abt Associates (Abt) is the prime contractor for the study. HPOG-Impact will evaluate postsecondary career pathway programs, focused on the healthcare sector, that target economically disadvantaged families and individuals.

This submission seeks clearance for one data collection instrument, the Supplemental Baseline Questions, to be integrated into existing Performance Reporting System (PRS) data collection activities. Subsequent OMB submissions will seek clearance for follow-up data collection activities.

B.1 Respondent Universe and Sampling Methods

1. PRS Baseline Data Collection and 2. Supplemental Baseline Questions. The respondent universe consists of all individuals who are randomized to the HPOG program or to the control group during the study intake period (between December 2012 and March 2014). To be randomly assigned for the chance of participating in HPOG, an individual must apply to, and be accepted at, one of the 20 HPOG grantee programs and provide affirmative consent to participate in the study. Therefore, the respondent universe is the subset of eligible individuals who apply to the 20 HPOG programs and agree to participate in the study. These individuals will be asked to complete the baseline data collection, which includes the PRS and the Supplemental Baseline Questions, as described in Part A.

We acknowledge that the full universe of interest includes people who do not agree to participate in the study. These people are part of the universe because, in the absence of the evaluation, they would have a chance to receive HPOG program services. However, it is not possible to include them in the evaluation, since human research subject protection regulations require that we obtain active consent to include individuals in the study. Fortunately, past experience with similar studies suggests that only a very small share of eligible applicants will decline to participate. We therefore believe this is a small but necessary limitation of the study design.



3. 15-Month Follow-up Survey. The respondent universe for the 15-Month Follow-up Survey is the universe of study participants, in both the treatment and control groups. We do not plan to select a survey sample for follow-up survey data collection.

4. Grantee Survey. The Grantee Survey will be fielded to the universe of 20 grantees that are part of HPOG-Impact; no sample will be drawn. As described in Part A, the study will not include the grantees that operate Tribal HPOG programs, those that are part of a university evaluation partnership where individual-level data collection is taking place, or the three HPOG sites that are also part of the ISIS evaluation.

5. Case Studies of Selected HPOG Grantees. A subset of grantees that are part of HPOG-Impact will be selected for more intensive case studies of specific program components. Selection criteria will focus on grantees implementing program enhancements for the systematic variation component of the study, as well as other grantees implementing similar components and/or other program components of special interest. Further information regarding the sampling plan will be included in a future information collection request.

6. 30-36-Month Follow-up Survey. The respondent universe for the 30-36-Month Follow-up Survey is the universe of study participants, in both the treatment and control groups. We do not plan to select a survey sample for follow-up survey data collection.

7. Long-term Child Outcomes Follow-up Data Collection. Using the child roster collected, we will draw a sample of children for future follow-up data collection. The sample universe, selection criteria and ultimate size, by age categories, will be determined based on the data collected in the child roster. Further information regarding the sampling plan will be included in a future information collection request.

B.2 Procedures for Collection of Information

B.2.1 Sample Design

For this evaluation, we expect the study sample to include approximately 10,250 individuals who apply to participate in the HPOG programs operated by the 20 participating HPOG grantees. This section describes how the sample will be selected; no statistical sampling will be conducted.

Of the 32 grantees that received funding from the HPOG program, 20 will be part of the impact evaluation because they are not engaged in other evaluation research.1 The HPOG grant announcement indicated that HPOG grantees were expected to participate in an evaluation. Upon receipt of OMB approval, some of the eligible grantees will be asked to volunteer to implement and test an enhancement to their program. Our goal is to identify between three and six grantees to participate in this portion of the impact study. Program enhancements will be selected from the study team’s review of the literature, which is designed to identify effective program components. Grantees will be selected purposively for this portion of the study based on their interest in testing an approved enhancement, the feasibility of implementing the program enhancement within project time constraints, and the expected number of eligible applicants who could be randomly assigned.

All individuals who are deemed eligible for HPOG training and agree to participate in the study during the study period at the participating grantees will be randomly assigned. In most sites, individuals will be randomly assigned to one of two groups: an HPOG treatment group or a no-HPOG control group. In sites that agree to test an approved program enhancement, individuals will be randomly assigned to one of three groups: an enhanced HPOG treatment group, a standard HPOG treatment group, or a no-HPOG control group.
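
For illustration only, the sketch below shows the two-way and three-way assignment structure just described. The group labels follow the study design, but the function and the assignment probabilities are placeholders rather than the algorithm the PRS will implement.

```python
import random

def assign_group(tests_enhancement: bool) -> str:
    """Illustrative random assignment logic; NOT the actual PRS algorithm.

    Group labels mirror the study design; the assignment probabilities
    below are placeholders, not the ratios the PRS will use.
    """
    if tests_enhancement:
        # Three-way assignment in sites testing an approved enhancement.
        return random.choices(
            ["enhanced HPOG treatment", "standard HPOG treatment", "no-HPOG control"],
            weights=[1, 1, 1],  # placeholder weights
        )[0]
    # Two-way assignment elsewhere; 2:1 mirrors the overall 7,250:3,000 target.
    return random.choices(
        ["HPOG treatment", "no-HPOG control"], weights=[2, 1]
    )[0]
```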

During the study sample intake period, eligible applicants will be informed about the overall HPOG evaluation project and the HPOG-Impact evaluation, including random assignment. Eligible applicants must sign an Informed Consent Form before undergoing random assignment, through which they have the chance to be assigned to a treatment group and invited to participate in HPOG. Staff will then administer the PRS and the Supplemental Baseline Questions. After individuals complete the Informed Consent Form and provide baseline data, they will be randomly assigned.

The study intake period will vary across grantees, beginning in November 2012 and continuing through February 2014. By design, all individuals who agree to participate in the study will complete all baseline data collection.

In the 20 sites that participate in the study, we anticipate a baseline sample of:

  • 3,000 individuals in the no-HPOG control group; and

  • 7,250 total individuals in an HPOG treatment group, including:

      • 6,000 individuals in the HPOG treatment group; and

      • 1,250 individuals in the enhanced HPOG treatment group, clustered in the grantees that agree to test the enhancement selected for the study.

In total, we anticipate that a sample of 10,250 individuals will complete baseline data collection and be randomly assigned to one of the three groups. In addition, we anticipate including all 10,250 sample members in both follow-up data collection efforts: (1) the 15-Month Follow-up Survey, and (2) the 30-36-Month Follow-up Survey.

B.2.2 Estimation Procedures

The baseline data to be collected for the evaluation will be used to describe the study sample, to define subgroups for analysis, to provide baseline measures of outcomes to use as covariates that improve the precision of impact estimates, and to reduce the bias from missing data. The Minimum Detectable Impacts (MDIs) presented in the next section assume the impacts of HPOG will be estimated using regression models that include baseline variables as covariates, with outcome variables constructed from data collected from the National Directory of New Hires (NDNH) and the 15-Month Follow-up Survey.
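
As a concrete illustration of this estimation approach, the following sketch shows a covariate-adjusted impact regression of the kind described above. All variable names and the input file are hypothetical; the study’s actual model specifications are documented in Attachment F.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per randomized study participant.
#   earn_q5    - quarterly earnings outcome (e.g., from the NDNH)
#   treatment  - 1 if assigned to an HPOG treatment group, 0 if control
#   base_earn, age, female - baseline covariates from the PRS
df = pd.read_csv("analysis_file.csv")  # placeholder file name

# Covariate-adjusted impact regression: the coefficient on `treatment`
# estimates the average impact of access to HPOG, and the baseline
# covariates absorb outcome variance to improve precision.
model = smf.ols("earn_q5 ~ treatment + base_earn + age + female", data=df)
result = model.fit(cov_type="HC1")  # heteroskedasticity-robust SEs
print(result.params["treatment"], result.bse["treatment"])
```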

HPOG-Impact is designed to estimate (1) the impacts of the HPOG program, (2) the impacts of the HPOG program enhancements selected for the study in a small number of grantees, and (3) the impacts of program components that vary across grantees/programs, both naturally and systematically. In estimating the impacts of the HPOG program and of HPOG program enhancements, the study team can rely on the experimental design:

  • The impacts of HPOG programs will be estimated by comparing the HPOG treatment group to the no-HPOG control group;

  • The impacts of HPOG program enhancements will be estimated by comparing the enhanced HPOG treatment group to the standard HPOG treatment group.

Program components have not been randomly assigned to HPOG programs: the grantees designed their own programs and selected the components to include, subject to the requirements of the grant. To estimate the impacts of program components that some but not all HPOG programs include, the study team will follow two main strategies, both of which rely on this natural variation across HPOG programs and are elaborated in Attachment F. One strategy is what we have called a “Pathways Analysis.” This strategy uses exogenous baseline characteristics to identify individual-level subgroups associated with program experiences; it then conducts a conventional experimental subgroup analysis on the basis of those subgroups. The second strategy follows the lead of Bloom, Hill, and Riccio (2003) in using multi-level modeling to estimate the effects of program components. This site-level analysis capitalizes on the natural variation in program components across HPOG grantees – and the systematic variation in the selected program enhancements – bolstered by the experimental design, to tease out the relative effects of various program components.
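
To fix ideas, the sketch below illustrates a simplified two-step version of the site-level analysis: experimental impacts are estimated within each site, and the site-level impact estimates are then related to program components measured in the Grantee Survey. This stands in for the multi-level models of Bloom, Hill, and Riccio (2003), and all column names, component measures, and input files are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_file.csv")           # participant-level data (hypothetical)
components = pd.read_csv("grantee_survey.csv")  # site-level component indicators (hypothetical)

# Step 1: estimate an experimental impact separately within each site.
rows = []
for site, grp in df.groupby("site_id"):
    fit = smf.ols("earn_q5 ~ treatment + base_earn", data=grp).fit()
    rows.append({"site_id": site,
                 "impact": fit.params["treatment"],
                 "se": fit.bse["treatment"]})
impacts = pd.DataFrame(rows).merge(components, on="site_id")

# Step 2: relate site-level impacts to program components (e.g., an
# intensive case management indicator), weighting by precision.
step2 = smf.wls("impact ~ case_mgmt + peer_support",
                data=impacts, weights=1.0 / impacts["se"] ** 2).fit()
print(step2.summary())
```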

B.2.3 Degree of Accuracy Required

If the study is able to randomly assign 7,250 individuals to the study’s treatment group and 3,000 individuals to the study’s control group, as planned, then HPOG-Impact will have enough power to detect the impacts of the HPOG program if those impacts are similar to the effects found in the best-known evaluation of vocational training, the National Job Training Partnership Act (JTPA) Study. We estimate that the study will be able to detect an average impact on quarterly earnings of $202 per person.2 This is roughly comparable to the estimated impacts of JTPA: our calculations suggest that the impact on average quarterly earnings from the National JTPA Study was $182, when inflated to current dollars and averaged between men and women.3

If 2,500 individuals can be randomized in selected sites to the HPOG program or the enhanced HPOG program, as planned, then our estimates suggest that the study will be able to detect an average impact of the program enhancement on quarterly earnings of $363 per person. The relative effects of the different approaches to training estimated for the U.S. Department of Labor’s Individual Training Account (ITA) Demonstration are somewhat less than $363. For example, the evaluation found that providing intensive case management and direction in terms of the training program selected, relative to simply offering individuals a training voucher and the opportunity to choose a training program, produced earnings impacts of $332 per quarter. This is only slightly less than the estimated MDIs for the program enhancement to be tested in this study. Therefore, our estimates suggest that the evaluation will need to select a program enhancement that is slightly more effective than the enhancement tested in the ITA Demonstration for the evaluation to have adequate power to detect its effect.
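
The MDI figures above can be illustrated with the standard power formula and the assumptions in footnote 2. The earnings standard deviation below is a hypothetical placeholder chosen only to show the scale of the calculation (the study’s own computation uses the standard deviation from the National JTPA Study); with it, the formula approximately reproduces the $202 and $363 figures.

```python
from scipy.stats import norm

def mdi(n_treat: int, n_control: int, sd: float,
        r2: float = 0.20, alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable impact for a two-group experimental contrast."""
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # about 2.80
    se = ((1 - r2) * sd**2 * (1 / n_treat + 1 / n_control)) ** 0.5
    return multiplier * se

# Hypothetical quarterly-earnings SD; the study's calculation uses the JTPA value.
SD_EARNINGS = 3600

print(mdi(6000, 3000, SD_EARNINGS))  # main impact contrast: roughly $202
print(mdi(1250, 1250, SD_EARNINGS))  # enhancement contrast: roughly $363
```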

Lastly, it would be helpful to be able to assess the precision of the estimates that the study will produce on the effects of program components. Because this analysis will exploit variation in program impacts and program components across sites, evidence on the amount of variation across sites would be necessary to conduct a credible power analysis. However, there is very little evidence in general on variation in training effects across sites in the literature, and there is no reason to believe that any of the available evidence is informative about the amount of variation we would expect to find across HPOG grantees. Therefore, any attempt to estimate the precision of the estimated effects of program components across sites would be highly speculative.

However, based on evidence from the National Evaluation of Welfare-to-Work Strategies (NEWWS) and the evaluation of the Greater Avenues for Independence (GAIN) program, Greenberg, Meyer, and Wiseman (1994) conclude that at least 20 sites (and in some cases many more than 20) are necessary for a multi-site impact study to have adequate statistical power to establish a causal link between how programs are operated and their effects on participant outcomes. This suggests that the sample targets for this study will be adequate to detect the effects of various program components if the effects of these components are relatively large.

A more credible and accurate power analysis for the natural variation portion of the study (two-way random assignment plus statistical models of variation across sites) will be conducted after the study has begun (in particular, after the Grantee Survey has been fielded). The Grantee Survey and site visits will provide the additional evidence needed to produce credible estimates of MDIs: estimates of the correlation between the different program components to be included in the analysis model. If the program components are highly correlated, the result can be multicollinearity, large standard errors, and reduced statistical power for detecting the independent effect of any particular program component holding other components constant. In the analysis based on systematic variation (three-way random assignment, including assignment to an enhanced HPOG program), random assignment ensures a zero correlation between the enhancement being randomized and the other program components being implemented. But for the analysis based on natural variation, the correlation may not be zero, and the power analysis requires estimates of that correlation.
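
To illustrate the concern, the short calculation below applies the standard variance inflation factor from regression analysis; it is a general result, not a study-specific estimate.

```python
# Standard variance inflation result: with two correlated component
# indicators in a regression, a correlation of r inflates the variance
# of each estimated component effect by 1 / (1 - r**2) relative to r = 0.
for r in (0.0, 0.3, 0.6, 0.9):
    inflation = (1 / (1 - r**2)) ** 0.5  # inflation of the standard error
    print(f"component correlation r = {r:.1f}: SEs inflated {inflation:.2f}x")
```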



B.2.4 Who Will Collect the Information and How It Will Be Done

To enroll the sample in the study, participating HPOG grantees will:

  • Conduct an Orientation and Information Session. Participating grantees will conduct orientation and information sessions either in a group setting or individually. Participants will be given an opportunity to ask questions so that they will understand what participation in the study entails. The HPOG-Impact study team will conduct an in-person training with HPOG program staff involved in the data collection process, so that program staff understand and are able to explain all aspects of the study clearly.

  • Obtain Informed Consent. Eligible applicants to participating HPOG programs will be told that they must sign an Informed Consent Form agreeing to be in the research study in order to have the chance to access the HPOG program. Eligible applicants who complete this form affirmatively will be considered study participants. As shown in Attachment B, the consent form will explain that:

      • Study participants will be asked for information by researchers in the future and asked to give permission for the study team to request administrative data on employment and earnings.

      • The program will use random assignment to determine which eligible applicants will be invited to participate in the HPOG program.

  • Collect Baseline Information, including the Supplemental Baseline Questions. Once an eligible applicant reads and signs the Informed Consent Form, program staff will collect baseline data in the PRS administrative data system, including the additional supplemental items. After the PRS and Supplemental Baseline Questions are answered, the PRS will randomly assign each program applicant to the treatment or control group (or, for grantees that agree to test the program enhancement, to the enhanced treatment, standard treatment, or control group). Sites may choose to inform individuals of their random assignment status during the initial intake visit or shortly afterwards. We estimate that the additional intake activities associated with HPOG-Impact will take roughly 15 minutes to complete, as noted in Part A.

In addition, the study will involve collecting other data that we will submit for approval in a later OMB package. In that package, we will explain specifically how the following data will be collected:

  • 15-Month Follow-up Survey. The 15-Month Follow-up Survey will involve a telephone-based computer-aided interview, with in-person follow-up to ensure a high response rate.

  • Grantee Survey. We expect that the Grantee Survey will be a web-based survey with telephone and in-person follow-up as needed.

  • Case Studies of Selected HPOG Grantees. Telephone-based and in-person field research will inform the case studies of selected HPOG grantees.

  • 30-36-Month Follow-up Survey. The 30-36-Month Follow-up Survey will involve a telephone-based computer-aided interview, with in-person follow-up to ensure a high response rate.

  • Long-term Follow-up Data Collection on Children. Information on children’s outcomes will be obtained by (a) asking for parental reports on children in a follow-up survey, (b) obtaining information from school or teacher reports, and (c) possibly also observing the children themselves. Specific procedures have yet to be developed; they will draw on the input of the study team’s and other experts, as well as on the baseline information collected. Attachment E discusses the outcomes and specific measures that may be assessed.

B.2.5 Procedures with Special Populations

To ensure participants can understand each of the documents, the Informed Consent Form and PRS data elements, including the Supplemental Baseline Questions, are designed at an 8th-grade readability level. The HPOG team will provide a Spanish version of the Informed Consent Form and will work with sites on ways staff can assist where translation of other data collection instruments may be needed.

B.3 Methods to Maximize Response Rates and Deal with Non-response

All individuals who agree to participate in the evaluation must complete all baseline data collection in order to have the opportunity to be randomly assigned to the HPOG program. Therefore, a response rate of 100 percent is expected at baseline. With respect to the long-term administrative data follow-up (i.e., data from the NDNH), we expect a near 100 percent response rate because of the nature of the data and matching process.



With respect to the project’s future data collection efforts, we anticipate a response rate of 80 percent for the 15-Month Follow-up Survey. Based on previous experience, it is likely that response rates will be higher among treatment group members than control group members. The evaluation team will use methods that maximize survey response rates, as described below. These methods will include offering respondents $30 as a token of appreciation to improve cooperation at follow-up, as described in Part A, as well as other strategies not involving cash incentives.

The expectation that the offer of $30 in appreciation for survey completion will improve response rates is based upon the literature. Although the literature is somewhat mixed, incentives have been shown to have positive effects on response rates, as shown in meta-analyses (Church 1993; Edwards et al. 2002, 2005; Singer et al. 1999a) and experiments (James and Bolstein 1992; Shettle and Mooney 1999; Singer, Groves, and Corning 1999b; Singer, Van Hoewyk, and Maher 2000; Warriner et al. 1996).

It is not expected that the $30 token of appreciation will be a cause of non-response bias. The likelihood of non-response bias due to the offer of an incentive is greatest when the financial impact of the incentive varies among the sampled units: those with the least income should experience the greatest utility from an incentive payment and consequently be more likely to respond. In the present case, there is expected to be limited variation in the economic status of HPOG applicants, given that program eligibility is limited predominantly to TANF recipients and other low-income individuals.

We will also employ other strategies to maximize response rates and address non-response. Site staff will be trained on entering subject contact information at enrollment to ensure that it is recorded accurately. In addition, contact information will be reviewed on a monthly basis to ensure that entries are complete and free of keying errors, such as ZIP codes with fewer than five digits and misspelled or abbreviated city names. These strategies will minimize non-response arising from an inability to contact HPOG applicants because of incorrect or missing contact information.

Recognizing that subjects may move post-enrollment, we will update contact information at 6 months post-enrollment and immediately prior to the 15-month survey. (Similar procedures will apply to the optional 30-36 month survey.) Subjects’ contact information will be updated via LexisNexis batch look-up. The updated contact information will then be used for a mailing asking subjects to verify or update their contact information, either on the update/verification card enclosed with the mailing or via a secure website. These efforts will reduce non-response due to outdated contact information. Subjects will also receive a pre-notification letter prior to survey contact attempts. Pre-notification letters have been widely demonstrated to be associated with increased response rates (De Leeuw et al. 2007; Goldstein and Jennings 2002; Hembroff et al. 2005; Link and Mokdad 2005; Singer et al. 2000; Yamarino, Skinner, and Childers 1991). A pre-notification letter will be developed for this study and included in a future ICR; an example is included here as Attachment I.

To maximize response rates while controlling costs, a sequential mixed-mode design will be used, with telephone and in-person cell phone data collection modes (Beebe et al. 2005; Brambilla and McKinlay 1987; Dillman et al. 2009; Fowler et al. 2002; Millar and Dillman 2011). The initial mode of contact will be an extensive set of up to 20 telephone contact attempts to reach subjects, with skilled telephone interviewers attempting to convert soft refusals. Cases that are not responsive to the telephone mode will be transferred to cell phone data collection: field staff will contact the respondent, arrange an interview time, visit the respondent, and place an inbound cell phone call from the staff member’s cell phone to the call center handling the survey, where interviewers will then conduct the survey. This design preserves much of the response rate advantage of an in-person design while reducing costs considerably: field staff can be trained remotely rather than at a central facility; lower levels of supervision are required because interviews can be monitored from the call center; and the minimum assignment of two field staff per site to prevent complete confounding of interviewer effects with site variation can be waived because of the use of central interviewing.



B.4 Tests of Procedures

The Supplemental Baseline Questions are either identical or similar to questions used in previous Abt Associates or national surveys and, as such, have been thoroughly tested on large samples. In addition, Abt Associates will conduct a pretest of both forms with a sample of fewer than nine current HPOG program participants.

B.5 Individuals Consulted on Statistical Aspects of the Design

The individuals listed in Exhibit B5.1 below contributed to the design of the evaluation.

Exhibit B5.1. Individuals Consulted on Statistical Aspects of the Design

Name                   Role in Study
Dr. Alan Werner        Project Director
Dr. Stephen Bell       Principal Investigator
Dr. Laura R. Peck      Impact Study Lead
Dr. Rob Olsen          Impact Study Team member
Dr. Larry Hedges       Technical Working Group member
Dr. Carolyn Heinrich   Technical Working Group member
Dr. Jeff Smith         Technical Working Group member
Dr. David Judkins      Project Quality Advisor
Dr. Howard Rolston     Key staff on ISIS evaluation
Dr. David Fein         Key staff on ISIS evaluation



Inquiries regarding the statistical aspects of the study’s planned analysis should be directed to:


Dr. Alan Werner, Project Director

Dr. Stephen Bell, Principal Investigator

Dr. Molly Irwin, Federal Contracting Officer’s Representative (COR), Administration for Children and Families, US DHHS

1 Some grantees are engaged in evaluation activities through partnerships with universities, and the Tribal grantees are being evaluated by NORC, on other contracts that are part of the HPOG research portfolio.

2 This estimate assumes 80 percent power, an alpha level of 5 percent (using a two-tailed test), an R-square of .20, a sample of 450 study participants randomized to treatment or control in each of 20 sites, a treatment-control ratio of 2:1, and a standard deviation in earnings equal to the standard deviation from the National JTPA Study.

3 Orr, L.L., Bloom, H.S., Bell, S.H., Lin, W., Cave, G., and Doolittle, F. (1996). Does Job Training for the Disadvantaged Work? Evidence from the National JTPA Study. Washington, DC: Urban Institute Press.

