Supporting Statement B
For the Paperwork Reduction Act of 1995: Approval for the Baseline Data Collection, Implementation Study Site Visits, and Staff Surveys for the Job Search Assistance (JSA) Strategies Evaluation
OMB No. 0970-0440
Submitted by:
Office of Planning, Research & Evaluation
Administration for Children & Families
U.S. Department of Health and Human Services
Federal Project Officer
Carli Wulff
Table of Contents
Collection of Information Employing Statistical Methods
B.1 Respondent Universe and Sampling Methods
B.2 Procedures for Collection of Information
B.3 Methods to Maximize Response Rates and Deal with Non-response
B.4 Tests of Procedures or Methods to be Undertaken
B.5 Individuals Consulted on Statistical Aspects of the Design
Attachments
Attachment A: Baseline Information Form
Attachment B: Master Protocol and Topics by Respondent
Attachment C: JSA Staff Survey
Attachment D: JSA Staff Survey Screen Shots
Attachment E: JSA Supervisor Survey
Attachment F: JSA Supervisor Survey Screen Shots
Attachment G: Informed Consent Form
Attachment H: OMB 60-Day Notice
Attachment I: Survey Reminder Email Text
Collection of Information Employing Statistical Methods

OMB approval was received in October 2013 for JSA Evaluation data collection instruments used as part of the field assessment and site selection process (OMB No. 0970-0440). Instruments approved in that earlier submission included the Discussion Guide for Researchers and Policy Experts, the Discussion Guide for State and Local TANF Administrators, and the Discussion Guide for Program Staff. Data collection with these previously approved instruments is complete.
This submission seeks OMB approval for three data collection activities that will be part of the JSA Evaluation:
Baseline data collection. Baseline data will be collected from TANF recipients at the time of enrollment in the study.
Implementation study site visits. This activity involves conducting site visits to document the program context, program organization and staffing, the components of JSA services, and other relevant aspects of the TANF program. During the visits, site teams will interview key administrators and line staff using a semi-structured interview guide.
JSA staff survey. This on-line survey, administered to TANF management and line staff involved in JSA activities, will be used as part of the implementation study to systematically document program operations and the type of JSA services provided across the study sites.
We submitted an additional information collection request under OMB No. 0970-0440 for a follow-up survey of study sample members and received approval in February 2016.
B.1 Respondent Universe and Sampling Methods

The first step in identifying the respondent universe was establishing criteria for purposive site selection. In tandem with identifying potential contrasts of JSA program approaches, the research team developed criteria for site selection. These include:
Willingness to operate two (or possibly more) JSA models simultaneously
Willingness to allow a random assignment lottery to determine which individuals receive which models
Ability to conduct random assignment for a relatively large number of TANF recipients
Ability to create the desired models with fidelity and consistency—which may hinge on whether the site has policies and systems in place to ensure that tested services are implemented relatively uniformly in all locations that are participating in the evaluation
Assurance that the agency will prevent study subjects from “crossing over” to a different JSA model than the one to which they were assigned
Capability to comply with the data collection requirements of the evaluation
For this study, the target sample size is based on a minimum detectable effect (MDE) of 0.02 standard deviations. This effect size reflects prior research evidence suggesting that any impacts of specific job search assistance components may be small. Using this MDE, we estimate a required sample of about 25,000. Given the size of the most promising sites, we assume that approximately 10 sites will be selected to participate in the evaluation, all implementing similar contrasting approaches to providing JSA services, with a total sample size of 25,000. A site is likely to be a locality with more than one TANF office. As of the time of this submission, August 2014, the sites to be included in the evaluation have not yet been identified.
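For reference, a minimal power-calculation sketch is shown below. It assumes a two-arm, individual-level random assignment design with a standardized continuous outcome; the parameter values (significance level, power, allocation ratio, covariate R-squared) are illustrative assumptions, and the study's own calculations reflect additional design features, such as pooling across sites and multiple treatment arms, that are not modeled here.

```python
# Minimal MDE sketch, assuming a two-arm, individual-level random assignment
# design with a standardized continuous outcome. Parameter values are
# illustrative assumptions only, not the study's actual power assumptions.
from scipy.stats import norm

def mde(n_total, alpha=0.05, power=0.80, p_treat=0.5, r2=0.0):
    """Minimum detectable effect, in standard deviation units."""
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # about 2.80 for 80% power
    return multiplier * ((1 - r2) / (n_total * p_treat * (1 - p_treat))) ** 0.5

# Hypothetical example: a pooled sample of 25,000 with baseline covariates
# explaining 20% of outcome variance.
print(round(mde(n_total=25000, r2=0.20), 3))
```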
The short baseline information form will collect basic identifying information, including demographic and contact information. The form will be used to identify study participants for a possible follow-up survey and will capture demographic, educational, employment, and other basic information at baseline. All individuals who are eligible for TANF employment-related services and who consent to participate in the study will be included in the data collection, and no sampling will be used. The baseline information form must be completed by individuals who agree to participate in the study. Sites will use their existing eligibility criteria to determine whether individuals are eligible for TANF and subject to the TANF work requirements. No attempt will be made to draw inferences to any population other than the set of units that responded to the data collection effort.
We plan to conduct semi-structured interviews of TANF supervisors and line staff involved in the provision of JSA services in each of the study sites, including eligibility workers, case managers, job developers, data specialists, and job search workshop facilitators. We plan to interview an average of 10 staff in each of the 17 local TANF offices participating in the evaluation, for 170 interviews across all five sites. It will not be possible to interview all TANF supervisors and line staff. We will purposively select a sample of individuals to interview representing different JSA approaches (e.g., staff from each of the contrasting approaches), positions (e.g., eligibility workers, case managers, workshop facilitators), and locations (sites are likely to include more than one TANF office).
Unlike the implementation study interviews, the staff survey will be administered to all TANF program staff involved in JSA activities in each of the five selected sites. The number of staff who will participate in the on-line survey will vary by agency size. Our current assumption is that the survey will be administered to 365 staff across the five sites.
B.2 Procedures for Collection of Information

The evaluation plans to randomly assign TANF recipients who are required to attend JSA services and who consent to participate in the study to contrasting approaches to providing these services in each of the ten sites. At minimum, two contrasting approaches will be compared, although the evaluation may include three contrasting approaches. Our goal is that all sites will implement similar contrasting approaches to providing JSA services. Outcomes of those randomly assigned will then be compared to estimate the difference in effectiveness between the JSA service approaches. For example, outcomes for an approach in which TANF recipients search for jobs on their own could be compared to an approach in which group instruction on job search techniques and workplace skills is provided.
To estimate the effect of being in the more intensive treatment arm, we will compute the difference in mean outcomes between the two experimental groups using pooled data from all sites: I = Ȳ_T − Ȳ_C. Here I refers to the estimated impact, which is equal to the treatment group’s mean outcome (Ȳ_T) minus the control group’s mean outcome (Ȳ_C). These impact estimates will involve survey and administrative (NDNH) data at the 6-month follow-up point. The difference between the two (or more) treatment arms’ outcomes represents the average impact of the “intent” to treat, or making the selected JSA services available, blending impacts on those who participate with zero effects for nonparticipants in the treatment group. Although the simple difference in means is an unbiased estimate of the treatment’s effect, we will instead estimate ITT impacts using a regression model that adjusts the difference between average outcomes for treatment and control group members by controlling for exogenous characteristics measured at baseline. Controlling for baseline covariates reduces distortions caused by random differences in the characteristics of the experimental arms’ group members and thereby improves the precision of impact estimates, allowing the study to detect smaller true impacts. Regression adjustment also helps to reduce the risk of bias due to follow-up data sample attrition. We use the following standard impact equation to estimate the effect of being given access to the selected JSA treatment, estimating the equation for the combined sample of all individuals in the basic and alternative treatment groups:
y_i = α + δT_i + βX_i + ε_i
where
y_i is the outcome of interest (e.g., earnings, time to employment);
α is the intercept, which can be interpreted as the regression-adjusted mean outcome for the basic treatment group;
T_i is the treatment indicator (1 for individuals assigned to the alternative treatment group; 0 for individuals in the basic treatment group);
δ is the impact of being in the alternative treatment group relative to the basic treatment group;
X_i is a vector of baseline characteristics;
β is the vector of coefficients indicating the contribution of each baseline characteristic to the outcome;
ε_i is the residual error term; and
the subscript i indexes individuals.
We expect to use cluster-robust standard errors to account for the clustering of individuals within sites.
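As an illustration, the following minimal sketch estimates this regression with site-clustered standard errors; the column names (y, T, x1 through x3, site) are hypothetical placeholders rather than the evaluation's actual variable names.

```python
# Minimal sketch of the ITT regression above, assuming a pandas DataFrame with
# hypothetical columns: y (outcome), T (1 = alternative arm, 0 = basic arm),
# x1-x3 (baseline covariates), and site (study site identifier).
import pandas as pd
import statsmodels.formula.api as smf

def estimate_itt(df: pd.DataFrame):
    """Return the regression-adjusted ITT impact (delta) and its site-clustered
    standard error."""
    model = smf.ols("y ~ T + x1 + x2 + x3", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["site"]}
    )
    return model.params["T"], model.bse["T"]
```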
The impact study staff have spearheaded recent advances in analytic techniques that will supplement the purely experimental analysis in order to capitalize on the basic strengths of a randomized trial and reveal more about “what works” in JSA programs. Abt’s methodological experts have developed an analytic tool expressly designed for situations such as this one, the Social Policy Impact Pathfinder (SPI-Path). One of the analytic tools in this suite of methodologies (and computer programs for their execution) is SPI-Path|Site, which investigates sources of impact variation across sites in a multi-level modeling framework for a multi-site experiment. It is designed to strengthen options for causal attribution when random variation in program features occurs within sites in addition to natural variation across sites. Non-experimental analyses of specific elements of a multi-faceted program can be made stronger by having some experimental evidence. The SPI-Path suite also includes an analytic framework for evaluating the effects of variation in individuals’ exposure to treatment elements. SPI-Path|Individual extends the analysis established in Peck (2003), revisited in Peck (2013), and advanced in Bell and Peck (2013) and Harvill, Bell, and Peck (2012) to expose more clearly “what works” about the specific treatment experiences of individual JSA participants. To the extent that program experiences can be predicted from available baseline data, including both individual- and site-level variables, their impact can be estimated in a way that uses the experimental design to minimize the influence of selection and other biases. Anticipating this type of analysis, we designed the baseline data collection to include variables that we deem important to capturing later variation in the program experiences in which individuals participate.
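The following minimal sketch illustrates the symmetric-prediction idea from Peck (2003) and Bell and Peck (2013); it is not the SPI-Path software, and the variable names (arm, attended_workshop, earnings, baseline covariates) are hypothetical placeholders.

```python
# Minimal illustration of symmetrically predicted endogenous subgroups
# (Peck 2003; Bell & Peck 2013). This is NOT the SPI-Path software; the
# variable names (arm, attended_workshop, earnings, baseline covariates) are
# hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

def subgroup_impacts(df: pd.DataFrame, baseline_cols: list, cutoff: float = 0.5) -> dict:
    X = sm.add_constant(df[baseline_cols])

    # 1. Model the endogenous program experience where it is offered (the
    #    alternative arm), using only exogenous baseline characteristics.
    alt = df["arm"] == 1
    fit = sm.Logit(df.loc[alt, "attended_workshop"], X.loc[alt]).fit(disp=0)

    # 2. Apply the identical prediction rule to BOTH arms, so subgroup
    #    membership depends only on baseline data and stays balanced by
    #    random assignment.
    df = df.assign(predicted_attender=(fit.predict(X) >= cutoff).astype(int))

    # 3. The arm contrast within each predicted subgroup remains experimental.
    impacts = {}
    for label, sub in df.groupby("predicted_attender"):
        impacts[label] = (sub.loc[sub["arm"] == 1, "earnings"].mean()
                          - sub.loc[sub["arm"] == 0, "earnings"].mean())
    return impacts
```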
All evaluation materials designed for JSA sample members will be available in English and Spanish.
B.3 Methods to Maximize Response Rates and Deal with Non-response

The methods to maximize response rates and data reliability are discussed first for the baseline information form and then for the site visit interviews and staff survey.
Baseline Information Form

Response rates. The methods to maximize response rates for the baseline information form will be based on approaches that have been used successfully in many other random assignment studies to ensure that the study is clearly explained to both study participants and staff and that the forms are easy to understand and complete. The approaches taken will be fully reviewed and approved by the IRB of Abt Associates (the lead research firm). The forms and procedures should minimize refusal rates and maximize voluntary participation in the program. Program staff will be thoroughly trained on how to address study participants’ questions about the forms and the study. Staff will also be provided with a site-specific operational procedures manual prepared by the research team, contact information for members of the research team, and detailed information about the study.
Furthermore, the forms are designed to be easy to complete. They are written in clear and straightforward language, at the sixth-grade reading level, with closed response categories. The time required for participants to complete both forms is estimated to be 12 minutes, on average. In addition, the forms will be available in Spanish to accommodate Spanish-speaking participants. Program staff will administer the form orally to individuals with low literacy.
We do not have concerns about nonresponse bias. The baseline information form (BIF) will be completed by all study participants after informed consent is given. Because the identifying information is needed for random assignment, completion of the BIF will be required for participation in the study (although individuals can skip certain items if they desire). All study participants will complete the BIF, so non-response bias should not be an issue. There may be some individuals who do not consent to be in the study, although our past experience indicates this number will be very low. However, because these individuals will be in the TANF program, if data are available, we could examine the characteristics of those in the study compared to those who do not consent to assess whether nonconsent introduces any bias. We will consider this as we develop the random assignment procedures in each site, and will work to identify those who choose not to participate in the study in case we need to examine the representativeness of our sample.
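If such data become available, a simple comparison of this kind could be run as in the sketch below; the consent flag and characteristic columns are hypothetical placeholders, not the evaluation's actual data layout.

```python
# Minimal sketch of a consent-bias check, assuming TANF administrative data
# with a hypothetical consent flag and placeholder characteristic columns.
import pandas as pd

def standardized_differences(df: pd.DataFrame, consent_col: str, covariates: list) -> pd.Series:
    """Standardized mean difference on each characteristic, consenters vs. non-consenters."""
    consented = df[df[consent_col] == 1]
    declined = df[df[consent_col] == 0]
    diffs = {col: (consented[col].mean() - declined[col].mean()) / df[col].std()
             for col in covariates}
    return pd.Series(diffs).sort_values(key=abs, ascending=False)
```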
Data reliability. The BIF is unique to the current evaluation and will be used across all evaluation sites. Using the same form across all sites will ensure consistency in the collected data. The form has been reviewed extensively by project staff and staff at ACF and is similar to OMB-approved BIFs used in other ACF studies, including the Innovative Strategies for Increasing Self-Sufficiency Evaluation. It is also consistent with OMB-approved BIFs used in studies for the Department of Labor, including the Green Jobs and Health Care Impact Study and the H-1B Technical Skills Training Grants program evaluation. JSA line staff will receive training covering each item on the BIF to ensure they understand each item and can explain and record the information accurately. In addition, each participating site will be provided with access to a web-based participant tracking system (PTS) for entering the information from the BIF. To ensure complete and accurate data capture, this platform will flag missing data or data outside a valid range.
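The sketch below illustrates the kind of entry-time checks described; the field names and valid ranges are hypothetical and do not reflect the actual PTS specification.

```python
# Minimal sketch of entry-time data checks; the field names and valid ranges
# are hypothetical and do not reflect the actual PTS specification.
def validate_bif(record: dict) -> list:
    """Return warnings for missing or out-of-range BIF entries."""
    warnings = []
    for field in ("first_name", "last_name", "date_of_birth", "zip_code"):
        if not record.get(field):
            warnings.append(f"Missing required field: {field}")
    age = record.get("age")
    if age is not None and not 16 <= age <= 99:
        warnings.append(f"Age out of valid range: {age}")
    return warnings

# Example: a record missing a ZIP code and reporting an implausible age.
print(validate_bif({"first_name": "Ana", "last_name": "Diaz",
                    "date_of_birth": "1990-05-01", "age": 7}))
```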
Implementation Study Site Visits

Response rates. The strategy to collect qualitative data during site visits will ensure that response rates are high and that the data are reliable. Site visit team members will begin working with site staff well in advance of each visit to ensure that the timing of the visit is convenient. Because the visits will involve several interviews and activities each day, there will be flexibility in the scheduling of specific interviews and activities to accommodate the particular needs of respondents and site operations.
Data reliability. Strategies to ensure that the data are reliable and as nearly complete as possible include flexibility in scheduling of visits and the assurance given to respondents of privacy of the information that they provide. Furthermore, the neutral tone of the questions in the data collection protocols and the absence of sensitive questions, along with the training of the site visitors, will facilitate a high degree of accuracy in the data. In addition, shortly after each site visit, the site visit team members will synthesize the data from each interview, observation, and group discussion as required to complete a structured site visit summary. Because most questions will be asked of more than one respondent during a visit, the analysis will allow for the triangulation of the data so that discrepancies among different respondents can be interpreted.
JSA Staff Survey

Response rates. To ensure that questions are answered in a timely manner and that accurate data are collected, we will establish an in-house “survey support desk” with an email address and a toll-free telephone number to assist respondents with completing the survey. The support desk will carefully monitor response rates and data quality on an ongoing basis as well as serve as the point of contact if respondents have questions. Once the field period begins for the web-based survey, we will send periodic email reminders (see Attachment I) to respondents, beginning two weeks after the field period begins. We will also contact the site liaison, when appropriate, to facilitate getting respondents to complete their surveys.
Data reliability. The survey support desk is designed to help improve the completeness and quality of data. Reminder emails (see Attachment I) from the support desk as well as reminders from the site liaison will encourage high participation and completion rates. Assurances of privacy for the information provided in the survey will also encourage complete and accurate responses. The support desk will also monitor data quality while the survey is underway and will be available to answer questions from respondents and clarify survey items as needed.
B.4 Tests of Procedures or Methods to be Undertaken

The research team did not pretest the baseline information form. However, the items on the JSA baseline information form are adapted from similar studies, providing estimates of the time necessary for completion and ensuring the items yield high-quality data. Other studies using similar items and forms, upon which this time estimate is based, include the Innovative Strategies for Increasing Self-Sufficiency Evaluation (OMB No. 0970-0397) and the Health Professions Opportunity Grants Impact Evaluation (OMB No. 0970-0394), both conducted for ACF, and the Green Jobs and Health Care Impact Evaluation (OMB No. 1205-0481), conducted for the U.S. Department of Labor.
In designing the JSA staff survey, the research team developed items based on those used in previous studies, including the Innovative Strategies for Increasing Self-Sufficiency Evaluation and the Health Professions Opportunity Grants Impact Evaluation conducted for ACF, as well as a range of other welfare-to-work evaluations (see Bloom, Hill, & Riccio, 2003). However, given that these earlier staff surveys involved somewhat different populations and programs, we conducted a pretest of the JSA staff survey to estimate survey length, assess respondents’ understanding of the survey questions, and identify improvements in the flow and structure of the instrument. The survey was pretested with eight individuals from TANF agencies and JSA service providers in two areas: Sonoma County, CA (5 respondents) and Salt Lake City, UT (3 respondents). The five Sonoma County respondents are staff at SonomaWorks, the TANF program in Sonoma County. Two of the five respondents work for the Sonoma County Human Services Department Employment and Training Division, and the other three work for Goodwill Industries of the Redwood Empire, a non-profit organization. Both organizations provide job search services to TANF recipients in Sonoma County. In Utah, we recruited the three pretest respondents from the Utah Department of Workforce Services. The research team emailed the survey instrument to pretest participants along with instructions for printing out and completing the survey and denoting start and end times. After respondents completed the survey, the research team called each one to discuss perceptions of the clarity and flow of survey items, ease of completion, and time requirements. After receiving feedback, the research team cut a net total of 27 questions to reduce redundancy and to shorten the survey to stay within the proposed administration time of 30 minutes.
B.5 Individuals Consulted on Statistical Aspects of the Design

Consultations on the statistical methods used in this study have been undertaken to ensure the technical soundness of the research. The following individuals were consulted in preparing this submission to OMB:
Ms. Erica Zielewski
Mr. Mark Fucello
Ms. Naomi Goldstein
Ms. Nicole Constance
Ms. Karin Martinson
Dr. Stephen Bell
Ms. Laura Peck
Mr. Jacob Klerman
Mr. Howard Rolston
Ms. Karen Gardiner
Ms. Michelle Derr
Ms. Alicia Meckstroth
References

Bell, S. H., & Peck, L. R. (2013). Using Symmetric Predication of Endogenous Subgroups for Causal Inferences about Program Effects under Robust Assumptions: Part Two of a Method Note in Three Parts. American Journal of Evaluation, 34(2). DOI: 10.1177/1098214013489338

Bloom, H. S., Hill, C. J., & Riccio, J. A. (2003). Linking Program Implementation and Effectiveness: Lessons from a Pooled Sample of Welfare-to-Work Experiments. Journal of Policy Analysis and Management, 22(4), 551–575. DOI: 10.1002/pam.10154

Harvill, E., Bell, S. H., & Peck, L. R. (2012). On Overfitting in Analysis of Symmetrically-Predicted Endogenous Subgroups from Randomized Experiment Samples. Presented at the Annual Fall Research Conference of the Association for Public Policy Analysis and Management, Baltimore, MD.

Peck, L. R. (2003). Subgroup Analysis in Social Experiments: Measuring Program Impacts Based on Post-Treatment Choice. American Journal of Evaluation, 24(2), 157–187. DOI: 10.1016/S1098-2140(03)00031-6

Peck, L. R. (2013). On Analysis of Symmetrically-Predicted Endogenous Subgroups: Part One of a Method Note in Three Parts. American Journal of Evaluation, 34(2), 225–236. DOI: 10.1177/1098214013481666