U.S. Department of Health and Human Services
Administration for Children and Families
Office of Planning, Research and Evaluation
MULTI-SITE EVALUATION OF FOSTER YOUTH PROGRAMS
Expiration Date: 8/31/2006
Office of Management and Budget
Clearance Package Supporting Statement
Renewal
July 2006
CONTENTS
1. Explanation of circumstances that make the data collection necessary 2
2. How, by whom, for what purpose, and how frequently the information is to be used 3
3. Consideration of the use of improved information technology to reduce respondent burden 9
4. Efforts to identify duplication 10
5. Minimizing impact on small business or other small entities 10
6. Consequences of less frequent data collection 10
7. Special circumstances 11
8. Description of outside consultation efforts 11
9. Explanation of decision to provide gifts to respondents 11
10. Description of assurance of confidentiality and nature of response 12
11. Sensitive questions 14
12. Estimates of respondent burden 16
13. Total annual cost burden to respondents 17
14. Estimates of annualized costs to the federal government 17
15. Explanation of reasons for program changes or adjustments 18
16. Tabulations, statistical analyses, and publication plans 18
17. Approval to not display the OMB expiration date 29
18. Explanation of each exception to certification for Paperwork Reduction Act submissions 29
References 30
COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS
Description of the potential respondent universe and sampling 31
Procedures for collection of information 33
Methods to maximize response 36
Pretest procedures and results 37
Statistical consultation 38
Appendices:
Appendix A Youth survey instrument
Appendix B Caseworker survey
Appendix C Process study protocols
Appendix D Youth questions by source
Appendix E Federal Register announcement
Appendix F Technical Work Group members
Appendix G IRB approval statement
Appendix H Confidentiality pledges
Appendix I Introductory letters to caseworkers
Appendix J Introductory letter to youth
Overview of Extension
This OMB Clearance Package submission is an extension of an earlier OMB submission (OMB approval number 0970-0253). In order to reach the targets for data collection for the impact study component of the evaluation, data collection needs to continue beyond the OMB expiration date (August 31, 2006). An additional site is being added to the evaluation. The scope and design of the data collection activities in this new, fifth site will not differ from activities in the existing program sites.
A. JUSTIFICATION
A1. Explanation of circumstances that make the data collection necessary
The Foster Care Independence Act (FCIA) of 1999 (Public Law 106-169) amended title IV-E of the Social Security Act to create the John Chafee Foster Care Independence Program, giving states more funding and greater flexibility in providing support for youth making the transition to independent living. The FCIA doubles federal independent living services funding to $140 million per year, allows states to use up to 30 percent of these funds for room and board, enables states to assist young adults 18-21 years old who have left foster care, and permits states to extend Medicaid eligibility to former foster children up to age 21. State performance is a much higher priority under the FCIA than under earlier iterations of federal policy in this area. Under FCIA, the U.S. Department of Health and Human Services (DHHS) is required to develop a set of outcome measures to assess state performance in managing independent living programs and states are required to collect data on these outcomes. In addition, the FCIA requires that funding under the statute be set aside for evaluations of promising independent living programs:
The Secretary shall conduct evaluations of such State programs funded under this section as the Secretary deems to be innovative or of potential national significance. The evaluation of any such program shall include information on the effects of the program on education, employment, and personal development. To the maximum extent practicable, the evaluations shall be based on rigorous scientific standards including random assignment to treatment and control groups. The Secretary is encouraged to work directly with State and local governments to design methods for conducting the evaluations, directly or by grant, contract, or cooperative agreement (Title IV-E, Section 477 [42 U.S.C. 677], g, 1).
Through a competitive process, the Administration for Children and Families (ACF) contracted with the Urban Institute (UI) and two subcontractors, the National Opinion Research Center (NORC) and Chapin Hall Center for Children, to carry out this mandate. The data collection methods discussed in the sections that follow are key components of the evaluation provided for in the FCIA. Little is known about the functioning of youth who age out of foster care, let alone the effectiveness of independent living programs. Research on youth who age out of foster care and the services intended to help them in this transition is briefly reviewed below to help illustrate the need for rigorous evaluation of independent living services.
The rationale for the FCIA and its predecessors is the belief that youth who age out of foster care encounter serious barriers to living self-sufficiently. This belief is based on a small number of prospective follow-up interview studies conducted over several decades (Collins, 2001; Cook, Fleishman, & Grimes, 1991; Courtney, Piliavin, Grogan-Kaylor, & Nesmith, 2001; Festinger, 1983; McDonald et al., 1996). Reviews of this literature suggest that foster youth aging out of care have poor prospects, including limited education and employment experience, relatively poor mental and physical health, and a relatively high likelihood of experiencing unwanted outcomes such as homelessness, incarceration, and pregnancy out of wedlock (Collins, 2001; McDonald et al., 1996). In addition, researchers have recently begun, with federal financial support, to use administrative data on wages (Dworsky & Courtney, 2001; Goerge et al., 2001) and public assistance utilization (Dworsky & Courtney, 2001) to examine the post-foster care economic well-being of youth who age out of care. Their findings confirm those of the survey researchers regarding the marginal economic prospects of this population. The consistency of these findings over time, using distinct forms of data, and across a number of different jurisdictions, provides support for the need for independent living services.
Federal and state expenditures of over $1 billion during the past 17 years on independent living programs reflect society’s commitment to achieving this goal. Unfortunately, these expenditures have not led to concomitant growth in empirical information about what kinds of independent living services are effective at helping foster youth to live self-sufficiently. Only a focused and sustained program of rigorous evaluation research will remedy this situation. This research will need to involve experimental designs and better measurement of both the interventions and outcomes of interest. The mandate and funding for rigorous evaluation found in the Foster Care Independence Act is a first step.
A2. How, by whom, for what purpose, and how frequently the information is to be used
UI and its subcontractors, NORC and Chapin Hall Center for Children, were awarded two contracts from ACF—a study to conduct an evaluability assessment and develop an evaluation design for independent living programs (ILPs) and, subsequently, a contract to conduct an impact evaluation. Together with its subcontractors, UI will implement the data collection and conduct analyses for ACF. The information collected will be used by ACF and the States to inform decisionmaking about independent living programs and policies.
Research Questions
The study is designed to answer the following questions:
How do the outcomes of youth randomly assigned to the identified interventions compare with those of youth who are assigned to “services as usual”?
For the identified programs, what are the features of these programs that are likely to influence their impact on youth clients?
How are these services implemented?
To what extent might these programs be adapted to other locales?
What are the barriers to implementation?
The conceptual design and framework for the study address the above questions and build on the work completed during the evaluation design and evaluability assessment. They are consistent with the parameters laid out in the legislation requiring the evaluation. First and foremost, within the constraints of the research budget and the operational context of existing programs, we must conduct an evaluation that can address the first question: how do the outcomes of youth assigned to the identified interventions compare with those of youth who are assigned to “services as usual”? This is the fundamental question for the impact evaluation. Yet the other questions address key issues for policy and practice development in the independent living field. Our approach provides a rigorous design—random assignment to treatment and control groups combined with an intensive process analysis—for assessing impacts and for answering the other questions outlined above.
Program Selection Criteria
Considerations of evaluability and site selection were governed by an interest in representing the major categories of ILPs. Programs were selected for further examination according to the following key evaluability criteria:
Programs should be directed, at least in part, at youth leaving foster care or expected to remain in foster care until adulthood.
Programs should be innovative, of national significance, and capable of expanding into new geographic areas.
Programs should be willing and able to participate in true experiments, involving random assignment of clients to the treatment services or to usual or alternative services.
Programs should have adequate sample size and should have a need for the services greater than what is currently available so an experiment would not reduce the total number of youth served by the program.
Additional program selection criteria included:
Programs should be reasonably stable.
Programs should be relatively intensive.
Programs should have well-developed theories of intervention.
Programs should be consistently implemented across their sites.
Programs should have available data of sufficient quality to understand the flow of clients and to follow clients to assess key outcomes.
Relevant decision makers should be willing to sign on to a rigorous evaluation.
Programs should be willing to make minor appropriate changes to accommodate the research and should be able to maintain them for the full period of research.
Program Sites
Four programs were identified that met the above criteria—two sites in Los Angeles County, California (Community College Life Skills Training Program and the Early Start to Emancipation Preparation), one program in Kern County, California (ILP Employment Service Program), and one program in Massachusetts (Adolescent Outreach Program). A new site, First Place Fund in Oakland, California, is being added in order to examine a housing program designed to serve this population of foster youth. This site was not one of the original sites because at the time of the evaluability assessment, the program did not serve a sufficient number of youth. Since that time the program has expanded and now serves enough youth each year to make random assignment possible. A brief description of each program follows:
California, Los Angeles County (Community College Life Skills Training (LST) Program). This large program provides youth with classroom-based and experiential life skills training, support groups, and exposure to community college opportunities. The county’s child welfare department forwards a listing of 16-year-olds to the program, and program staff connect youth with the nearest community college offering the life skills course. Outreach advisors contact youth in their homes, assess each youth, and provide transportation to the courses. Independent living courses are provided in 10 sessions over 5 weeks by 19 local community colleges. The program serves approximately 600 youth each year.
California, Los Angeles County (Early Start to Emancipation Preparation (ESTEP)). The ESTEP program provides structured tutoring and mentoring for foster youth who lag one to three years behind in school. For this program, 14-year-old youth are referred by the child welfare agency to the community college program. Emancipation Prep Advisors at each college conduct one home visit, assess each youth’s independent living skills, administer math and reading assessments, and refer youth to a structured tutoring curriculum. Services are provided at an intensive level for one year and continue less intensively through the age of emancipation. Twelve community colleges provide the services based on zip code areas. Each year the program serves over 400 youth.
California, Kern County (Employment Service Program). Kern County’s employment program is administered through a partnership between the county child welfare and TANF agencies. Youth, aged 16-21, are provided employment skills training, job referral, and employment support based on the county’s TANF employment model. Youth are identified for the program when they turn 15 and a half years old. Approximately 200 youth are referred to the program each year. Services are provided by case managers from the county workforce development department.
Massachusetts (Adolescent Outreach Program). The Adolescent Outreach Program in Massachusetts is a statewide program for youth between the ages of 16 and 21. The program provides intensive, individualized independent living skills assessment and training to youth in out-of-home placement. Youth are most often referred by their public agency caseworker but can also be referred by their foster parent, group home staff or other caregiver. There are currently 22 adolescent outreach workers who make face-to-face weekly visits with each of the 15 youth on their caseload. Youth are served intensively for an average of six months but a large number of youth remain in the program for more than one year receiving less-intensive services. As part of the evaluation, the program has been extended to serve youth in specialized foster care.
Oakland, California (First Place Fund for Youth). First Place Fund for Youth, founded in 1998, is an Oakland, California based nonprofit program seeking to support foster youth through the transition from foster care to independent living. The target population is former foster youth from Alameda, Contra Costa and San Francisco Counties. First Place’s Supported Housing Program will be evaluated. The program serves emancipated foster youth with scattered-site housing in two-bedroom apartments in the East Bay. Participants receive housing start-up and monthly rental subsidies, weekly in-home case management, weekly life skills training, budgeting and financial planning, transportation assistance, monthly food vouchers, community-building peer events, and health advocacy.
The programs selected encompass a set of critical independent living services as substantiated by our program review and discussions with experts in the field. They represent a range of program types: a large life skills training program (LST), an educational mentoring program (ESTEP), an employment mentoring program modeled on TANF work development assistance, a mentoring/casework model that represents a more intensive model of a common service delivery mechanism appropriate for either public agencies or private contract agencies, and a housing program for emancipated foster youth. The programs selected typify services being provided to foster youth and also represent ethnic and racial diversity across sites. Members of the study’s Technical Work Group (described in detail in Section A8.) provided consultation on the types of programs selected for the evaluation. Members of the work group represent state agencies, private ILPs, as well as youth advocacy groups and have extensive knowledge of the types of ILPs currently being administered throughout the country.
In each of the program sites, administrators are enthusiastic about participation in the evaluation and all are committed to the random assignment design. In every program, more youth are eligible or referred than can be served, creating an excellent opportunity for a random assignment evaluation.
Data Collection Strategies
The study design includes data collection strategies that will provide multiple measures to understand the impacts of independent living programs on youth outcomes. Data collection will also document the operations of programs, the context in which programs operate, and the services received by youth.
Our major methods of data collection discussed in this document include:
Baseline and follow-up in-person interviews with program and control youth,
A web-based survey of caseworkers,
Program site visits including semi-structured interviews with administrators, staff and youth.
Youth survey. The questionnaire (Appendix A) will be used in interviews with youth referred to independent living services at each selected site. The same questionnaire will be used in each round, with minor variations across rounds. Youth will be interviewed whether they are assigned to the treatment or are in the “usual services” group. All youth will be interviewed shortly following referral and random assignment, with two follow-up interviews, one year and two years later. The information gathered from the youth is intended to answer the research questions identified above.
The questionnaire will need to be adapted to specific program sites. Adaptations will insert program specific names of services to ensure youth understand the questions. The sections of the questionnaire serve to identify the services received by the youth, short- and long-term outcomes, as well as moderating factors that influence the efficacy of the services received. Exhibit 1 displays categories of data collection topics (sections of the questionnaire) by their purpose for analysis. These topics primarily will be addressed in the youth surveys, but data from worker surveys will also be important on some topics.
Exhibit 1
Analytic Purposes of Questionnaire Sections

Population Characteristics | Intervention and Services | Moderating Factors | Intermediate Outcomes | Longer-term Outcomes
Demographics | IL Services of Interest | Relationships | Employment and Income | Employment and Income
Prior Experiences in Care | Other Services | Social Support; Reading Ability | Education | Education
Prior Victimization | | Living Arrangements; Substance Abuse | Health Behaviors; Substance Abuse | Physical Health; Fertility and Family Formation
 | | Pro-Social and Other Activities | Sexual Behavior | Economic Hardship/Homelessness
 | | Mental Health | Delinquency | Mental Health
 | | Attitudes and Expectations | Mental Health | Victimization
Population Characteristics. The framework begins with the characteristics of the population of interest in each evaluation site, their demographics and fixed factors such as prior experiences in care and prior victimization.
Intervention and Services. The evaluation will test whether an intervention in the site alters outcomes of the treatment youth compared to youth receiving “usual services.” We will gather information on both the focal IL services (offered only to the treatment group) as well as other services received by treatment and control group youth.
Moderating Factors. A set of factors is expected to moderate the effects of the interventions. These factors operate at many levels (the youth himself/herself, the family constellation, and the community). They are separated from the “characteristics” of the youth because they may change over time.
Short-term (Intermediate) Outcomes. Early data collection after the provision of the intervention will establish the short-term outcomes of treatment and control group youth. These outcomes may pick up progress on pathways to the final outcomes of interest (for example, education that will ultimately increase success in the labor market) or behaviors that affect ultimate outcomes (for example, sexual behaviors that affect fertility and health risks).
Longer-term Outcomes. The ultimate goals of the interventions are related to successful functioning in adulthood. Key areas mentioned for the evaluation in the Foster Care Independence Act include educational attainment, employment, and “personal development.” The latter includes physical health, fertility, economic hardship, mental health, incarceration, and victimization.
Caseworker survey. A second questionnaire (Appendix B) will be used with caseworkers of selected youth assigned to both the treatment and control groups. The purpose of the survey is to collect descriptive information about the foster youth, including their developmental and placement history, the services they have received, and the workers’ perceptions of youth preparedness for independence. While some of the information collected through the caseworker survey will be similar to data collected from the youth surveys, obtaining both the youth's and caseworker's views on these matters is important.
The caseworker questionnaire may also need to be adapted to the specific program. Such adaptation will insert common locally-known program and service names to ensure worker understanding of the questions. Other site-specific adaptations could include insertion of additional service categories to reflect program goals. No amended or additional questions would be of a sensitive nature.
The caseworker survey is only being implemented in the Kern County program site due to low initial response rates for caseworkers in Los Angeles County and concerns about caseworker burden in Massachusetts. The program in Oakland serves emancipated youth so agency caseworkers would no longer be a source of case information.
Program site visits. Site visits are necessary to document program activities and context. During site visits, we will hold semi-structured interviews (individual as well as group) with program administrators and managers. For youth and caseworkers, we will conduct focus groups. During the site visits, we will also observe the operation of programs serving youth in both the control and treatment groups. The site visits will be supplemented by an analysis of written documentation including program manuals, reports, and curricula. Protocols for use with the various respondents are included in Appendix C. The protocols address program operations, service delivery, and the context in which each program operates.
A senior-level researcher from the Urban Institute or Chapin Hall will be designated as a liaison for each site and will serve as a single point of contact for any concerns or questions about the evaluation. The liaisons will work closely with the NORC field staff to coordinate data collection. Each selected evaluation site will be assigned a NORC field manager. The field manager will maintain communication with the site liaison to avoid duplication of effort and to promote efficiency.
A3. Consideration of the use of improved information technology to reduce respondent burden
Both the youth survey and the worker survey make use of improved information technology to reduce respondent burden.
Youth Survey
Interviews with the youth will be conducted using computer-assisted personal interviewing (CAPI). CAPI has been shown to reduce the time required to administer a questionnaire in comparison to paper and pencil methods. Thus CAPI reduces respondent burden while also producing data that can be prepared for analysis faster and more accurately. Further, sensitive questions will use audio computer assisted self-interviewing (ACASI). Appendix D identifies the sections of the youth questionnaire that will be administered through ACASI. This method allows the respondent to listen to questions through earphones and record his/her answer on the keyboard without interviewer participation. This process helps the respondent feel more comfortable with the questions. During administration of these questionnaire segments, respondents first will be instructed on how to use the computer to enter their responses. They also will be instructed on the use of the audio headset that will allow them to hear a question read to them at the same time that the question text appears on the screen. Question response sets will also be audio as well as visual. Theoretically, the audio portion will help to improve response in situations where literacy could be a problem. During ACASI self-administration, the computer screen is not visible to the interviewer and the CAPI program automatically directs the respondent through the appropriate universe of questions. Upon ending the self-administered section, the program automatically saves the data and the interview reverts back to the next interviewer-administered module. The respondent will be reassured by the interviewer that his/her responses, once entered into the computer, are only available for retrieval by selected personnel. There is evidence that using an audio computer-assisted self-interview approach favorably affects data quality.
Caseworker Survey
The caseworker survey is administered using a web-based questionnaire. The advantages of a web-based survey are many. Unlike a phone survey, workers can access the survey at any time that is convenient to them. They can start completing the survey, stop if they do not have sufficient time, and then start up again when they have additional time. They can type in information of whatever length they desire, clarifying their responses. The survey will be linked to a database for analysis. This saves considerable cost by eliminating retyping of the data provided and also improves data quality, since mistakes will not be made inputting the data. The survey will have a tracking capacity to determine which workers have not completed the survey, and we will be able to email these workers to request that they do. Workers without Internet access, or who are uncomfortable with Internet-based surveys, will be contacted by telephone. Workers will also be able to email questions to, or telephone, an Urban Institute researcher assigned to oversee the survey.
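To illustrate the tracking capacity described above, the following sketch (Python, for illustration only) shows how a completion-status table might be queried to identify workers needing email reminders. The table and column names are hypothetical and do not describe the actual survey system.

import sqlite3

def workers_needing_reminders(db_path):
    """Return (worker_id, email) pairs for caseworkers whose surveys are incomplete."""
    conn = sqlite3.connect(db_path)
    try:
        # 'survey_status' and its columns are hypothetical names for illustration.
        rows = conn.execute(
            "SELECT worker_id, email FROM survey_status WHERE completed = 0"
        ).fetchall()
    finally:
        conn.close()
    return rows

# Each worker returned would be emailed a request to complete the survey,
# per the reminder procedure described above.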
A4. Efforts to identify duplication
This data collection effort does not duplicate any other effort. This study, mandated by the Foster Care Independence Act of 1999, is the first federally funded, Congressionally mandated study designed to assess the impact of independent living programs on youth aging out of the foster care system.
A5. Minimizing impact on small business or other small entities
No small businesses will be surveyed. Survey respondents will be program participants or state or county government employees. In some sites it is possible that caseworkers will be employed by a non-profit social services agency under contract to the state or local government.
A6. Consequences of less frequent data collection
In order to measure the effects of independent living programs on youth, the study is designed so that data collection occurs at three points in time: baseline and two subsequent follow-ups. This design allows for a comparison of the amount of change (differences between baseline and post-treatment measures) and gives a more sensitive measure of the effects of service than comparisons of post-treatment status alone. Of particular importance for this population is assessing the effects of services beyond the end of programs. Therefore, it is desirable to follow up with subjects at later points.1
The caseworker survey will be administered at points in time similar to those of the youth interviews, as long as the youth remains in care. The initial baseline survey will document information about youth demographics, their developmental and foster care history, and the services received to date. The two surveys at the follow-up points will focus on current youth functioning and on services provided since random assignment. This is important for assessing the types and amounts of services received by both the treatment and control groups.
During the first phase of the evaluation, program evaluators will conduct on-site data gathering at each of the study sites. We expect that the interventions and the intake period will be long enough that we will need to conduct a follow-up visit to document changes in context as well as program implementation. The often dynamic political and social environments in which independent living programs operate illustrate the need to monitor the interventions carefully throughout the data collection period to document the nature and timing of important changes that could affect the evaluation and its results.
A7. Special circumstances
There are no special circumstances involving this data collection effort. Respondents will not have to report information more than quarterly, prepare a written response in fewer than 30 days, submit more than an original and two copies of any document, retain records for more than 3 years, or submit proprietary trade secrets.
A8. Description of outside consultation efforts
Appendix E contains the public announcement for this request, which was published in the Federal Register (Volume 71, Number 96, pages 28869-28870) on May 18, 2006. No public comments were received during the 60 days following that announcement.
Throughout the course of the study we will solicit external input from a Technical Work Group (TWG) that was originally convened as part of the evaluability assessment. Appendix F provides a list of the TWG members and their affiliations. Members of the TWG were chosen for their knowledge of independent living services. The TWG is composed of several state level representatives and policymakers, as well as researchers. TWG members are expert in youth issues; foster care and transitions to adulthood; evaluation design, implementation, and analysis; and health, education, housing, and social services programs. They were invited to join the TWG after consultation with the National Association of Public Child Welfare Administrators (NAPCWA), the Child Welfare League of America’s Independent Living Standards Advisory Group, and the National Resource Center on Youth Services. We have used members of the TWG as consultants to the project on specific research issues, and we expect to continue to do so. Members of the TWG provided extensive review of the youth survey. Their comments and suggestions resulted in modifications reflected in the questionnaire. Richard Barth, Principal Investigator of the National Survey of Child and Adolescent Well-being—the largest survey of foster youth ever—also reviewed the youth survey instrument.
A9. Explanation of decision to provide gifts to respondents
Youth Survey
All advance materials and initial contacts with the youth will emphasize non-monetary reasons for participation such as the opportunity to share experiences; the chance to have an impact on the system in which they are involved; the opportunity to be part of an important, well-respected effort; and the satisfaction they will receive for contributing to the project. However, this survey meets several conditions generally considered sufficient to justify the use of monetary compensation. A comprehensive questionnaire is required to meet the goals of the evaluation; the project is longitudinal, requiring multiple interviews; and the potential bias from non-response is significant given the nature of the population. Further, our experience with this type of population leads us to believe that conventional means of motivation and encouragement are insufficient. With populations like the youth in this study, monetary incentives are often necessary to gain adequate levels of participation. Two federally funded studies conducted by NORC provide examples of how monetary incentives improve participation rates. In a fourth follow-up of former SSI recipients who were alcohol or drug addicted and a second follow-up of a study on drugs and problem behavior conducted with youth in Harlem, offering a monetary incentive and providing a toll-free number proved successful. Over 90 percent of those who participated in the baseline interview were retained at follow-up. Our budget includes $30 per youth respondent for the baseline interview and $50 for each follow-up interview.
Caseworker Survey
We recognize that a key challenge to completing the surveys will be the burden placed on busy frontline social services staff. Caseworkers will not be provided a stipend if they complete the survey during their work hours as approved by their agency. However, if it is necessary for them to complete the survey during their own time, gifts will be offered. It is estimated that the survey will take approximately one hour to complete. Those staff who must complete the web-based survey during non-work hours will be offered $20 per survey, their approximate average hourly salary.2 Caseworkers will only complete surveys for youth on their caseloads. Workers will not be contacted once a youth ages out of the system.
Program Site Visit Interviews and Focus Groups
No stipends are provided to agency administrators and staff during program site visits. However, food and drinks are provided if focus groups or small group interviews are conducted during mealtimes. We have found that provision of food and drink enhances the response rate for on-site interviews and focus groups. A $25 gift will be provided to youth who attend focus groups. This amount is one we have found from prior studies to be an appropriate and effective inducement.
A10. Description of assurance of confidentiality and nature of response
A description of efforts to assure confidentiality for each of the study’s data collection efforts follows. The proposed efforts will be reviewed and approved by Institutional Review Boards (IRBs) at the Urban Institute and NORC (See Appendix G). The states and counties with which we work may also have mechanisms for review of research on clients and we will work with those organizations to seek approval of the work.
Youth Survey
It will be necessary for all three organizations involved in this evaluation to have identifying information about youth subjects. NORC requires this information to interview and track them. The Urban Institute needs identifying information to carry out the data collection from workers. Chapin Hall will need identifying information to access administrative data files on youth. All three organizations have local networks that are password protected and have firewalls and other protections against unauthorized access to data. Identifying information will be segregated from other data files and will be available only to those personnel needing it. Each organization will supply analytic files to analysts in the Urban Institute and Chapin Hall. Analytic files will not have identifying information and will be linkable only by a common identifier. In addition, all project staff will be required to sign a confidentiality agreement (See Appendix H).
Caseworker Survey
Caseworkers will be assigned a password that will allow them to access and provide data only for the youth for whom they have been identified as the caseworker. As an extra layer of protection, the survey will be housed on a separate computer server at the Urban Institute to create a firewall to prevent unauthorized staff or external persons from attempting to access the survey data. All caseworkers will be informed of the confidentiality of their responses for this study. Their supervisors will not have access to any of their individual responses, and their statements will never be linked to their identities in any of the reports prepared by the project.
Results from analyses of the data will not be reported unless the cell size is above a threshold of three observations to maintain the anonymity of workers and cases. The threshold cell size of three observations was determined to be sufficient after consulting Statistical Policy Working Paper 2: Report on Statistical Disclosure and Disclosure Avoidance Techniques (Washington, D.C.: U.S. Department of Commerce, 1978).
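For illustration, this suppression rule can be expressed as a short routine that withholds any cell not above the three-observation threshold. The following is a sketch in Python with hypothetical data, not a description of the project's actual tabulation software:

import pandas as pd

# Disclosure-avoidance rule described above: report a cell only if its
# size is above a threshold of three observations.
MIN_CELL = 3

def suppress_small_cells(table: pd.DataFrame) -> pd.DataFrame:
    """Mask any cell whose count is not above the reporting threshold."""
    return table.mask(table <= MIN_CELL)

# Hypothetical crosstab of worker responses by site; the cell of 2 is withheld.
counts = pd.DataFrame({"Site A": [12, 2], "Site B": [5, 4]}, index=["Yes", "No"])
print(suppress_small_cells(counts))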
An explanation of confidentiality protections and the information needed about the youth being served will be described in an introductory letter sent to each caseworker prior to their participation in the interview (See Appendix I). In addition, the letter will provide the caseworker with information about the study and their involvement in the research process, as well as indicate which of their cases have been selected for the interview. Additionally, as part of the web-based caseworker survey, an introduction (See Appendix B) explaining the confidentiality procedures will be presented to each participating caseworker.
Program Site Visit Interview and Focus Groups
During the initial contact with selected program staff, the research team will explain the purpose of the site visit, study objectives, contact information, and a description of the use of data. Written materials on the study will be provided to participating sites. Consent will be obtained verbally. For information gathered in focus groups with youth, procedures have been established to make their individual comments anonymous. Identifiers will not be linked to individual youth providing information in focus groups. An opening script will remind youth not to talk about each other's individual responses following the conclusion of the group interview (See Appendix C).
A11. Sensitive questions
Youth Survey
Sensitive questions are necessary to include in the youth survey for several reasons. First, as discussed in Section A1, the rationale for the Foster Care Independence Act (FCIA) is the belief that youth who age out of foster care encounter serious barriers to living self-sufficiently. Literature reviews suggest that foster youth aging out of care have poor preparation and capabilities, including limited education and employment experience, relatively poor mental and physical health, and a relatively high likelihood of experiencing unwanted outcomes such as homelessness, incarceration, and pregnancy out of wedlock.3 A recently completed study of youth transitioning out of foster care in Wisconsin found that these youth are vulnerable to physical and sexual victimization, unemployment, homelessness, and incarceration.4 In addition, the FCIA mandates a focus on such risk-related domains and specifically cites “measures of educational attainment, high school diploma, employment, avoidance of dependency, homelessness, nonmarital childbirth, incarceration, and high-risk behaviors” as outcome measures.
Thus, in order to address barriers faced by youth as they age out of the foster care system as well as address the variables articulated by the FCIA, sensitive questions are critical for this study. There are several sections in the questionnaire that may be considered sensitive items. We address below each of these and provide justification for asking about each.
In order to minimize potential risk to youth, we have included widely used questions from other surveys of youth and the foster care population. Many have received prior OMB approval or approval from state IRBs. Respondents will be told that they may decline to answer any question. For the following sections the youth will answer with audio computer assisted self interviewing (ACASI). In addition, as mentioned earlier, to safeguard youths’ privacy, we have confidentiality certificates that all evaluation staff must sign (See Appendix H).
Sexual Activity (See pages 104-107 of the questionnaire.)
We will analyze sexual behavior questions to assess the extent to which youth are putting themselves at risk of pregnancy and/or sexually transmitted infections, key outcomes of concern to Congress and program administrators. Sexual behaviors themselves can be seen as an intermediate outcome of interest, related to key family formation and physical health outcomes. Results from a number of different surveys indicate that a significant proportion of adolescents between the ages of 13 and 17 report that they are sexually active. The level of sexual activity and contraceptive use are important indicators of whether young people reach higher levels of educational and occupational attainment, and there should be significant congruence between anticipated life goals, sexual activity, and its associated outcomes.
Antisocial Behavior (See pages 99-102 and 115-124 of the questionnaire.)
By antisocial behavior we mean delinquency, criminal activity, and alcohol and drug use. The educational and labor force trajectory of adolescents is strongly affected by their involvement in delinquent and risk-taking behaviors.
Crime and delinquency. This section will capture relatively serious externalizing behavior, including risk behaviors that are key outcomes of concern to Congress and program administrators. We will also assess whether youth have been involved with the justice system. Prior research has found a high level of involvement of this population, particularly males, with the justice system. Delinquent and externalizing behaviors are an intermediate outcome, related to key long-term outcomes such as incarceration and employment.
Substance use. Substance use will be an important outcome to measure related to adult physical health, as well as a moderating factor affecting education and employment. In the baseline we will establish current usage and intensity of use of alcohol, marijuana, cocaine, and other street drugs. For each of these we will measure age of initiation, an important indicator of long-term outcomes. For alcohol, we will measure binge drinking, an early form of alcohol abuse. For a larger set of drugs, including many commonly used among the young (e.g., Ecstasy, inhalants), we will collect any usage in the past twelve months. We also will ask if they have received any treatment for an alcohol or drug problem and where it was received.
Religion (See page 50 of the questionnaire.)
Religion and spirituality are an important part of life for a majority of Americans. Belief systems affect a wide variety of outcomes relevant to labor market participation, ranging from the type and intensity of work and career orientations, to labor force participation and other economic outcomes that influence social and economic mobility. Religious denomination and frequency of attendance also indirectly affect key long-term outcomes through their impact on other dimensions of individual lives.
Income, Assets, and Program Participation (See pages 125-130 of the questionnaire.)
One of the most important outcomes is the youth’s economic well-being. Employment and income are key outcomes specified in the FCIA legislation. Asset development is also important for the successful transition to adulthood. Whether these youth can support themselves and avoid economic hardship is critical. We will measure basic types of income such as labor earnings, and also seek information on income gained from illegal activities and/or the underground economy. We will also ask questions about receipt of various government transfers (e.g. TANF, Food Stamps, WIC). Questions on government transfers will not be asked until the youth is 18.
Foster youth are not expected to have much in the way of assets. However, successful transition into independent living will involve learning to save and accumulate some wealth. For youth 18 and older we will ask about amounts of commonly held assets including checking accounts, savings accounts, and vehicles. We will also ask about one type of liability, balances on credit cards.
Economic Hardship (See pages 131-138 of the questionnaire.)
Measuring homelessness is also identified in the FCIA legislation. We will attempt to capture a detailed description of where the youth has been living since leaving foster care. Running away and unstable housing are also moderators that put youth at greater risk of homelessness as well as other negative outcomes. Food security is an important outcome measure indicating how well the youth is coping with living independently. Economic hardship may show itself in other forms, as well, such as problems paying utility and other bills. Most of the questions in this section will not be asked of the youth until they are age 18.
Victimization (See pages 108-114 of the questionnaire.)
Foster care youth generally entered the child welfare system as a result of abuse or neglect. This section covers a range of victimization by caregivers and others before the youth’s first foster care placement, including neglect, physical abuse, and sexual abuse. It also covers criminal victimization such as robbery and battery.
Assessing prior victimization is important for two reasons. First, youths’ history of victimization may be related to their ability to achieve self-sufficiency outcomes, serving as a mediating influence. In this sense, it may be a predictor of later outcomes. Second, the ability to avoid victimization is an important outcome of interest. Assessment of victimization is problematic for youth who are still in out-of-home care. We will assess victimization that took place prior to out-of-home placement because evidence suggests that this will be a more common experience than victimization while in care. This approach also minimizes the prospect of disclosure of current events requiring reporting to child abuse and neglect authorities.
Caseworker Survey. Questions contained in the caseworker survey are not of a sensitive nature.
Program Site Visits. Questions contained in the site visit protocols are not of a sensitive nature.
A12. Estimates of respondent burden
The proposed data collection effort includes three groups of respondents: (1) youth receiving the treatment and youth in a control group who both will respond to an in-person survey; (2) caseworkers who will respond to a web-based survey; and (3) ILP and child welfare administrators, staff and youth in the selected program sites who will participate in semi-structured interviews and focus groups.
Exhibit 2 provides total burden estimates, broken out by the number of respondents and response times for each respondent group. The youth survey averages 90 minutes. The caseworker survey averages 60 minutes per worker. Individual and group interviews and focus groups will be scheduled for 60 minutes.
Exhibit 2
Annual Respondent Burden

Instrument | Number of Respondents | Number of Responses Per Respondent | Average Burden Hours Per Response | Total Burden (hours)

Ongoing Study Sites

Baseline:
Youth interview | 98 | 1 | 1.5 | 147
Caseworker survey | 4 | 19 | .5 | 38

First Follow Up:
Youth interview | 177 | 1 | 1.5 | 265.5
Caseworker survey | 4 | 36 | .5 | 72
Program site visit | 50 | 1 | 1.5 | 75

Second Follow Up:
Youth interview | 370 | 1 | 1.5 | 555

New (5th) Study Site

Baseline:
Youth interview | 250 | 1 | 1.5 | 375
Program site visit | 80 | 1 | 1.5 | 120

First Follow Up:
Youth interview | 213 | 1 | 1.5 | 319.5
Program site visit | 50 | 1 | 1.5 | 75

Second Follow Up:
Youth interview | 200 | 1 | 1.5 | 300

Estimated Total Burden Hours | 2,342
Estimated Annual Burden Hours (average over 3 years) | 780
Total number of youth responding to the youth survey assumes some attrition—10% attrition for LA County ILP, 15% attrition for the other three program sites—for each subsequent round of data collection. The lower attrition rate for the LA County ILP is due to the relatively short duration of the program—a five-week period. Youth served by the other programs are served for much longer periods of time allowing for greater attrition.5
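As an arithmetic check, the totals in Exhibit 2 can be reproduced directly from its rows. The short sketch below (Python, for illustration) recomputes the 2,342 total burden hours and the 780 annual average:

# Burden arithmetic behind Exhibit 2: respondents x responses x hours per
# response, summed over instruments, then averaged over the 3-year period.
rows = [
    # (instrument, respondents, responses per respondent, hours per response)
    ("Ongoing baseline youth interview",       98, 1,  1.5),
    ("Ongoing baseline caseworker survey",      4, 19, 0.5),
    ("Ongoing follow-up 1 youth interview",   177, 1,  1.5),
    ("Ongoing follow-up 1 caseworker survey",   4, 36, 0.5),
    ("Ongoing follow-up 1 program site visit", 50, 1,  1.5),
    ("Ongoing follow-up 2 youth interview",   370, 1,  1.5),
    ("New site baseline youth interview",     250, 1,  1.5),
    ("New site baseline program site visit",   80, 1,  1.5),
    ("New site follow-up 1 youth interview",  213, 1,  1.5),
    ("New site follow-up 1 program site visit", 50, 1, 1.5),
    ("New site follow-up 2 youth interview",  200, 1,  1.5),
]
total = sum(n * k * h for _, n, k, h in rows)
print(total)           # 2342.0 total burden hours
print(int(total / 3))  # 780 annual burden hours (average over 3 years)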
A13. Total annual cost burden to respondents
There is no start-up cost incurred by survey respondents, nor any ongoing actual financial cost.
A14. Estimates of annualized costs to the federal government
The estimated cost to the government (over the 8-year project period) of completing the existing data collection and expanding to another program site is $9,271,243. A task-by-task budget, which includes the budgets for the youth survey, caseworker survey, program site visits, and data analysis and reporting, is outlined in Exhibit 3.
A15. Explanation of reasons for program changes or adjustments (in Items A13 and A14)
Exhibit 2 estimates respondent burden for data collection activities that remain to be completed in the four ongoing study sites and for data collection activities in the fifth study site. The estimated annual burden is 780 hours (a change of -3,020 hours from the current inventory of 3,800 hours). The annual number of responses is estimated to be 569, a change of -3,931 from the current inventory of 4,500.
A16. Tabulations, Statistical Analyses, and Publication Plans
Tabulations and Statistical Analyses
Figures 1 and 2 provide the framework for analysis for both the impact and process evaluations. The youth survey and worker survey provide data for the impact evaluation. Process evaluation data will be collected during program site visits conducted during the first and third years in which the study takes place at each site.
Process Component. The process analysis plays a key role in documenting the nature of the interventions, interpreting the findings of the impact analysis, as well as suggesting directions for refining the analysis of the outcome study. The proposed approach is based on a formal process analysis framework refined through numerous studies conducted by the Urban Institute over the years. Process analysis, as it has evolved in the methodological literature and as part of program evaluations, examines how and why policies are carried out in a certain way. The intent is to understand the factors that influence the way programs are structured, organized, and managed, and what effects program operations, decisions, and management have on outcomes. These two related types of knowledge are then used to identify the consequences of implementing policies or programs in various ways or under various economic, political, or organizational situations. Such information is also used to provide recommendations for improving existing programs or transferring program concepts and designs to other sites.
Process data analysis will seek to address the key themes outlined in the conceptual framework (Figure 1). These include how organizational issues related to the child welfare and independent living service delivery system may have affected the intervention, how the services youth received varied, and how contextual factors outside of the child welfare system affected the intervention or the outcomes youth achieved. In addition, analysis will document differences in opinions voiced by staff and youth. While each site in the study and its intervention may be unique, the process data analysis will examine themes across sites as appropriate.
Implementation of the interventions must be viewed as occurring at multiple levels. Moreover, actions that occur at one level affect those that occur at other levels. We have identified several levels at which decisions will be made, and which therefore need to be studied to understand implementation. These levels, ranging from macro-level federal changes to more micro-level youth characteristics, all provide information on the environments of the interventions, the mechanics of implementation, and resulting effects at all levels of policy and practice.
All information collected through the process analysis will be coded and entered into a qualitative content analysis database (using NUD*IST software). Coding text will allow us to pull together disparate information on related topics for analysis, both within and across sites. Whether we collect data from interviews, focus groups, document review, or observations, similar information will be coded the same and will be analyzed together. The process study will also use descriptive statistics to identify patterns and trends in quantitative data on resources, staffing, activities, and the frequency, duration, intensity, and other characteristics of services provided during the study period.
[Figure 2: ILP Casework, Education, Housing, and Economic Security Programs (Los Angeles County ESTEP, Kern County Employment, Massachusetts Adolescent Outreach, and Oakland First Place Fund)]
Impact Component. The overall goal of the impact evaluation is to compare, within each of the five sites, the effectiveness of an experimental, innovative ILP service (or package of services) with a "standard" service (or package of services). Our conceptual framework for the impact evaluation is illustrated in Figure 2. The evaluation will employ randomized field trial designs in each site, with pre-tests and multiple post-tests, in which youth will be followed over time. As discussed earlier in Section A.2, the five programs selected represent a range of program types. The Los Angeles Life Skills Training program represents a curriculum-based life skills program. The four remaining programs are casework, employment, housing, or education based. We will randomly assign cases to either the treatment or control group prior to the beginning of treatment, with random assignment assuring that the groups are stochastically equivalent in the distribution of predispositions to various outcomes. We can then compare the groups on outcomes with a much reduced chance that differences are due to selection factors or dynamics other than the effects of the service. We will take account of random differences between groups in the statistical analysis of outcomes. We will measure outcome variables both before and after service, allowing for the comparison of groups on amount of change (differences between baseline and post-treatment measures usually give somewhat more sensitive measures of the effects of service than comparisons of post-treatment status alone, since they "control" for baseline differences). Since it is hoped that the effects of the independent living services last beyond the end of programs, we will follow up with youth at points beyond the end of treatment service provision.6
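The logic of this design can be illustrated with a brief simulation. The sketch below (Python, with entirely hypothetical sample sizes and effect sizes) randomizes referred youth at intake and compares groups on change scores:

# Illustrative simulation of the design logic described above: randomize
# referred youth to treatment or control, then compare groups on change
# scores (follow-up minus baseline) rather than follow-up status alone.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of referred youth in one site

treated = rng.permutation(n) < n // 2   # random assignment at referral
baseline = rng.normal(50, 10, n)        # pre-treatment outcome measure
effect = 5.0                            # hypothetical true program effect
followup = baseline + rng.normal(0, 5, n) + effect * treated

change = followup - baseline
impact = change[treated].mean() - change[~treated].mean()
print(f"Estimated impact on change scores: {impact:.2f}")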
All analyses will be initially undertaken for each site separately. Later analyses may combine sites in which there are similar programs and the data lend themselves to such analyses. The analysis of quantitative data will proceed in three steps:
Description of the youth.
Description of the services provided to individual youth.
Assessment of impacts.
Descriptions of youth
Within each site, the youth will be described in terms of their characteristics at the time of random assignment. The description will include demographic characteristics, educational attainment, employment history, child welfare histories, other social service history, the problems they face, and their aspirations. Since it is expected that random assignment will produce experimental groups that are similar on these characteristics at the time of random assignment, the groups will be combined for this description. However, random assignment is likely to result in groups that differ significantly on at least some variables (simply by chance), so analyses of differences on these characteristics will be conducted and significant differences reported. Data for these analyses will come from initial interviews with youth, the initial data collection from workers, and administrative data. The analyses will involve frequency distributions, means, crosstabulations, and comparisons of means.
Analyses of services provided
An analysis of services provided to the treatment and control groups will be undertaken to determine whether the groups differ on levels and kinds of services provided and whether the services provided to the treatment group conform to the model of service in the site. We will also be interested in examining the services youth receive after the end of the ILP programs of interest in the experiment. One outcome of program participation may be increased connections to other services. Data for the services analysis will come from interviews with youth at the two follow-up points, the surveys of workers at the follow-up points, and administrative data. Analyses will involve both the examination of individual services and scales or counts of groups of individual services. Amounts of service will be determined in several ways, depending on the character of the service: the length of time the service was provided, numbers of times the service was provided, and the amount of time devoted to the service (e.g., hours). Analyses will involve crosstabulations of experimental group with receipt of particular services and comparisons of means where services are measured on equal interval scales.
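For illustration, the two kinds of service analyses described above (crosstabulations of experimental group by service receipt, and comparisons of mean amounts of service) might be computed as follows; all data and variable names are hypothetical:

# Sketch of the service-receipt analyses: a crosstab of experimental group
# by receipt of a particular service, and a comparison of mean hours of
# service where amounts are measured on an equal interval scale.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group": ["treatment"] * 4 + ["control"] * 4,   # assignment at baseline
    "got_life_skills": [1, 1, 1, 0, 0, 1, 0, 0],    # any receipt of the service
    "tutoring_hours": [10.0, 8.0, 12.0, 0.0, 2.0, 3.0, 0.0, 1.0],
})

print(pd.crosstab(df["group"], df["got_life_skills"]))  # receipt by group

t = df.loc[df.group == "treatment", "tutoring_hours"]
c = df.loc[df.group == "control", "tutoring_hours"]
print(stats.ttest_ind(t, c))                            # comparison of means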
Analyses of impacts
The site-specific (i.e., within site) analyses to address each outcome research question listed previously will be conducted in stages, initially using an “intent-to-treat” approach to random assignment. Thus, youth in each condition will be retained in the longitudinal analyses based on their initial group assignment, without regard to the amount of ILP services received during the study period. Data for these analyses will come from interviews with youth, the surveys of workers, and administrative data.
First, we will assess distributions of the outcome measures and important correlates, by experimental group. These analyses will allow us to check for outliers and assess the validity of distributional assumptions underlying some of the proposed statistical techniques. We will also assess attrition at each follow-up point and make preliminary estimates of its effects. We will then move to a series of bivariate and multivariate analyses designed to answer the impact research questions for each site.
Primary Analysis. First, a series of bivariate analyses will be conducted to determine whether there is a statistically significant “main effect” for treatment (treatment vs. control) for each of the key self-sufficiency and well-being outcomes at both follow-ups—independent of youth demographic, personal history, and psychosocial variables. Additional bivariate analyses will assess the “main effects” of salient demographic factors (e.g., gender), personal history, and psychosocial variables—each independent of all other variables including treatment vs. control group assignment. Analyses of covariance (ANCOVA) will then be employed to examine jointly the effects of experimental group membership and important covariates. Separate analyses will be conducted on data from each of the follow-up data collections. Analyses will be either ordinary least squares or logistic regressions, as appropriate.
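To make the ANCOVA step concrete, the following sketch fits the treatment effect adjusted for the baseline measure of the outcome and selected covariates, with a logistic analogue for a binary outcome. It is illustrative only: the variable and file names (e.g., earnings_f1, treatment, site_youth.csv) are hypothetical stand-ins for the study's actual analysis files, and the sketch uses the open-source statsmodels package rather than any particular software adopted by the evaluation team.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file: one row per youth, with baseline and
    # first follow-up measures plus a 0/1 treatment-assignment indicator.
    df = pd.read_csv("site_youth.csv")

    # ANCOVA for an equal-interval outcome: treatment effect on follow-up
    # earnings, adjusting for the baseline measure and demographic covariates.
    ols_fit = smf.ols("earnings_f1 ~ treatment + earnings_base + age + female",
                      data=df).fit()
    print(ols_fit.summary())

    # Logistic analogue for a binary outcome (employed at first follow-up).
    logit_fit = smf.logit("employed_f1 ~ treatment + employed_base + age + female",
                          data=df).fit()
    print(logit_fit.summary())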
These separate analyses of the two waves of follow-up data will result in an inflation of the Type I error rate (i.e., the probability of asserting that a difference exists when it does not exist in a population). Consequently, our major interest will be outcomes and changes in outcomes at the final follow-up. Decision levels for the statistical tests conducted at the first and second (final) follow-up could be adjusted to deal with this problem. However, a better way to pursue the analysis of outcomes over time, while controlling for Type I error, may be to employ a repeated measures MANOVA involving all three waves of survey data in a single analysis. The repeated measures model is likely to be less powerful than the ANCOVA model, but will have the advantage of being able to accommodate all three waves of survey data simultaneously. In a simple repeated measures analysis, three hypotheses may be tested: differences across time in the combined groups, differences between groups in average levels across time, and the interaction between time and experimental group membership. The primary interest is in the interaction hypothesis (whether groups differ in the shapes of the curves representing levels of the outcome variables over time). We may also employ multivariate repeated measures analyses, involving more than one dependent variable (allowing for even more control over Type I error that arises from examining multiple outcomes).
Supporting Analysis. Repeated measures MANOVA approaches will also be used to assess the effects of the hypothesized classes of treatment, moderating, and mediating variables on key self-sufficiency outcomes. For the focus on “What works and for whom?” the potential moderating effects of demographics, personal history, or baseline risk status will be tested by adding terms to the models to capture the interaction between experimental condition and other variables, if warranted by initial analyses of contingency tables.
Other analyses may be used to examine change over time. Since there will be three measures for some of our outcome variables, hierarchical linear modeling (HLM) would provide growth curve analyses and piece-wise regressions determining the effects of multiple independent variables on change. HLM is also more tolerant of missing data than repeated measures analysis.
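As a sketch of how such a growth-curve model might be specified (variable names hypothetical; the mixed-effects routine in statsmodels stands in here for dedicated HLM software), each youth receives a random intercept and slope, and the time-by-treatment interaction estimates differences in trajectories. Because the model uses whichever waves a youth completed, youth missing a wave still contribute their observed rows:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format file: one row per youth per wave (time = 0, 1, 2).
    long_df = pd.read_csv("site_youth_long.csv")

    # Random intercept and slope for each youth; the time:treatment term is
    # the growth-curve analogue of the time-by-group interaction hypothesis.
    hlm_fit = smf.mixedlm("outcome ~ time * treatment", data=long_df,
                          groups=long_df["youth_id"], re_formula="~time").fit()
    print(hlm_fit.summary())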
With regard to the analysis of effects of variations in kind and amount of services provided, it should be noted that we will encounter a significant selection problem: there will be reasons for the provision of more or fewer services, and these will not be random. Hence, it would be useful to model the selection through techniques such as two-stage least squares and its generalization to categorical data, on the assumption that adequate instruments can be found in the dataset for that modeling.
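A minimal two-stage least squares sketch, assuming a plausible instrument can be found in the data (the distance_to_provider variable below is hypothetical, as are the other names), might look like the following. A production analysis would use a dedicated IV estimator, since the second-stage standard errors from a hand-rolled regression are not corrected for the generated regressor.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("site_youth.csv")  # hypothetical analysis file

    # Stage 1: predict amount of service from the instrument and exogenous covariates.
    stage1 = smf.ols("service_hours ~ distance_to_provider + age + female", data=df).fit()
    df["service_hours_hat"] = stage1.fittedvalues

    # Stage 2: regress the outcome on predicted, rather than actual, service hours.
    stage2 = smf.ols("earnings_f1 ~ service_hours_hat + age + female", data=df).fit()
    print(stage2.summary())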
Finally, it will be important to conduct a number of other analyses. The “intent to treat” framework is a strict one, since it includes in the treatment group cases that do not receive the service or receive small amounts of service and in the control group cases that do receive the service. This will appear unduly harsh to program officials and practitioners, so we will conduct analyses using corrections for selection into services, recognizing that those analyses are probes of the data and are not as rigorous as our primary analyses. Sensitivity analyses will also be pursued, to test the possible effects of violations of the experimental design. For example, we will make assumptions about how violations and minimal service cases might have turned out if they had received the intended treatment, seeing how the treatment and control groups would have differed under those assumptions.
Power Analysis. The power of a study design is the probability of detecting a “real” difference between two (or more) groups on outcomes of interest. Power is a function of several variables or factors: the analytic technique, sample size in each group, Type I error rate (the probability of asserting that a difference exists when it does not exist in a population), directionality of the hypothesis, and effect size (the difference in the average values of the outcome in each group, divided by the common variability around the average values [standard deviation] of the groups). Traditionally for experiments in the behavioral sciences, the Type I error rate is set at 5% (i.e., alpha=.05). The effect size that one wants to be able to detect must be designated, based on one's knowledge of the interventions and what can be reasonably expected of them. Cohen (1977) designates an effect size (ES) of .2 as "small" and .5 as "medium."7 Finally, Type I error rates can be based on either a directional hypothesis (i.e., the hypothesis that the treatment group does better than the control group) or a non-directional hypothesis (i.e., the hypothesis that the groups differ, either the treatment or control group doing better).
Sample size (and, to a lesser extent, allocation proportions in the groups) determines the "power" of the experiment to detect differences between groups in outcomes, given a particular analytic technique, alpha level, and effect size. The bigger the sample, the more likely real effects (if they exist) will be found. We anticipate a sample size of 250 per site, with 125 youth in each group, in three sites, and 450 per site, with 225 youth in each group, in the other two sites. We expect 10% attrition at the time of the first interview, so we will randomly assign a sample 1/9th larger than that desired for the first interview. (Sample selection is described further in Section B1.) Furthermore, we are assuming an average sample loss of 15% at the first follow-up and another 5% of the baseline population at the final follow-up interview, so that the final interview sample size is anticipated to be 80% of the baseline.8
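The sample-flow arithmetic can be verified directly (figures taken from the paragraph above; the 250-per-site target is used for illustration):

    import math

    desired_baseline = 250                        # completed baseline interviews per site
    assigned = math.ceil(desired_baseline / 0.9)  # 10% pre-baseline loss -> assign 278
    first_followup = desired_baseline * 0.85      # 15% of baseline lost -> about 212
    final_followup = desired_baseline * 0.80      # another 5% of baseline lost -> 200
    print(assigned, first_followup, final_followup)  # 278 212.5 200.0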
Our power calculations are also based on a directional hypothesis (i.e., alpha=.05, one-tailed). For simplicity, our power analysis is based on an independent t-test for differences in means (such as might be used for equal-interval variables, like wages) and a difference-in-proportions test (e.g., differences in numbers of youth employed). The powers of t-tests for directional hypotheses (alpha=.05) for various effect sizes are shown in Exhibit 4.9
Exhibit 4
Power For Comparisons Between Means (alpha=0.05, directional)
Sample size,      Effect size
each group      .1      .2      .3      .4      .5
225             .28     .68     .94     .99     .99
200             .26     .64     .91     .99     .99
175             .24     .59     .88     .98     .99
150             .22     .53     .83     .97     .99
125             .20     .47     .76     .93     .99
100             .17     .41     .68     .88     .97
For subgroups of:
 80             .16     .35     .60     .81     .93
 50             .13     .26     .44     .63     .80
 40             .11     .22     .38     .55     .72
As can be seen, adequate power, usually considered to be .8 or higher, is reached for effect sizes of .3 and higher for samples of 150-225. Samples of 100 or more achieve adequate power for effect sizes of .4 or higher.
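The entries in Exhibit 4 can be reproduced with standard power routines. For example, the following check (which uses the open-source statsmodels package for illustration, not software specific to this study) recovers the tabled value for 125 youth per group at an effect size of .3:

    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower().power(effect_size=0.3,       # Cohen's d
                                  nobs1=125,             # youth per group
                                  alpha=0.05,
                                  alternative="larger")  # directional hypothesis
    print(round(power, 2))  # about .76, matching Exhibit 4 at n=125, ES=.3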
The power to detect differences in percentages is shown in Exhibit 5. Since the power of differences in proportions also depends on where along the continuum between 0% and 100% the difference occurs, three possibilities are shown, for differences centered on 15%, 30%, and 50% (because power is symmetrical around 50%, a difference between 80% and 90% is the same as that between 20% and 10%). For each of these possibilities, the power is shown for each of two magnitudes of difference, 20% and 10%, using directional tests of hypotheses with alpha=.05:
Exhibit 5
Power For Comparisons Between Proportions (alpha=0.05, directional)
Sample size,      Percentages in each group
each group      20% differences                      10% differences
                5 vs. 25   20 vs. 40   40 vs. 60     10 vs. 20   25 vs. 35   45 vs. 55
225             .99        .99         .99           .91         .75         .69
200             .99        .99         .99           .88         .71         .64
175             .99        .99         .99           .84         .66         .59
150             .99        .99         .97           .79         .60         .54
125             .99        .97         .94           .72         .54         .48
100             .99        .94         .89           .64         .46         .41
For subgroups of:
 80             .98        .88         .83           .56         .40         .35
 50             .90        .72         .65           .41         .29         .26
 40             .83        .64         .57           .35         .25         .23
As can be seen, adequate power (.8 or higher) is reached for all of the 20-point differences with sample sizes of 80 or more. Among the 10-point differences, only those at the low end of the distribution (10% vs. 20%) are adequately detected, and then only with samples of roughly 150 or more; 10-point differences nearer the middle of the distribution fall short of adequate power even at the largest sample sizes shown.
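As with Exhibit 4, these entries can be checked against standard routines. The following sketch (again using statsmodels for illustration) recovers the tabled power for a 40% vs. 60% difference with 150 youth per group:

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    es = proportion_effectsize(0.60, 0.40)  # Cohen's h for the 20-point difference
    power = NormalIndPower().power(es, nobs1=150, alpha=0.05, alternative="larger")
    print(round(power, 2))  # about .97, matching Exhibit 5 at n=150, 40 vs. 60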
It is also desirable to be able to examine the effects of services on subgroups of youth, so we have shown in the above tables the power of tests for representative subgroup sizes. Of course, the power for samples of this size is lower, such that only large effects are likely to be detected.10
Publication plans. Exhibit 6 provides the schedule for data collection and final report preparation. As shown, interim reports will be completed by December 2009, and the final reports, compiling findings from all study components for each of the program sites, will be completed by December 2010.
Exhibit 6
Data Collection Time Schedule

Task                    Date
Youth Survey            August 2003 – August 2010
Caseworker Survey       August 2003 – March 2008
Program Site Visits     August 2003 – January 2008
Interim Reports         May 2006 – December 2009
Final Reports           December 2007 – December 2010
Interim reports. Interim reports will be developed that contain background information on the program site, including a description of the independent living program and its objectives, and a discussion of the objectives of the evaluation and the research questions. These reports will contain a description of the study design, including measures and data collection methods.
The results of analysis of the process data will comprise a major part of the interim reports. We will provide a thorough description of site contexts and interventions. Discussion will lay out the program theories and planned activities (“theories of change”) and indicate how actual program operations compare to intent. We will discuss how any discrepancies between intent and reality may affect program outcomes. Issues in local implementation, including relationships between the public agency and private providers and among private providers, as well as other issues in the local social service system will be identified.
The reports will include a description of the youth served (based on first round interviews with youth and workers and administrative data) and how that compares to the population of youth in the site. The process of identifying youth for programs will be described and issues in targeting identified.
Impact analysis reports will include comparisons between treatment and control groups. Analyses as described above will be reported. Comparisons will first be done of differences between groups in services provided, to check assumptions about differential activities and program integrity. Comparisons of the groups on outcomes will be made, with refinements introducing control covariates and subgroup analyses. The reports will include a discussion of threats to internal validity and of effects of the experiment on program operation.
All of the analyses will be reported separately by site. Integration of the site analyses will be primarily qualitative, although we will explore analytic integration of some data. The interim reports will contain a thorough discussion of the implications of the results to date including the extent to which they are generalizable. As appropriate, we will make preliminary recommendations for policy, program planning, and practice.
Final reports. Final reports will cover all of the topics included in the interim reports, updated through the end of the data collection. These reports will cover the findings of all aspects of the project in each site. As such, they will provide a definitive description of program operations and impact. They will also include a discussion of other relevant evaluations of independent living programs and how the results of those studies compare to ours.
The final reports will discuss somewhat more extensively issues of threats to both internal validity and generalizability, detailing sensitivity analyses testing the extent to which any violations of design specifications are likely to have affected the results. The final reports will revise and extend the interpretations of the results presented in the interim reports. Our recommendations for policy, program, and practice at this point will be more extensive than in the interim reports, drawing on the full range of the extensive data collected in this study, as well as other information available (from other evaluations, etc.).
Briefings. Briefings on the final report will be held in Washington for HHS staff and other audiences as requested by HHS. The final report will be available in printed form and on a government website. The Urban Institute and Chapin Hall websites will have links to the report. A range of dissemination activities will be undertaken, including the production of brochures containing brief summaries for various audiences and press kits drawing attention to the major findings and suggesting story topics. Senior members of the evaluation team will be available to the media and legislative committees. Special attention will be paid to informing the media and public officials in the study sites.
A17. Approval to not display the OMB expiration date
The OMB approval number and expiration date will be displayed on all survey instruments and discussion guides.
A18. Explanation of each exception to certification for Paperwork Reduction Act submissions
There are no requested exceptions to the certification in Item 19, “Certification for Paperwork Reduction Act Submissions,” of OMB Form 83-I.
References
Collins, M. E. (2001). “Transition to adulthood for vulnerable youths: A review of research and implications for policy.” Social Service Review 75(2): 271-291.
Cook, R., Fleishman, E., & Grimes, V. (1991). A national evaluation of title IV-E foster care independent living programs for youth: Phase 2. Final Report, Volume 1. Rockville, MD: Westat, Inc.
Courtney, M. E., Piliavin, I., Grogan-Kaylor, A., & Nesmith, A. (2001). “Foster youth transitions to adulthood: A longitudinal view of youth leaving care.” Child Welfare 80(6): 685-717.
Dworsky, A., & Courtney, M. E. (2001). Self-sufficiency of former foster youth in Wisconsin: Analysis of unemployment insurance wage data and public assistance data. Madison, WI: University of Wisconsin, Institute for Research on Poverty. Available: http://aspe.os.dhhs.gov/hsp/fosteryouthWI00/index.htm.
Festinger, T. (1983). No one ever asked us: A postscript to foster care. New York: Columbia University Press.
Goerge, R. M., Bilaver, L., Lee, B. J., Needell, B., & Brookhart, A. (2001). Employment outcomes for youth aging out of foster care. Chicago, IL: Chapin Hall Center for Children at the University of Chicago.
McDonald, T., Allen, R., Westerfelt, A., & Piliavin, I. (1996). Assessing the long-term effects of foster care: A research synthesis. Washington, DC: Child Welfare League of America.
B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS
B1. Description of the potential respondent universe and sampling
Data for the impact evaluation will come primarily from interviews with youth and will be informed by the process study data collection including the information collected from caseworkers or other workers knowledgeable about the youth's functioning and services provided, as well as any available administrative data on services and functioning. As indicated above, three interviews using a single instrument are planned with youth. The baseline interview will provide basic demographic data, information about the youth's past and present involvement with the child welfare system, baseline data on outcome measures, and information about positive youth development. The follow-up interviews will elicit data on services provided and on current status of outcome measures.
The evaluation will involve a randomized experimental design to assess program impacts. The design assumes that there will be a sufficient number of youth who are eligible for services but not receiving them to form randomly selected treatment and control groups at each of the five selected sites. In other words, some form of rationing process is already in effect at the evaluation sites. The procedure we propose will replace the existing rationing process with a random one, in which youth are randomly assigned to a “treatment” or a “services as usual” control group. The flow of foster youth into independent living services at any given site is generally not large enough to warrant sampling; rather, the entire universe of youth entering the programs during the intake period will be included in the evaluation. The time allowed for intake will vary across sites so that appropriate numbers are achieved.
In the Life Skills Training site, we randomly assigned 598 youth to either a treatment or control group, of which 482 were deemed in scope. Of these, we interviewed 467 at baseline (a response rate of 97%) and 427 at the first follow-up (a retention rate of 91%). In the ESTEP site, we randomly assigned 529 youth, of which 466 were deemed in scope. Of these, we interviewed 445 at baseline (a response rate of 95%) and 417 at the first follow-up (a retention rate of 94%). In each of the Massachusetts, Kern County, and Oakland, California sites, we plan to randomly assign 278 youth to treatment and control groups, expecting to complete baseline interviews with 125 youth in each experimental group. Thus, a total of approximately 1,650 interviews will be completed across the five program sites.
To make the assignments, we will rely on NORC’s computerized random assignment system, which randomly assigns individual cases to conditions almost instantly.11 The system, which employs a “random number generator,” assigns individual cases on a case-by-case basis. The random number returned by the system is automatically converted into a group assignment number. As part of the process, it keeps a running record of the number of assignments made and notifies the operator when the designated sample size for the design has been reached.
As part of the assignment procedure, the system generates a record for each case that includes a master NORC identification number, a group assignment indicator, a date stamp for the time of assignment, and any background information supplied to the system by the operator. Once generated, the case record may be automatically fed into NORC’s central case management system and used for locating, tracking, mailing, collecting additional information, and other survey purposes. The random assignment program will also check for duplication of cases. We have found that workers or screeners will try again if the case is assigned to the control group. The system will detect if a case has been previously assigned and reject the current entry. In these situations, the site representative will be notified for further action.
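The assignment logic described above (duplicate rejection, a running count against the designated sample size, and a time-stamped case record) can be illustrated with a minimal sketch. This is not NORC's actual system, whose internals are not documented here; the identifiers and target below are hypothetical:

    import random
    from datetime import datetime

    TARGET = 278   # designated sample size for the site (illustrative)
    assigned = {}  # case records keyed by a site-supplied youth identifier

    def assign(youth_id, background):
        """Randomly assign one case, rejecting duplicates and enforcing the target."""
        if youth_id in assigned:
            raise ValueError(youth_id + " was already assigned; notify the site representative")
        if len(assigned) >= TARGET:
            raise RuntimeError("designated sample size reached; no new referrals needed")
        group = "treatment" if random.random() < 0.5 else "control"
        assigned[youth_id] = {"group": group,
                              "assigned_at": datetime.now().isoformat(),
                              "background": background}
        return group

    print(assign("Y-0001", {"age": 17, "site": "Kern County"}))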
A statistical assistant will be available to operate the system and assign youth to experimental groups upon request until a sufficient number of cases have been assigned to the treatment and control conditions in each of the five evaluation sites. The assignment procedure we propose assumes that each program evaluation site will designate an individual authorized to screen potential referrals for the evaluation (referred to here as the "screener"). When a screener wishes to obtain an assignment (treatment or services as usual) for a youth, he or she will contact the statistical assistant by calling a toll-free number at NORC. The statistical assistant will verify the youth’s eligibility for the program and collect basic background information on the youth from the screener (e.g., name, address, phone number, age, sex, responsible caseworker, site identification number) for purposes of conducting the interviews of youth and their caseworkers.
In the case of programs that offer services to youth before and after they leave the foster care program, it may be necessary to collect additional information to determine the youth's status in the program (e.g., a current foster child, about to be emancipated, no longer a foster child but still receiving services, and so on). During the call, all required information will be entered into the assignment system as the screener supplies it. When the system returns the youth’s assignment, the statistical assistant will inform the screener of the result. The result will automatically be fed into NORC’s case management system. At dial-up the Field Manager will be notified of the new assignment. The Field Manager will call the screener, usually the following day, and confirm the assignment. Should the screener not record the proper assignment, the Field Manager will notify the appropriate site representative from the research team. This procedure will also be followed if the Field Manager discovers a violation of the assignment at any time during the course of the evaluation.
Once a sufficient number of cases has been assigned in this way, the contact person at the evaluation site will be informed that new referrals are no longer needed for the evaluation study. In an effort to minimize the potential impact of the evaluation study on the referral process, the statistical assistant will not inform the screener of the number of referrals to date until the desired sample size has been reached.
As part of the survey, we will collect information on services received and rendered, as well as length of treatment, to assess whether the assignments and experimental conditions were implemented as intended. In addition to questions asked of the youth and their caseworkers, we will obtain lists of cases receiving treatment from all service providers and compare them against lists of cases assigned to the treatment and control conditions. In the analysis of these data, we will be looking for:
cases that were switched from one condition to another; for example, a control group case that received treatment services or vice versa (“violations”),
cases receiving services other than those intended (for example, a treatment case that received fewer services than other treatment cases, or a control case that received more than “services as usual”), and
cases that were referred for and received treatment services during the assignment phase of the evaluation period without going through the random assignment procedure (“exceptions”).
Once identified, we will collect as much information as possible on the exceptions to determine how they may differ from cases participating in the study. As a whole, this information will enable us to evaluate the integrity of the design and the extent to which exceptions may affect the generalizability of the results from the evaluation studies.
Caseworker survey. There is no sampling design employed for the caseworker survey. Workers will be selected to participate in the survey based on having a selected youth on their caseload.
B2. Procedures for collection of information
Youth Survey
In-person interviews with youth will be conducted by experienced NORC field staff using the youth survey instrument. NORC will assign a Field Project Manager to oversee all field activities across the five sites. A Field Manager will be assigned to each site to coordinate project activities with local program and child welfare agency staff and to supervise interviewing activities. The Field Manager will be local in order to provide a presence at the site and easy access for site personnel. He/she will develop and maintain a relationship with the key agency and program staff and work with the site evaluation representative, ensuring that study procedures are followed and that access to tracking information is provided.
Most youth will still be in foster care at baseline, making them relatively easy to find and interview. We will interview them wherever they feel most comfortable. We expect most will be interviewed in their foster homes; however, in other studies involving adolescents we find that interviews are frequently conducted in such places as libraries, coffee shops, and restaurants.
All rounds of data collection will be conducted primarily in-person using a Computer-Assisted Personal Interviewing (CAPI) application. Telephone interviewing will be used for follow-up interviews only if that is the only way to complete the interview. Most likely, we will attempt telephone interviews when youth have moved out of the local site area, allowing us to avoid incurring additional travel expenses. When the interview must be conducted by telephone, the interviewer to whom the case was originally assigned will conduct the interview and enter response data directly into the CAPI application during the interview.
Many NORC interviewers have had experience on projects dealing with special populations such as drug abusers, the homeless, prisoners, the elderly, and the terminally ill. They also have experience working with school authorities, government departments, welfare agencies, prisons, and drug treatment centers. This experience, together with experience working with adolescents, will form the basis for our hiring criteria for the project. As experienced interviewers, those selected will already have demonstrated such key characteristics as professional attitude, team orientation, and organizational skills. Considering the large number of studies using CAPI, they are also likely to have already gained experience administering CAPI interviews.
Field interviewers will be trained by the Field Manager supervising each site. All NORC Field Managers will have had project training experience. Since we intend to hire only experienced interviewers for this project, all interviewers will have previously undergone NORC general training as well as other project trainings. NORC requires all new interviewers to receive 7 1/2 hours of general interviewer training. This training covers such topics as approaching the respondent and gaining cooperation; confidentiality; motivating the respondent through privacy, pacing, professional level of rapport, non-judgmental responses, and control of the interview; probing techniques; preventing verbal and non-verbal bias; and interviewing with CAPI.
Ongoing training and fidelity checks go hand in hand. Each interviewer’s first two cases will be carefully edited and feedback immediately given to ensure that every detail and procedure covered in training has been included in their work. As the field period progresses, cases will be randomly selected to assess quality. In addition, features in the CAPI Program and CM-Field (NORC’s case management system) allow for constant quality checks during the field period. Completed questionnaires can be analyzed and Call Record insertions and the like can be monitored on a daily basis.
Locating tasks for the follow-up interviews will be substantial, as the youth population is likely to move numerous times throughout the evaluation timeline. As a result, special efforts will be made to keep track of the youth from their baseline interviews through their second follow-up interviews. At the baseline interview, the youth will be asked to provide their current contact information, as well as the contact information of two people who would be able to identify their whereabouts if contacted by NORC. We will collect any specific identifiers that might be useful in tracking the respondent, particularly their social security number. We will also employ other methods of locating respondents, such as Internet directory searches, Directory Assistance, and other information made available from the local program or the public agency that administers the local program. As youth move out of the foster care system, they are likely to make use of public services such as TANF and Medicaid. We will seek use of these administrative data for locating purposes. All of the locating information collected will be stored in the CAPI instrument and tracked using the CM-Field.
Caseworker Survey
The Urban Institute will implement the web-based survey of caseworkers. Caseworkers will be assigned passwords that will allow them to access and provide data only for the youth for whom they have been identified as the caseworker. While access to the web is not universal, we have found that the vast majority of child welfare workers do have access and prefer this type of survey over a phone or written one. For workers who do not have convenient web access, we will provide a paper and pencil version.
Workers can access the web-based survey at any time that is convenient to them. They can start completing the survey, stop if they do not have sufficient time, and then start up again when they have additional time. They can type in information of whatever length they desire, clarifying their responses. The survey will be linked directly to the database used to analyze the data, which will save cost and improve data quality. Workers will be able to e-mail questions or contact by telephone an Urban Institute researcher assigned to oversee the survey.
Program Site Visit Interview and Focus Groups
During the first phase of the evaluation at each study site, process evaluators will conduct on-site data gathering. In three sites, we expect that the interventions and the intake period will be long enough that we will need to conduct a follow-up visit to document changes in context as well as implementation. We will conduct interviews with a wide range of local officials and staff who are overseeing, participating in, or affected by the interventions. We plan to interview administrators from the public child welfare agencies, private agencies providing services to youth in the control and treatment groups, and other agencies that may refer youth for ILP services. Where appropriate, we will supplement the semi-structured interviews with focus groups of public and private agency front-line social services staff and youth.
The length of the site visits will depend largely upon the complexity of the site (e.g., how many different ILP providers there are and how many locations are included in the study) and the complexity of the intervention being studied (e.g., whether the program provides one main service or a host of services). We anticipate that a team of two researchers (with at least one senior researcher assigned to each site) will spend approximately eight days on site.
All interviews will be semi-structured in order to ensure that the same information is consistently gathered in different sites. Group interviews will also serve as the data gathering technique when interaction among respondents is desired; for example, to allow caseworkers to compare and contrast experiences. Separate interview protocols will be developed for each type of respondent or group interview. Protocols will be linked directly to the research questions and evaluation plan to ensure that all necessary information, and only that information, is collected. All interviews will be strictly confidential. Analysis reports will not identify any respondents by name.
In semi-structured interviews, child welfare administrators and program directors will be asked to address key policies and policy changes, financing, staffing, and interagency collaboration issues. They will document decision-making about the intervention, how decisions have been communicated to staff, and how implementation of the intervention is monitored. Interviews and focus groups of caseworkers (direct service and referring workers) will document factors influencing case-level practices and decisions, including acceptance of the intervention goals and procedures, changes in their caseload size and demand, availability of and access to needed services not provided directly, and the extent to which the ILP service delivery system has changed. We also will conduct focus groups with youth who are not part of the study population but are being served by the ILP program. These groups will explore the youths’ perceptions of their needs; their expectations for the future; the types of services or supports they have requested, been offered, and received; their motivation for accepting or not accepting services; and their satisfaction with services they did receive.
Officials outside of the child welfare system can provide important contextual information and a different perspective about factors that may be affecting the success of the intervention. We will interview a variety of community services providers, local advocates, members or representatives from other relevant task forces and planning boards, and other key local stakeholders.
Quality control in data collection, especially in the collection of qualitative information, is essential to ensure consistency in the types of information collected and the level of detail obtained. Steps will be taken prior to, during, and after site visits to ensure that staff collect consistent and high quality information. Prior to the site visits, all staff will be trained in the use of interview and observation protocols and how they relate to the major research questions we seek to address. While all staff proposed to conduct the field visits already have extensive fieldwork experience, this training ensures that there is consensus among research team members about the purpose and priorities of questions in the interview protocols. Role play and mock interviews will allow staff to practice interviewing and observation techniques.
At the beginning of each day on site, team members will review the information that each respondent is expected to provide, results of the background document review that are relevant, and the protocol that will be used for each interview. At the end of each day, team members will confer and identify the key information gained from each interview, questions that were not fully addressed, and how these questions may be addressed by other respondents. Staff will also critique their own performance, highlighting what worked well in interviews as well as techniques that may help improve questions that respondents may have had difficulty answering. Upon return from the field, staff will be expected to debrief other team members on key findings, insights or hypotheses they may have, and potential improvements to the interview protocols. Each site visit team will also be required to write a site visit summary report that summarizes the information collected. The director of fieldwork will review each report to ensure that the necessary information is consistently collected.
B3. Methods to maximize response
Youth Survey
As stated earlier in Section B1, we expect a 90% response rate for the youth survey at baseline. At the time of the baseline interview, most youth will still be in foster care, making them easy to find. However, we recognize that some respondents may be distrustful and otherwise difficult to engage. Gaining their cooperation will require training interviewers on how to foster trust and create a neutral interviewing environment. Keeping youth connected to the same interviewers across rounds will also help maintain cooperation for the follow-up rounds. We have found that experienced interviewers tend to achieve higher levels of cooperation because they carry with them a larger repertoire of behaviors proven to be effective for one or more types of respondents. Several of the sites involve youth who may be proficient only in Spanish. The instrument has been translated into Spanish for use with these youth.
Strategies for gaining respondent cooperation are woven throughout the training session to enhance the interviewers’ abilities to tailor their reactions to the respondent. In addition, we will use advance letters (See Appendix J) as a way to enhance response rates by anticipating and overcoming potential barriers to participation prior to interviewer contact. The letters will be personalized to increase the chances that the respondent will open and read the letter. Before mailing the advance letters for the baseline interview we will establish a toll-free hotline. The advance letter will inform respondents that they can call the number if they have any questions about the study or want to schedule or reschedule an appointment.
The use of incentive payments, as discussed in detail in Section A9, will also maximize response. The foster youth will want to be certain that the confidentiality of their responses to survey questions is protected. Confidentiality concerns are discussed in detail in Section A.
Caseworker Survey
Using a web-based survey will maximize response among caseworkers. As noted in Section B2, the vast majority of child welfare workers have web access and prefer this type of survey over a phone or written one. The expected response rate for the caseworker survey is 90 percent. As discussed earlier, the caseworker survey is administered only in the Kern County evaluation site. All worker respondents at that site have convenient web access.
Program Site Visit Interview and Focus Groups
Each of the five sites included in the evaluation have agreed to participate in the overall evaluation including participation in the on-site visits. We do not foresee any difficulty obtaining the input of the various program staff and other related participants.
B4. Pretest procedures and results
Youth Survey
The questionnaire has been designed almost exclusively by using questions from existing surveys. To the maximum extent possible, these questions have come from federally funded surveys previously approved by OMB. In order to gauge the suitability of these questions for this population, we administered the interview to three foster care youth who were not involved with this study. Given the base upon which the questionnaire is built, a small number of respondents was deemed adequate to determine the average time and approximate range of times expected in the evaluation.
The more critical part of data collection in this evaluation is the sample identification and random assignment process. Prior to beginning the baseline data collection in a given site, NORC will conduct a pilot test to ensure that the data collection procedures are working properly. The pilot test will be conducted once the site becomes ready for the evaluation. The goal of the pilot test will be to test the data collection protocols, including random assignment procedures, case management systems, and liaison between Field Management staff and local program staff. All procedures will be tested from the point of referral up to, but not including, interviewing the youth.
In each of the evaluation sites, we assume that there will be a designated individual in the public welfare agency authorized to screen potential referrals for the program. For the pilot test, this person will be asked to obtain assignments (treatment or control) for approximately four youth. This person will follow procedures for random assignment described in item B1. Upon assignment of a case, NORC’s system will assign a case number for tracking purposes and notify the local NORC Field Manager of the case and its assignment. The system will also establish a record in the central study management system to track all data collection activities on the case and produce a letter to the youth and a letter to the youth's caseworker.
The pilot test shall verify that all systems are working, that case referral to NORC is performed appropriately, and that random assignment is made and executed appropriately. In the pilot test, NORC will test all of these procedures for approximately four youth in each site. The pilot will last approximately one to two weeks, depending on the case flow at the site. The purpose of the pilot will be to test the procedures only; no interviews will be conducted with the test youth. We will ensure that all procedures are working appropriately before beginning the baseline data collection.
Caseworker Survey
Early administration in the two Los Angeles program sites produced a low response rate among caseworkers. The program sites were unable to provide the additional support that would be needed to obtain a higher response rate. The caseworker survey is therefore being administered only in the Kern County site.
B5. Statistical consultation
Statistical consultation for the study is being provided by Fritz Scheuren at NORC. Consultation on the impact analysis is being provided by Stephen Bell at Abt Associates.
As stated earlier, the Urban Institute, together with two subcontractors, is the contractor for the Multi-site Evaluation of Foster Youth Programs. NORC maintains primary responsibility for the youth survey, the Urban Institute is responsible for the implementation of the caseworker survey, and Chapin Hall will assist the Urban Institute in conducting the program site visits. Staff from each of the three organizations will be responsible for the analyses and report writing.
1 It is also possible for programs to show no effects at the end of service but to have delayed effects that appear only later, in a follow-up measure.
2 A $20 hourly rate is equivalent to an annual salary of approximately $40,000. While child welfare worker salary data are limited, we know from “Report from the Child Welfare Workforce Survey” (American Public Human Services Association) that the average maximum annual salary of a child protective services (CPS) worker is $45,000. According to the Child Welfare League of America’s 1999 State Child Welfare Survey, the average maximum salaries for CPS workers is $40,000. We believe that using the average maximum salaries is appropriate given that workers will be responding after work.
3 M.E. Collins, Transition to adulthood for vulnerable youth: A review of research and implications for policy. Social Service Review, 2001; T. McDonald et al, Assessing the long-term effects of foster care: A research synthesis, Child Welfare League of America, 1996.
4 The Foster Youth Transitions to Adulthood Study, Mark Courtney and Irving Piliavin, 2001.
5 The Federal Register announcement was based on an assumption that the evaluation would continue in the four existing program sites and begin in a fifth site (Oakland).
6 As noted previously, it is also possible for programs to show no effects at the end of service but to have delayed effects that appear only later, in a follow-up measure.
7 There is little basis for these designations, but it might be noted that an effect size of .2 is equivalent to an r2 of .01. For an effect size of .5, r2 = .06.
8 There may well be differential attrition in the treatment and control groups, since it may be easier to keep track of treatment group youth.
9 All power values were determined using Russ Lenth’s power website: http://www.stat.uiowa.edu/~rlenth/Power/index.html.
10 If subgroups were represented as covariates in regression equations, the power would be somewhat higher.
11 Random numbers generated in this way (through a random number generator) are more accurately referred to as pseudorandom numbers, since the process used to generate them is not entirely random. However, since numbers produced by a high-quality random number generator are statistically indistinguishable from genuinely random numbers, statisticians generally agree on the scientific soundness of this approach.