HHS/ACF/OPRE
HEAD START CLASSROOM-BASED APPROACHES AND RESOURCES FOR EMOTION AND SOCIAL SKILL PROMOTION (CARES) PROJECT:
2ND PACKAGE: IMPACT AND IMPLEMENTATION STUDIES
SUPPORTING STATEMENT B
FOR OMB CLEARANCE
The evaluation literature often discusses the appropriateness of the sample size for a study by focusing on the smallest program impacts that are likely to be detected with a specified level of confidence, given a sample of a particular size and composition. These are usually called the program’s “minimum detectable effects” (MDEs). Analysis of MDEs is also referred to as “power analysis,” because it estimates the study's statistical power to detect the effects it was designed to find.
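For reference, one common general formulation of the minimum detectable effect size (MDES) for a two-level design that randomizes clusters (such as centers) and estimates impacts with covariate adjustment is shown below; the notation is generic and is not taken from the project's own materials.

\[
\mathrm{MDES} \;=\; M_{J-k}\sqrt{\frac{\rho\left(1-R_{2}^{2}\right)}{P(1-P)\,J} \;+\; \frac{(1-\rho)\left(1-R_{1}^{2}\right)}{P(1-P)\,J\,\bar{n}}}
\]

Here $J$ is the number of randomized clusters, $\bar{n}$ is the average number of individuals per cluster, $P$ is the proportion of clusters assigned to treatment, $\rho$ is the intraclass correlation, $R_{2}^{2}$ and $R_{1}^{2}$ are the proportions of cluster-level and individual-level variance explained by covariates (including a pretest), and $M_{J-k}$ is a multiplier that depends on the desired power and significance level (approximately 2.8 for 80 percent power with a two-tailed test at the 5 percent level and ample degrees of freedom).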
Empirical information used. It was determined that a sample size of 17 grantees (with centers grouped into 26 matched pairs), an average of 2 centers per matched pair (one control and one treatment center) per grantee, an average of 2.2 classrooms per center, and an average of 8 four-year-old children per classroom would be sufficient to achieve the desired targets for precision (i.e., MDEs between 0.15 and 0.30 for student outcomes and between 0.30 and 0.40 for classroom or teacher outcomes) for the core study. Appendix B presents the precise MDEs for a number of outcomes given the matched-pair blocked sample of the core study under the three-treatment design.
These estimates were computed using data from two national surveys of Head Start grantees/delegate agencies (i.e., the Head Start Impact Study and FACES) and three group-randomized studies that focused on social and emotional outcomes for young children (two of which took place in Head Start centers). Results of these analyses were organized in an Excel spreadsheet program (the “EFFECT-O-SIZER”) designed to facilitate real-time analyses of MDEs for alternative experimental designs across the full range of outcome measures in the relevant literature. Separate spreadsheet programs were developed for student outcomes and for classroom/teacher outcomes. All findings are based on the assumption that impacts will be estimated with regression adjustments for student or teacher baseline characteristics and for a baseline measure of the outcome (a pretest).
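As a rough illustration of the kind of calculation such a spreadsheet performs, the sketch below computes an approximate MDE for a child-level outcome under the formulation shown above. The intraclass correlation and the variance explained by covariates are placeholder values chosen only for illustration; they are not the parameters used in the project's actual power analyses.

    import math
    from scipy.stats import t

    def mdes(n_clusters, n_per_cluster, p_treat, icc, r2_cluster, r2_child,
             alpha=0.05, power=0.80, n_covariates=2):
        """Approximate minimum detectable effect size (in standard deviation
        units) for a two-level design that randomizes clusters (centers) and
        adjusts impact estimates for baseline covariates."""
        df = n_clusters - n_covariates - 2          # rough degrees of freedom
        multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
        variance = (icc * (1 - r2_cluster)
                    / (p_treat * (1 - p_treat) * n_clusters)
                    + (1 - icc) * (1 - r2_child)
                    / (p_treat * (1 - p_treat) * n_clusters * n_per_cluster))
        return multiplier * math.sqrt(variance)

    # Placeholder inputs: 52 centers (26 matched pairs of 2), about 2.2
    # classrooms of 8 four-year-olds per center, half of the centers treated,
    # and assumed (illustrative) values for the ICC and covariate R-squared.
    print(round(mdes(n_clusters=52, n_per_cluster=2.2 * 8, p_treat=0.5,
                     icc=0.15, r2_cluster=0.5, r2_child=0.5), 2))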
As noted earlier, the three-treatment experimental design (with three treatment groups and a control group) will randomize Head Start centers within grantees/delegate agencies (blocks) and allocate the same number of centers to each treatment group and the control group (for a balanced design); a minimal sketch of this within-block assignment follows the research questions below. The primary research questions addressed by the design will be:
What are the net impacts of each intervention (treatment) on student-level outcomes relative to current Head Start practice for four-year-old students (who are enrolled either in classrooms with only four-year-olds or in mixed classrooms along with three-year-olds)?
What are the net impacts of each intervention on classroom- or teacher-level outcomes relative to current Head Start practice?
Secondary research questions are:
Overall, what are the average net impacts of the three interventions (combined) on student-level outcomes and classroom- or teacher-level outcomes relative to current Head Start practice?
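The following is a minimal, purely illustrative sketch of balanced random assignment of centers to the three treatment groups and the control group within grantee/delegate agency blocks, as referenced above. The grantee and center names are hypothetical, and the project's actual assignment procedure (including how blocks whose center counts are not a multiple of four are handled) is not specified here.

    import random

    ARMS = ["treatment_1", "treatment_2", "treatment_3", "control"]

    def assign_centers(centers_by_grantee, seed=1):
        """Randomly assign centers to the three treatment arms and the control
        group separately within each grantee/delegate agency (block)."""
        rng = random.Random(seed)
        assignments = {}
        for grantee, centers in sorted(centers_by_grantee.items()):
            shuffled = list(centers)
            rng.shuffle(shuffled)
            # Deal the shuffled centers across the four arms so each arm
            # receives an (approximately) equal number within the block.
            for i, center in enumerate(shuffled):
                assignments[center] = ARMS[i % len(ARMS)]
        return assignments

    # Hypothetical example: two grantees with four centers each.
    example = {
        "Grantee A": ["A1", "A2", "A3", "A4"],
        "Grantee B": ["B1", "B2", "B3", "B4"],
    }
    print(assign_centers(example))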
The study’s sample requirements are based on the desired precision of the impact estimates that address its primary research questions; thus, they are driven by the precision needed to estimate the net impact of each treatment for four-year-olds.
Overall Sample Requirements
The survey sample size for the kindergarten follow-up is based on the selected preschool sample of children that met the above requirements for power and precision. Based on this selected sample, the sample size for kindergarten follow-up teacher reports on individual children is 2,885 four-year-olds, and the sample size for the parent survey is 2,885 parents of four-year-old children.
B2. Procedures for Collection of Information
The teacher report data will be collected through paper-and-pencil surveys, although in-person outreach and interviewing strategies will be used to maximize response rates. Parent surveys will be conducted over the phone. MDRC will work with SRM to develop strategies that ensure an 80 percent response rate. All completed surveys will be reviewed to ensure that all applicable fields are correctly completed and that all relevant interviewer notes are included in the data set. Any open-ended and “other, please specify” items will be coded based on codes developed at SRM and approved by MDRC. Preliminary data files will be created and shared with MDRC, along with documentation, on an agreed-upon schedule.
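As a purely illustrative sketch of this kind of completeness review (the actual review procedures, field names, and coding schemes will be determined by SRM and MDRC and are not specified here), a check might flag records with missing required fields and collect “other, please specify” text for coding:

    # Hypothetical required fields; the real instruments define their own.
    REQUIRED_FIELDS = ["child_id", "teacher_id", "interview_date"]

    def review_record(record):
        """Return a list of problems found in one completed survey record."""
        problems = [f"missing {field}" for field in REQUIRED_FIELDS
                    if not record.get(field)]
        # Collect free-text "other, please specify" responses so they can be
        # assigned numeric codes during data processing.
        other_text = record.get("other_specify")
        if other_text:
            problems.append(f"needs coding: {other_text!r}")
        return problems

    # Hypothetical record with one missing field and an open-ended response.
    print(review_record({"child_id": "C001", "teacher_id": None,
                         "other_specify": "grandparent completed form"}))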
Interviewer Selection. MDRC will work with SRM to ensure that the interviewers administering the parent surveys are professional interviewers, many of whom have worked on social research projects.
Interviewer Training. MDRC will work with SRM to ensure sufficient interviewer training. All interviewers will sign a confidentiality pledge during training. They will be instructed on the importance of maintaining confidentiality and told that breaches of confidentiality will lead to dismissal. MDRC will also work with SRM to establish procedures for monitoring parent surveys throughout the course of data collection.
Conducting Surveys. In all cases, the interviewers will explain the purpose of the survey, and inform respondents that they will receive a small incentive for completing the survey. Each interviewer will be prepared to answer any questions about the study that sample members might have.
Interviewer Supervision. Interviewing field staff will be supervised directly by staff from SRM.
The goal will be to achieve an 80 percent response rate in each site. In preschool, response rates for teachers completing teacher reports ranged from 93 to 96 percent, depending on the time point at which they were administered; response rates for parent surveys were 85 percent at baseline. Procedures for obtaining the maximum degree of cooperation include:
Conveying the purposes of the survey to respondents so that they thoroughly understand them and perceive that cooperating is worthwhile;
Providing a toll-free number for respondents to use to ask questions about the survey and the survey firm’s staff;
Training site staff to be encouraging and supportive, and to provide assistance to respondents as needed;
Hiring interviewers who have necessary skills for encouraging respondent cooperation;
Training interviewers to maintain one-on-one personal rapport with respondents; and
Offering appropriate payments to respondents.
Interviewers will also be trained to distinguish "soft" refusals from "hard" ones. Soft refusals often occur when the sample member has been reached at an inopportune time. In these cases, it is important to back off gracefully and to establish a convenient time to call or come back rather than to persist at the moment. Hard refusals do occur and must also be accepted gracefully by the interviewer.
All surveys were pretested during the period of the original OMB clearance.
In addition to Howard Bloom of MDRC (a lead member of the CARES project team), we consulted the following individuals outside of MDRC on the statistical aspects of the design and sampling: Carolyn Hill (Georgetown University); Stephanie Jones (Harvard University); Robert Nix (Pennsylvania State University); Mark Lipsey (Vanderbilt University); Stephen Raudenbush (University of Chicago); Tom Cook (Northwestern University); Jeff Smith (University of Michigan); Hendricks Brown (University of South Florida); and Larry Hedges (Northwestern University).