Early Head Start Family and Child Experiences Survey (Baby FACES)

OMB: 0970-0354


Descriptive Study of
Early Head Start

(Early Head Start Family and Child Experiences Study; Baby FACES)


Supporting Statement for

Request for OMB Approval

for Program Recruitment

and Data Collection: Section B



September 26, 2008


U.S. Department of Health and Human Services

Administration for Children, Youth and Families

4th Fl. West, 370 L’Enfant Promenade, SW

Washington, DC 20447



Project Officer:

Rachel Chazan Cohen








CONTENTS

B. Collections of Information Employing Statistical Methods

B.1 Respondent Universe and Sampling Methods

B.2 Procedures for the Collection of Information

B.3 Methods to Maximize Response Rates and Deal with Nonresponse

B.4 Test of Procedures or Methods to Be Undertaken

B.5 Individuals Collecting and/or Analyzing Data and Individuals Consulted on Statistical Aspects





APPENDIX D: BABY FACES INSTRUMENTS

APPENDIX E: VARIANCE AND POWER TABLES


B. Collections of Information Employing Statistical Methods

B.1 Respondent Universe and Sampling Methods

The study’s data collection approach, analysis, and policy conclusions all rest on the adequacy of the sample of programs and children. The sample will be designed to be representative of the population being served by the Early Head Start program nationally. To achieve the goal of an efficient, representative national sample of sufficient size to detect developmentally or programmatically meaningful differences over time or by key subgroups, we propose a stratified clustered sample design.


Sampling Programs. Baby FACES will use a stratified clustered sample design. We will select a probability sample of Early Head Start programs using the Head Start Program Information Report (PIR) as the sample frame. As specified in the request for proposal, we will exclude from the sample programs in Alaska, Hawaii, Puerto Rico, and U.S. territories; migrant programs; and American Indian/Alaska Native programs. We also will exclude any programs not directly providing Early Head Start services and any programs under the management of the national interim grantee contractor.


When sampling programs, we will form eight explicit strata, first stratifying the frame by the program’s total enrollment size (four strata) and then by whether the majority of children served are likely to be dual language learners (DLL), based on the children’s primary home language. Within the explicit strata, we plan to implicitly stratify by program service approach (center-based, home-based, or mixed). After sorting by program service approach, we will also implicitly stratify by census region and urbanicity (MSA versus non-MSA). Before selecting the sample, we will use an optimal allocation approach (balancing cost and variance) to determine the number of programs to allocate to each size stratum. Selecting more programs from the larger strata will help ensure that we end up with enough study-eligible children in later stages of selection. We will proportionally allocate the program sample between the DLL and non-DLL substrata within each program size stratum. Within each explicit stratum, we will select a sequential, equal probability sample. The sequential sampling technique, based on a procedure developed by Chromy,1 offers all the advantages of the systematic sampling approach but eliminates the risk of bias associated with that method.
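To make the selection step concrete, the sketch below (Python, for illustration only) selects an equal-probability sample within each explicit stratum after sorting the frame on the implicit stratification variables. The field names are illustrative rather than the PIR layout, and the sketch uses ordinary systematic selection as a stand-in for Chromy’s sequential procedure, which additionally controls selections across interval boundaries.

```python
import random

def select_programs(frame, allocations, seed=2009):
    """Equal-probability selection within explicit strata (illustrative sketch).

    frame       : list of dicts, one per program; field names are assumptions.
    allocations : dict mapping explicit stratum -> number of programs to select.
    """
    rng = random.Random(seed)
    sample = []
    for stratum, n_select in allocations.items():
        # Programs in this explicit stratum (size class x DLL status).
        units = [p for p in frame if p["stratum"] == stratum]
        # Implicit stratification: sort by service approach, then region and
        # urbanicity, so a systematic pass spreads the sample across them.
        units.sort(key=lambda p: (p["service_approach"], p["region"], p["msa"]))
        # Equal-probability systematic selection with a random start
        # (a simplified stand-in for Chromy's sequential method).
        interval = len(units) / n_select
        start = rng.uniform(0, interval)
        sample.extend(units[int(start + k * interval)] for k in range(n_select))
    return sample
```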


We will initially select 180 programs, and then pair up adjacent selected programs within strata. (These paired programs would be similar to one another with respect to the implicit stratification variables.) We will then randomly select one from each pair to be released as part of the main sample of programs. After an initial group of 90 programs is selected, we will ask the Office of Head Start to call the regional ACF offices to confirm that these programs are in good standing. If confirmed, each program will be called and recruited to participate in the study. If the program is not in good standing, or is in good standing but refuses to participate, we will release into the sample the other member of the program’s pair and go through the same process of confirmation and recruitment with that program. The goal is 90 participating programs.
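The pair-and-release logic can be sketched as follows; this is a simplified illustration of the replacement mechanism described above, not operational study code, and the names are illustrative.

```python
import random

def pair_and_release(selected, seed=2009):
    """Pair adjacent selections and randomly release one program per pair.

    `selected` is assumed to be the within-stratum selection order from the
    previous step, so adjacent programs are similar on the implicit
    stratification variables. Returns (released, reserves); a reserve is
    released only if its partner is not in good standing or refuses.
    """
    rng = random.Random(seed)
    released, reserves = [], []
    for i in range(0, len(selected), 2):
        pair = list(selected[i:i + 2])
        rng.shuffle(pair)
        released.append(pair[0])
        reserves.append(pair[1] if len(pair) > 1 else None)
    return released, reserves

# With 180 initial selections, this yields 90 released programs, each backed
# by the other member of its pair as a reserve.
```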


Sampling Children. Because of the rapid development of the infants and toddlers participating in these programs, the measures used in the study are, by necessity, age-specific. Our longitudinal age cohort design calls for selecting, in the spring of 2009, all children who are within a four-month perinatal window (the newborn cohort) or within a four-month window of their first birthday (the age 1 cohort). These children will then be followed in the study until age 3 for the newborn cohort and age 3½ for the age 1 cohort, unless they leave Early Head Start before reaching those ages.


About two weeks before visiting each participating program in the spring of 2009, we will ask each program for a list of all its centers and home visitors. From each center director, we will request a list of all classes (classroom sessions) and current rosters for each class. We will obtain the current roster of children served by each home visitor from either the program director or one of the center directors, as appropriate. These rosters will contain each child’s name and date of birth. We will also obtain a list of all pregnant women currently being served by the Early Head Start program, along with due date or gestational age. From these dates, we will identify the children and pregnant women whose birth dates or due dates qualify them for one of the two study cohorts, using either the date of the roster or the date of the program visit to calculate age or gestational age.
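The cohort screening step amounts to simple date arithmetic on the roster entries. The sketch below is illustrative only; the exact window boundaries and the treatment of due dates versus birth dates are assumptions, not the study’s operational rules.

```python
from datetime import date, timedelta

def cohort_for(birth_or_due_date, reference_date, window_days=122):
    """Assign a roster entry to a study cohort based on a four-month window.

    window_days of roughly 122 approximates four months; the reference date is
    either the roster date or the program visit date, as in the text.
    """
    age_days = (reference_date - birth_or_due_date).days
    if -window_days <= age_days <= window_days:
        return "perinatal"                          # near birth, or not yet born
    first_birthday = birth_or_due_date + timedelta(days=365)   # approximate
    if abs((reference_date - first_birthday).days) <= window_days:
        return "age 1"
    return None                                      # not eligible for either cohort

# Example: a child born June 1, 2008 is within four months of his or her
# first birthday at a program visit on March 15, 2009.
print(cohort_for(date(2008, 6, 1), date(2009, 3, 15)))   # -> "age 1"
```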


We will also ask the center directors to identify any siblings (twins and otherwise) and soon-to-be siblings2 among the selected children. To minimize burden on families, if more than one child from a family is randomly selected, we propose to probabilistically select one child from that family to participate in the study.


Table B.1 shows the expected sample sizes in the spring of 2009.

TABLE B.1

EXPECTED SAMPLE SIZES IN SPRING 2009

                                                  Spring 2009—90 Programs
                                            -----------------------------------------------
Cohort                  Data Collection     Within Age Range     Eligible/with Consent/
                        Respondent          (Selected)           Responding (90 Percent)

Perinatal               Parent              1,262                1,136
                        Child a             --                   --

One-Year                Parent              946                  851
                        Child               946                  851

Both Cohorts Combined   Parent              2,208                1,987
                        Child               946                  851


Table B.2 shows the expected sample sizes for each wave of data collection. We expect 15 percent of the children (and their parents) to leave the Early Head Start program each year. Included in this projection are pregnant women whose pregnancies do not result in live births and those who give their newborns up for adoption. We estimate that, despite our best locating efforts, we will be unable to contact about 10 percent of the sample still in the program at each one-year interval and 5 percent between age 3 and 3½.



TABLE B.2

EXPECTED SAMPLE SIZES THROUGHOUT DATA COLLECTION

                                                          Responding
                                     --------------------------------------------------------
Cohort        Data Collection        Spring     Spring     Spring     Age 3½         Spring
              Respondent             2009       2010       2011       (Fall 2011)    2012

Perinatal     Parent                 1,136      1,023      782        --             598
              Child                  --         869        665        --             509

Age 1         Parent                 851        766        586        473            --
              Child                  851        651        498        --             --

All Cohorts   Parent                 1,987      1,789      1,368      473            598
              Child                  851        1,520      1,163      --             509


Note: When combining all age cohorts, and after accounting for the impact of the sample design on the variance, the effective sample size at baseline (for the 1,987 parent interviews) is about 891.
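As a brief illustration of how the note’s effective sample size arises from the clustered design, the snippet below applies the usual design-effect approximation. The intraclass correlation shown is purely illustrative, chosen so the result lands near the note’s figure; it is not a value reported by the study.

```python
# Effective sample size under a clustered design: n_eff = n / deff,
# with deff approximated as 1 + (average cluster size - 1) * icc.
n_programs = 90
n_parents = 1987
avg_cluster = n_parents / n_programs          # roughly 22 parent interviews per program
icc = 0.059                                   # illustrative assumption only
deff = 1 + (avg_cluster - 1) * icc            # design effect of about 2.2
n_effective = n_parents / deff                # close to the note's figure of about 891
print(round(deff, 2), round(n_effective))
```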


B.2 Procedures for the Collection of Information

Statistical Methodology for Stratification and Sample Selection

This issue is covered in section B.1.

Estimation Procedures

We will produce both cross-sectional and longitudinal weights for analyses of these data. These weights will account for the probability sampling of programs and for the subsampling of siblings in families with more than one child selected. We will also weight the sampled children up to represent all children served by the Early Head Start program, because our sample includes only those whose birth dates fall within the specified eligibility window around our visit. At both the program level and the child and family level, we will account for ineligibility and adjust for nonresponse among the eligible, most likely using a weighting class approach. Because this is a stratified clustered sample design, specialized techniques are needed to correctly calculate the variance of estimates. One such technique is the Taylor series linearization approach, which is available in specialized statistical packages such as SUDAAN and as specialized components within general statistical packages such as SAS and Stata.
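As an illustration of the linearization approach (not the study’s production code, which will rely on packages such as SUDAAN, SAS, or Stata), the sketch below computes the standard error of a weighted mean under a stratified, clustered design using the standard with-replacement PSU approximation; all variable names are illustrative.

```python
import numpy as np

def linearized_variance_of_mean(y, w, stratum, psu):
    """Taylor-linearization standard error of a weighted mean (ratio estimator)
    under a stratified, clustered design; with-replacement PSU approximation.

    y, w, stratum, psu are one-dimensional arrays of equal length,
    one element per respondent.
    """
    y, w = np.asarray(y, float), np.asarray(w, float)
    stratum, psu = np.asarray(stratum), np.asarray(psu)
    w_total = w.sum()
    y_bar = np.sum(w * y) / w_total                  # weighted mean
    z = w * (y - y_bar) / w_total                    # linearized scores
    variance = 0.0
    for h in np.unique(stratum):
        in_h = stratum == h
        psus = np.unique(psu[in_h])
        n_h = len(psus)
        if n_h < 2:
            continue                                 # single-PSU stratum: no contribution here
        totals = np.array([z[in_h & (psu == c)].sum() for c in psus])
        variance += n_h / (n_h - 1) * np.sum((totals - totals.mean()) ** 2)
    return y_bar, np.sqrt(variance)                  # estimate and its standard error
```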

Degree of Accuracy

Appendix E contains a pair of tables (Tables E.1 and E.2) showing 95 percent confidence interval half-widths for both child assessments and quality measures. For the quality measures only, we assumed that about one-half of the children would be receiving each type of service and that about 80 percent of the programs (72 of 90) would be providing each type of service. The appendix also contains a set of six tables showing minimum detectable differences and effect sizes for (1) comparing scores between two program-defined subgroups at a point in time,3 (2) comparing scores between two child-defined subgroups at a point in time,4 and (3) comparing scores over time (between ages 1 and 3).5 The sample sizes described in section B.1 should be large enough to detect developmentally meaningful differences, given various assumptions about the sample design and its impact on the variance of estimates.
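For readers who want to see the mechanics behind such power calculations, the sketch below applies a textbook approximation for the minimum detectable effect size of a two-group comparison, inflated for the clustered design. The design effect, alpha, and power values are assumptions for illustration; the Appendix E tables are the authoritative figures.

```python
from math import sqrt
from statistics import NormalDist

def mdes(n1, n2, deff=2.2, alpha=0.05, power=0.80):
    """Minimum detectable effect size (in standard deviation units) for a
    two-group comparison, using a standard normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)   # ~2.80
    return z * sqrt(deff) * sqrt(1 / n1 + 1 / n2)

# Example: two equal subgroups of the roughly 1,987 baseline parent interviews.
print(round(mdes(994, 993), 3))
```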


Our plan for data collection is also intended to yield accurate data on the children in this study. All studies of young children face a concern about bias in parent reports. When possible, we are collecting ratings of children’s behaviors from multiple reporters, including parent report, teacher/home visitor report, and direct child assessment. In our reports, we will clearly attribute the source of each measure and report the inter-correlations that exist between reporters.


Each caregiver will complete reports for an average of 3.2 children. We do not think that teachers will have trouble differentiating among the few children they are being asked about, particularly because they already report daily to parents and ongoing assessment is a feature of Early Head Start programs. We anticipate that children will have different caregivers over time, and one of the aspects of service we are interested in is continuity of caregivers over the child’s time in the Early Head Start program. We will analyze the data for caregiver effects.

Data Collection Procedures

As noted previously, we propose to collect information from several sources (Early Head Start parents, children, home visitors, primary caregivers, and program directors) across four data collection waves. Data collection will occur annually over four years, with the first wave taking place in spring 2009. An additional data collection wave is planned for fall 2011, when the age 1 cohort children are three and one-half years old and transitioning out of Early Head Start. Table B.3 shows data collection waves, time periods, and methods.


TABLE B.3

Summary of Data Collected by Data Source, Cohort, and Wave

Cohort      Data Source                                  Spring 2009   Spring 2010   Spring 2011   Spring 2012

Perinatal   Child Age                                    0             1             2             3
            Parent Interview                             CATI          CATI          CATI/CAPI     CATI/CAPI
            Direct Child Assessment                      --            --            CADE          CADE/CAPI
            Primary Caregiver/Home Visitor Interview     CAPI          CAPI          CAPI          CAPI
            Classroom/Home Visit Observation**           --            CADE          CADE          CADE
            Parent-Child Interaction                     --            --            Video         Video
            Primary Caregiver/Home Visitor Ratings       --            PAPI          PAPI          PAPI

Age 1       Child Age                                    1             2             3             3.5*
            Parent Interview                             CATI          CATI/CAPI     CATI/CAPI     CATI*
            Direct Child Assessment                      --            CADE          CADE/CAPI     --
            Primary Caregiver/Home Visitor Interview     CAPI          CAPI          CAPI          --
            Classroom/Home Visit Observation**           CADE          CADE          CADE          --
            Parent-Child Interaction                     --            Video         Video         --
            Primary Caregiver/Home Visitor Ratings       PAPI          PAPI          PAPI          --

Program     Program Director Interview                   Semi-structured telephone interview/SAQ at each wave
            Service Tracking Form                        Web/Hard copy at each wave


*Abbreviated interview at 42 months, conducted in fall 2011, to learn about transitions out of Early Head Start.

**Observation visits do not impose burden on study participants but are listed for completeness.


Among the data sources, the parent interviews and service tracking forms will require the most respondent time over the course of the study. The next two sections explain why these instruments are necessary for the purposes of this study and how we will limit respondent burden. The sections that follow describe the data collection procedures.

Parent Interview

On average, the parent interview is approximately one hour in length at each time point. Annual follow-up interviews are needed to measure child growth and change over time, a critical piece of our research questions. To reduce respondent burden, we are collecting data at only one time point for those family characteristics that the literature has shown to be stable over time. However, because we are also interested in measuring change over time, we must ask each year about family characteristics that are likely to change. Interviews are conducted using computer-assisted technology; thus, answers from previous interviews can be fed into the updated interview, and respondents will only be asked whether there has been a change in the past year. For example, the household roster from the previous interview will be fed forward to the follow-up interview, and respondents will be asked to confirm each household member rather than having to enumerate every member of the household each year. The length of the annual interview is consistent with other longitudinal child development studies, such as the Early Childhood Longitudinal Study (birth and kindergarten cohorts), FACES, and Building Strong Families.

Family Services Tracking

Understanding services is a key focus of the study and an area that was not as well documented in previous Early Head Start research. It is critical for program decision making at the local and national levels. The Family Services Tracking system will enable us to link change over time in child and family functioning to specific services received. For example, it will allow documentation of patterns of service provision and use over time by program, family, child, and other sources of variation, such as program approach, age of child, family risk level, and season. It will also allow documentation of service use as families begin to disengage from the program. Capturing these patterns will help inform national training and technical assistance efforts by supporting the development and implementation of strategies for targeting families for additional services if they exhibit service use patterns predictive of program exit.


Documentation of program dosage at the participant level (at the family and child level in Baby FACES) will support analyses of:

  1. Program implementation—for example, whether and for what proportion of families home visits occur weekly, as required by Early Head Start performance standards

  2. Patterns of service use for important family- and program-level subgroups—for example, families at the greatest demographic and psychological risk, and families in different program models

  3. Changes over time in the intensity of services received—for example, whether Early Head Start center-based days in care decrease as children get older and whether seasonal factors affect service delivery

  4. Matching of services to identified family needs and goals—for example, whether families with identified child care needs are receiving center-based Early Head Start services

  5. Possible non-experimental linkages between service dosage and child and family outcomes—for example, whether families that receive more child development services achieve better outcomes over time

Next, we describe why ongoing tracking of services is not only necessary to answer the questions we have posed in Baby FACES, but is consistent with previous studies and recommendations from experts in our Technical Work Group.


Background and Rationale. Across a range of interventions, researchers, program administrators, and policy makers find it increasingly important to document fidelity to an intervention program model by using a service tracking tool like the one proposed for Baby FACES. Over the last 20 years, early childhood intervention researchers have documented the critical role that the service dosage received by families and children plays in predicting both short- and long-term program outcomes. Examples include a number of studies of the Infant Health and Development Program (IHDP) demonstrating that higher program exposure (number of home visits over the first three years; number of child development center days in the second and third years; number of activities engaged in during the home visits) was associated with child intelligence quotient scores and with child and maternal positive behavior during a mother-child interaction task (Klebanov and Brooks-Gunn 2008; Ramey et al. 1992; Sparling et al. 1991). In a meta-analysis of studies of 60 home visiting programs for families with young children, the number of home visits and the number of hours of home visits were significantly associated with children’s cognitive outcomes (Sweet and Applebaum 2004).


As we designed the Baby FACES service tracking approach, we reviewed the service tracking systems put in place in a number of other recent studies conducted in Head Start, Early Head Start, and other types of programs. MPR has successfully implemented service tracking management information systems in the Early Head Start Enhanced Home Visiting Evaluation (OMB Number 0970-0314) and in the Head Start Oral Health Initiative Evaluation (OMB Number 0970-0277) (Paulsell et al. 2006; Del Grosso et al. 2008). Both projects involved grantees entering service data into a data system designed specifically for the evaluation, with staff asked to complete data entry in real time. Relevant examples from other types of studies and programs include service tracking systems developed for a welfare-to-work project, a marriage and relationship education program, and a youth services program for individuals with disabilities. MPR is also using a web-based management information system as a data source on program participation as part of our Gates Foundation-funded Early Learning Initiative evaluation.


Given that many of the Early Head Start programs selected for the study will already have some way of tracking services, we are not proposing to build a single, comprehensive MIS for all 90 programs, as in the examples above. However, given our understanding of the variability in the existing systems and in the way data elements are defined, there is a need for a common set of measures. To reduce complexity and burden as much as possible, we have designed the service tracking tool to be simple and straightforward. We also expect that some sampled programs may not actively track these data electronically and may find the weekly service tracking to be a useful management tool.


Expert Recommendations. The Baby FACES Technical Work Group (TWG), which included a former program director, a representative from the Early Head Start national resource center, and researchers familiar with earlier Early Head Start studies, strongly recommended that the study collect family- and child-level service data directly from program staff, citing major concerns about using parent-reported data to quantify service intensity, as was done for the Early Head Start Research and Evaluation Project (ACF 2002). Drawbacks to parent-reported data include issues of recall (parent interviews are scheduled to occur only once per year) and nonresponse (if a family does not participate in one of the annual data collection interviews, we will have no information on service use and dosage for that family). The TWG was also concerned about a service tracking approach that occurred less often than weekly because, in their experience, it would be challenging to ensure the quality of the data at longer intervals. In addition, a weekly reporting schedule is likely to be very close to what programs are already doing to document program enrollment and to meet child care licensing standards (related to documenting group size and adult-child ratios).


In addition to the studies described above, we have also identified examples of the service data collection frequency that program developers require to document fidelity when they allow replication of their models. One of the most widely used evidence-based home visiting programs for families with children from the prenatal period to age 2, the Nurse-Family Partnership, requires that nurses enter data on their completed home visits within 48 hours of conducting each visit (personal communication with K. Teter, September 11, 2008). The program office reviews data for completeness every month and works with replication sites to meet the data entry and data quality requirements for participation.


Input from Program Directors. As we developed the tracking tool, we reviewed four family files provided by one Early Head Start program to learn what type of information is regularly kept about services and referrals. These data informed the design of the tracking tool. To further explore our service tracking plans and get feedback from Early Head Start program directors, we conducted a focus group with six program directors at the April meeting of the National Head Start Association (five of the six directors ran programs that provided both home-based and center-based child development services). After reviewing the types of data that would be included in the family services snapshot, the directors reported that they collect similar data and that most of them collect it weekly. Most of the program directors reported that it would not be too burdensome to provide this information in the format needed for the study on a weekly basis. They indicated a preference for a regular schedule of data collection, thinking that it would become part of the routine for care providers.


Alternatives Considered. We considered two main alternatives to the planned service tracking approach: (1) sampling and (2) using existing program data systems. We considered whether it would be possible to sample program participation data—for instance, collecting data only during certain weeks—rather than collecting it on all families and children in the study throughout their enrollment. Because we do not yet know the pattern of service delivery in Early Head Start sites, continuous data collection is preferable from an analytical standpoint. Depending on how the weeks are sampled, there may be a risk of a seasonality effect, particularly if the selected weeks are not a true random sample—for example, if they are selected via a systematic sample. Sampling certain weeks for family service tracking also means that a sampling plan must be designed and implemented in the field. Presumably the weeks would be sampled with the same probability across families within programs, so there would be no need for weighting and no additional design effects would be introduced if the analysis is at the child level. However, the measurement of service use for each family will be less reliable if it is based on a sample of weeks rather than a census of weeks. If the analysis of services is done at the family-week level rather than the family level, there is a loss of statistical power due to the reduced sample size of family-weeks.


Sampling would also pose challenges to addressing key research questions about how services are tailored to family needs and how services are associated with child and family well-being. It would also be challenging to train program staff to track for some period of time, stop tracking, and then resume tracking. We believe it will be easier to support staff in getting into the routine of collecting and entering these data, and that the routine nature of using the tracking tool will help guard against overreporting.


In our experience, it is much more costly to extract the needed information from existing program data systems than to develop a common tracking tool. In addition, if the data are collected and tracked in very different ways, we cannot be sure the data are comparable across programs. The planned training of programs on the common system will help ensure the consistency and reliability of the data collected.


Data Verification. Although programs may have an incentive to overreport completed home visits or children’s attendance, evidence of fraud is rare in early childhood programs, and the consequences are often severe (personal communication with K. Teter, September 11, 2008). To check on the possibility of fraud, every six months we will contact families to confirm the reported visits and child care attendance for five percent of entries across the program staff members using the tracking tool. If we find discrepancies between family and program reports, we will work with the programs to determine the source of the issue and identify a process for addressing it through additional staff training.


Program and Participant Recruitment. After sampling, programs will be recruited into the Baby FACES study by MPR coordinators. The coordinators will provide programs with copies of a full-color brochure introducing the study, a brief study description, and parental consent forms (Appendix C). The MPR coordinator will also work with the program director to identify an on-site coordinator who will assist with recruiting families into the study, scheduling the annual on-site data collection visit, and identifying the mode for completing the family services tracking forms that is most convenient for the program.


Once programs agree to participate, the MPR coordinator will send the program director a short self-administered questionnaire (SAQ) that may require the director to draw information from enrollment or staffing records. After the SAQ is completed, the MPR coordinator will schedule a semi-structured telephone interview for the remainder of the program director interview and will go over the SAQ with the director to answer any items that may be incomplete. Each annual program director interview will be conducted in the same way.


The MPR Baby FACES coordinator will also be tasked with ensuring the completion of the family services tracking system. The family tracking forms are to be completed weekly by the home visitors and primary caregivers of sampled children. Forms may be completed as hard copy or directly online, depending on the program’s preference. MPR coordinators will work with the on-site coordinator to find the mode most comfortable for primary caregivers and home visitors to complete these forms, with the on-site coordinator responsible for making sure forms are completed and data entered.


The MPR Baby FACES coordinator and the on-site coordinator will work together to recruit families into the study, using biweekly phone calls. The MPR coordinators and the on-site coordinator will distribute and collect consent forms for participation in the study. Consent forms will be provided in both Spanish and English and will be accompanied by a full-color brochure and study description (Appendix C).


Annual On-Site Data Collection Visit. The Baby FACES coordinators and the on-site coordinators will work together to schedule a week-long visit by the MPR field interviewing team to conduct the on-site data collection activities. The 12-week field period will begin in late February and last through mid-May. An average of 7.5 programs will be visited each week throughout this period, each by a team of 3 to 5 trained field interviewers. Each field interviewing team will include a team leader, an assistant team leader, and 1 to 2 field interviewers. Four weeks prior to the data collection visits, we will send advance letters to each eligible, consenting family informing them of the week of our field visit and stating that MPR will be contacting them to conduct the telephone parent interview within the next 3 weeks (see Appendix C). Parents of children who have turned 2 or 3 will be asked to make an appointment for the direct child assessment during the on-site field data collection week. Three weeks prior to the on-site data collection week, trained telephone interviewers will begin conducting computer-assisted telephone interviews with parents. For children who have reached ages 2 and 3, parent interviews that are not completed prior to the data collection week will be scheduled during that week by the team leader with the assistance of the on-site coordinator.


During the on-site data collection visit, field interviewers will visit each site to conduct the following activities:


  • Home visitor/primary caregiver interviews

  • Home visit/classroom observations

  • Parent interviews (in-person follow up for children ages 2 and 3 only)

  • Direct child assessments (for children ages 2 and 3 only)

Quality Control

We have instituted a variety of methods to ensure the quality of the data collected. All field and telephone interview staff will be trained to 85 percent reliability, measured against an MPR gold standard. Both field and telephone interviewers will need to be certified by supervisory staff prior to leaving training in order to conduct parent interviews. Bilingual field interviewers and telephone interviewers will be required to pass a Spanish language test before being certified to conduct parent interviews or child assessments in Spanish. During the field period, telephone interviews will be monitored from the Survey Operations Center.


Field interviewers will attend an annual training program on the instruments they will be using. Each training program is expected to last approximately 5 to 7 days. During training, field interviewers will be expected to establish reliability with a gold standard for home visit observations, classroom observations, and direct child assessments. In-field reliability testing against a gold standard will be mandatory before leaving training to conduct observations, and gold standard certification with a 2- or 3-year-old child will be mandatory before leaving training to conduct child assessments. Bilingual field interviewers will attend separate training sessions on administering the Spanish language direct child assessments and will need to pass certification standards separate from, but similar to, those in English.


Between the fourth and eighth week of the field period, a quality assurance visit will be conducted with each data collection team. These visits will be conducted by a gold standard quality control observer. Observers will recertify each interviewer on parent interviewing and child assessment skills and conduct inter-rater reliability tests with observers following classroom and home visit observations. Within each data collection team, team leaders will be responsible for conducting periodic observations of parent interviews and child assessments; two such observations will be conducted on each member of the team during the field period. The two classroom and home visit observers on each team will also be required to conduct two simultaneous observations during the field period and calculate inter-rater reliability.

B.3 Methods to Maximize Response Rates and Deal with Nonresponse

Early Head Start programs will be motivated to participate because they are invested in the success of the Early Head Start program. Eighty-nine percent of programs completed the Survey of Early Head Start Programs (SEHSP), and FACES 2006 (the Head Start Family and Child Experiences Survey, 2006 cohort) attained response rates of more than 90 percent during the first three rounds of data collection with Head Start program children and their families. In Baby FACES, MPR will continue the procedures that worked for us on these other projects, eliminating the need for a pretest. ACF will send a letter signed by Dr. Rachel Chazan Cohen, the federal Project Officer and a member of the senior staff at the Office of Head Start, to selected programs describing the importance of Baby FACES, outlining the study goals, and encouraging their participation. Section A.9 of this submission discusses compensation payments to programs, incentive payments to parents for completing an interview, and gifts to children for participating in the assessments. All of these, which we have used in other similar studies, will help ensure a high level of participation. Attaining the high response rates we expect makes nonresponse bias less likely, which in turn makes our conclusions more generalizable to the Early Head Start population.


Families that choose to leave the Early Head Start program prior to age 3½ will complete a short exit interview by telephone to collect information on their reasons for leaving Early Head Start. Because these families may be difficult to track down for a telephone interview, we will use specialized locating resources, ranging from calling contacts the respondent listed in the interview to directory assistance and database searches such as LexisNexis and Accurint.


Our response rates will be calculated according to industry standards, such as those laid out in the American Association of Public Opinion Research (AAPOR) standard definitions. They will be calculated separately for the program and child and family levels, as well as cumulatively across the two stages. They will be calculated separately for child assessments and parent interviews, and possibly for the combination of the two. The numerator of each response rate will be eligible completes and the denominator will be eligible sample members. (We assume that eligibility status will be known for all sample members.) Our expected response rates are well above the 75 percent threshold discussed in the OMB guidelines.
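For illustration, the sketch below shows how response rates of the type described here might be computed, including the cumulative rate across the program and child/family stages. The program-level counts are hypothetical; the parent interview counts follow the expectations in Table B.1.

```python
def response_rate(eligible_completes, eligible_sample):
    """Response rate as eligible completes divided by eligible sample members
    (assumes eligibility status is known for all sample members, as in the text)."""
    return eligible_completes / eligible_sample

# Hypothetical program-level counts, for illustration only.
program_rr = response_rate(90, 100)        # participating programs / eligible programs
parent_rr = response_rate(1987, 2208)      # expected completes / selected children (Table B.1)
cumulative_rr = program_rr * parent_rr     # cumulative rate across the two stages
print(f"{program_rr:.0%}, {parent_rr:.0%}, cumulative {cumulative_rr:.0%}")
```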

B.4 Test of Procedures or Methods to be Undertaken

Under the previous submission, ACF received clearance to conduct a pilot study at two sites focused on testing the administration of two of the test batteries planned for the main study: the Preschool Language Scale, 4th Edition (PLS-4), and the Bayley Scales of Infant and Toddler Development, Third Edition, Screening Test. The other batteries selected for the direct child assessments, the Peabody Picture Vocabulary Test (PPVT IV; Dunn et al. 2006) and the Test de Vocabulario en Imagenes Peabody (Dunn et al. 1986), have been used successfully with similar populations in other studies, such as FACES.


Most of the scales and items in the proposed parent interview, the director interview, and the home visitor/primary caregiver interview have been successfully administered to the Early Head Start population in the past. As a further test of all the items together, their flow, and cohesiveness, we plan to pretest each of the interviews with fewer than 10 respondents during the larger pilot test.

B.5 Individuals Collecting and/or Analyzing Data and Individuals Consulted on Statistical Aspects

MPR and its subcontractors (Twin Peaks Partners, LLC; Branch Associates, Inc.; Shugoll Research; ZERO TO THREE; Brenda Jones Harden; and Alphabet Soup Bookstore) are conducting this project under contract number HHSP23320072914YC. The plans for statistical analyses for this study were developed by MPR. The team is led by Rachel Chazan Cohen, project officer; Cheri Vogel, project director; Kim Boller, principal investigator; Cassandra Meagher, survey director; and Linda Mendenko, deputy survey director.


Additional staff consulted on statistical issues at Mathematica Policy Research, Inc. include Daniel Kasprzyk, Director, Statistical Services; John Hall, senior statistician; Barbara Carlson, senior statistician; and John Deke, senior researcher.

1 The procedure makes independent selections within each of the sampling intervals while controlling the selection opportunities for units crossing interval boundaries. Chromy, J.R. “Sequential Sample Selection Methods.” Proceedings of the Survey Research Methods Section of the American Statistical Association. Alexandria, VA: American Statistical Association, 1979.

2 This situation would arise if a child is in the Early Head Start program and his or her pregnant mother is also receiving Early Head Start services.

3 Table E.3 for child assessments and Table E.6 for quality measures.

4 Table E.4 for child assessments and Table E.7 for quality measures.

5 Table E.5 for child assessments and Table E.8 for quality measures.


