Contract No.: 233-02-0086/HHSP233200600006T
Supporting Justification
for OMB Clearance
of Data Collection Instruments for the
Head Start Region III
“I Am Moving, I Am Learning”
Program Evaluation
February 23, 2007
Kimberly Boller
Robert Whitaker
Linda Mendenko
Barbara Carlson
Renée Nogales
Patricia Del Grosso
Daniel Finkelstein
Submitted to:
Administration for Children and Families
Office of Planning, Research and Evaluation
Seventh Floor West
370 L’Enfant Promenade
Washington, DC 20447
Project Officer:
Submitted by:
Mathematica Policy Research, Inc.
P.O. Box 2393
Princeton, NJ 08543-2393
Telephone: (609) 799-3535
Facsimile: (609) 799-0005
Project Director:
CONTENTS

I. JUSTIFICATION

   A. CIRCUMSTANCES NECESSITATING DATA COLLECTION

      1. The Childhood Obesity Epidemic and Head Start
      2. A Description of I Am Moving, I Am Learning
      3. Rationale for the Proposed Approach to this Evaluation
      4. Proposed Data Collection Activities
      5. Ensuring High-Quality Data

   B. HOW, BY WHOM, AND FOR WHAT PURPOSE INFORMATION IS TO BE USED

   C. USE OF AUTOMATED, ELECTRONIC, MECHANICAL, AND OTHER TECHNOLOGICAL COLLECTION TECHNIQUES

   D. EFFORTS TO AVOID DUPLICATION OF EFFORT

   E. SENSITIVITY TO BURDEN OF SMALL ENTITIES

   F. CONSEQUENCES TO FEDERAL PROGRAM OR POLICY ACTIVITIES IF THE COLLECTION IS NOT CONDUCTED OR IS CONDUCTED LESS FREQUENTLY THAN PROPOSED

   G. SPECIAL CIRCUMSTANCES

   H. FEDERAL REGISTER ANNOUNCEMENT AND CONSULTATION

      1. Federal Register Announcement
      2. Consultation

   I. PAYMENTS OR GIFTS TO RESPONDENTS

   J. CONFIDENTIALITY OF THE DATA

   K. ADDITIONAL JUSTIFICATION FOR SENSITIVE QUESTIONS

   L. ESTIMATES OF HOUR BURDEN OF THE COLLECTION OF INFORMATION

   M. ESTIMATE OF TOTAL ANNUAL COSTS AND BURDEN TO RESPONDENTS OR RECORDKEEPERS

   N. ESTIMATES OF ANNUALIZED COSTS TO THE FEDERAL GOVERNMENT

   O. REASONS FOR PROGRAM CHANGES OR ADJUSTMENTS

   P. PLANS FOR TABULATION AND PUBLICATION AND SCHEDULE FOR THE PROJECT

      1. Publication Plans
      2. Tabulation Plans

   Q. APPROVAL NOT TO DISPLAY THE EXPIRATION DATE FOR OMB APPROVAL

   R. EXCEPTION TO THE CERTIFICATION STATEMENT

II. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

   A. RESPONDENT UNIVERSE AND SAMPLING METHODS

      1. Selecting 30 Programs for Stage 2 Phone Interviews
      2. Selecting 16 Programs for Stage 3 Site Visits

   B. STATISTICAL METHODS FOR SAMPLE SELECTION AND DEGREE OF ACCURACY NEEDED

   C. METHODS TO MAXIMIZE RESPONSE RATE AND TO DEAL WITH NONRESPONSE

      1. Stage 1 Questionnaire
      2. Stage 2 Telephone Interviews and Stage 3 Site Visits

   D. TEST OF PROCEDURES AND METHODS TO BE UNDERTAKEN

   E. PERSONS CONSULTED ON THE STATISTICAL ASPECTS OF THE DESIGN

REFERENCES

APPENDIX A: ADVANCE LETTER, OHS LETTER, AND GRANTEE QUESTIONNAIRE

APPENDIX B: GRANTEE TELEPHONE INTERVIEW GUIDES AND SUMMARY REPORT TEMPLATES

APPENDIX C: GRANTEE SITE VISIT INTERVIEW GUIDES, SUMMARY REPORT TEMPLATES, AND OBSERVATION CHECKLIST DOMAINS

TABLES

I.1 OVERVIEW OF PROPOSED STUDY DESIGN AND DATA COLLECTION ACTIVITIES, BY STUDY STAGE

I.2 PRELIMINARY GRID OF RESEARCH TOPICS AND DATA COLLECTION ACTIVITIES, BY STUDY STAGE

I.3 ESTIMATED RESPONSE BURDEN FOR RESPONDENTS FOR THE REGION III I AM MOVING, I AM LEARNING EVALUATION

FIGURES

I.1 BASIC LOGIC MODEL FOR IM/IL ENHANCEMENTS
I. JUSTIFICATION

A. CIRCUMSTANCES NECESSITATING DATA COLLECTION

1. The Childhood Obesity Epidemic and Head Start

There are two to three times as many obese children in the United States as there were 20 years ago. To arrest this trend, both the Surgeon General and the Institute of Medicine have suggested that obesity prevention efforts should begin early in life, primarily because the prevalence of obesity has increased even among preschoolers, many of whom remain obese into adolescence. Furthermore, obesity increases children’s risk factors for heart disease, and it can erode children’s quality of life by leading to social isolation and stigmatization by peers. Also, when obese children reach adulthood, they are more likely to remain obese and to die prematurely. Some have speculated that today’s children may even experience a shorter life span than their parents because of the obesity epidemic (Ogden et al. 2002; Sherry et al. 2004; Freedman et al. 1987 and 1999; Schwimmer et al. 2003; Latner and Stunkard 2003; Strauss and Pollack 2003; Whitaker et al. 1997; Gunnell et al. 1998; Olshansky et al. 2005).
Head Start, with its almost one million low-income preschool children from diverse racial/ethnic backgrounds, is potentially an ideal setting for developing and implementing obesity prevention efforts. While there are no national estimates of the prevalence of obesity among children in Head Start, it is likely to be 15 to 20 percent, with the highest prevalence among Hispanic children (Story et al. 2006; Dennison et al. 2006; New York City Department of Health and Mental Hygiene 2006; Whitaker and Orzol 2006).
2. A Description of I Am Moving, I Am Learning

Creative approaches to obesity prevention have begun in Head Start with a program enhancement called I Am Moving, I Am Learning (IM/IL). This health promotion and obesity prevention enhancement was developed for Head Start programs by Region III’s Head Start Training and Technical Assistance (T/TA) Network, under the leadership of Nancy Elmore, Head Start program manager, Region III; Amy Requa, Pediatric Nurse Practitioner and Region III TA health specialist; and Dr. Linda Carson, Director of the West Virginia Motor Development Center, West Virginia University. This enhancement is intended to increase the amount of time children spend being physically active and to improve the quality of children’s structured movement and food choices. Specifically, IM/IL has three goals:
Physical activity. To increase the quantity of time children spend in moderate to vigorous physical activities during their daily routine in order to meet national guidelines for physical activity.
Structured movement. To improve the quality of structured movement activities, which are intentionally facilitated by teachers and other adults. The emphasis is on (1) integrating movement into existing Head Start teaching practices as well as children’s daily routines at school and at home, and (2) educating parents and staff about how planned, practiced, structured movement stimulates children’s gross motor development while simultaneously increasing fitness and supporting the maintenance of a healthy weight.
Healthy nutrition. To promote healthy food choices for children each day. In this area, the ultimate behavioral goals might be to promote food consumption patterns that are thought to be most consistent with preventing obesity: consuming age-appropriate portion sizes, increasing the consumption of fruits and vegetables, and decreasing the consumption of sugar-sweetened beverages and foods that provide high energy density and low nutrient density.
Rather than a prescribed stand-alone curriculum, IM/IL offers these goals along with a conceptualized set of potential enhancements that can be used to integrate obesity prevention into existing Head Start routines and practices. Regardless of the curriculum they currently use to provide child development services, individual programs can use IM/IL strategies to develop their own health promotion and obesity prevention enhancements, tailoring them to the specific needs of their children and families and the characteristics and resources of their programs. While some enhancements can involve altering the preschool environment by changing teachers’ classroom practices, others can target the children’s home or neighborhood by educating parents and providing community outreach. The IM/IL approach to obesity prevention recognizes that a young child’s health is affected by what goes on both at the Head Start center and at home, and that these two environments are in turn influenced by the social and structural environment of the community in which they live.
There has been no formal or systematic evaluation of the effectiveness of IM/IL. However, the T/TA contractor in Head Start Region III developed a brief report summarizing anecdotal observations from a previous pilot of IM/IL training conducted in 17 Region III Head Start programs (Region III ACF with Caliber 2005). Some programs indicated that they made positive changes, mostly related to increased movement among children; however, information about these behaviors was not systematically collected or examined. In addition, programs indicated some success in engaging staff and parents in their local IM/IL activities.
On the basis of this pilot program, Region III’s Head Start managers and T/TA contractor broadened IM/IL training in the spring of 2006 to reach 65 additional programs in Region III. Region III purposefully selected these programs based on programmatic strengths suggesting a high probability of being able to successfully implement IM/IL enhancements. The IM/IL training uses a “train-the-trainer” model, which began with a 2.5-day workshop in which the “trainers were trained.” In the spring of 2006, three such training events were held across Region III, and each of the 65 programs sent a team of up to five representatives to one of them. The program representatives may have included the program director and the health manager, as well as the family and community partnerships manager and the child development and education manager. These representatives then returned to their home program to train other staff. Region III covered the costs of the spring 2006 training event, but the programs received no additional funding to implement IM/IL.
3. Rationale for the Proposed Approach to this Evaluation

The evaluation proposed here is focused on understanding what occurred in each of these 65 Region III Head Start Programs following their spring 2006 IM/IL training event. The Office of Planning, Research and Evaluation (OPRE) in the Administration for Children and Families (ACF) contracted with Mathematica Policy Research, Inc. (MPR) to conduct this two-year evaluation. One of its primary aims is to understand the theories of change that underlie the way in which IM/IL is implemented in local programs. Accordingly, an overarching theory of change has been developed, and it is depicted in the logic model in Figure I.1. The model has five domains: (1) behavioral goals, (2) implementation strategies, (3) program enhancements, (4) intermediate outcomes, and (5) child outcomes. The focus of the evaluation is on domains 1 to 4.
This evaluation is not designed to measure child health outcomes (domain 5), such as children’s height, weight, dietary behavior, or physical activity levels, although it is ACF’s long-term evaluation goal to determine whether IM/IL enhancements are effective in changing child outcomes. To inform further expansions of IM/IL, it would be highly desirable to conduct a formal evaluation of IM/IL effectiveness in changing child outcomes.
ACF has undertaken an implementation study as the first stage of evaluation because successful implementation of the IM/IL enhancements is a necessary condition for these enhancements to be effective in changing outcomes. Currently, there is no information available about which, if any, of the individual IM/IL enhancements created by Head Start programs have been successfully implemented. Assessing implementation before effectiveness is especially important in this evaluation for two other reasons. First, the IM/IL enhancements are not prescribed in the
form of a manual or curriculum. Rather, each Region III program that received training was encouraged to create its own version of IM/IL that was responsive to the program’s context.
Before examining effectiveness, we need first to examine how programs took what they learned in the spring 2006 regional training and implemented IM/IL enhancements in their own programs. A second reason for beginning with an implementation evaluation is that the long-term outcome being targeted—obesity prevention—is new for Head Start programs, which have historically focused primarily on children’s cognitive and social-emotional skills and included a relatively limited focus on health. Despite the importance of normal body weight for children’s health and well-being, most systematic efforts to prevent obesity in children have not been successful (Summerbell et al. 2005), which further highlights the need to assess implementation carefully and to determine whether IM/IL enhancements are reaching children not only during the Head Start day, but at home and in the community.
Thus, in addition to understanding programs’ theories of change and examining intermediate outcomes, the other specific goals of this evaluation are to document (1) how well the train-the-trainer model works in the Head Start context and for the topic area covered; (2) the challenges, barriers, benefits, and opportunities encountered through the IM/IL enhancement; (3) the infrastructure and resources required to sustain IM/IL enhancements; and (4) what final child outcomes might reasonably be measured in a larger study of IM/IL impacts. These goals have been shaped by ACF’s experience with evaluations (see Section I.F) and are intentionally general so that they can address the wide range of enhancements and levels of implementation that may be encountered across the 65 programs. For example, MPR has developed an overarching theory of change (Figure I.1), and the data collection will help identify the specifics of this logic model as well as any important variations in it. Similarly, a general evaluation goal will be to assess sustainability. In the context of this evaluation, the most promising implementation strategies are those that are sustainable, but the specific markers of sustainability may differ depending on the type of enhancements being implemented. These markers might include the following:
Enhancements implemented in the 2006-2007 school year were being carried forward in the 2007-2008 school year.
In the 2007-2008 school year, teachers reported that they had been trained and were still receiving ongoing refresher training and technical assistance to help them implement IM/IL enhancements.
IM/IL enhancements had ongoing leadership even if prior leaders had left the programs.
Parents reported that they were being reached by the IM/IL enhancements.
IM/IL enhancements involved community partnerships that were sustained between school years.
4. Proposed Data Collection Activities

To address the goals of the evaluation, we will collect data in three stages: (1) an early 2007 survey of program directors from all 65 Region III programs that attended the spring 2006 IM/IL training, (2) spring 2007 telephone interviews with staff in a subset of 30 purposefully selected programs (senior managers responsible for IM/IL implementation and two teachers), and (3) fall 2007 site visits to a subset of 16 purposefully selected programs. We have designed a data collection plan that will minimize respondent burden and maximize our ability to rank and describe the programs’ implementation of IM/IL over time. Table I.1 presents the main features of our design and the timeline for data collection.
TABLE I.1

OVERVIEW OF PROPOSED STUDY DESIGN AND DATA COLLECTION ACTIVITIES, BY STUDY STAGE

Pre Stage 1
   Main activities: Analysis of Program Information Report data
   Programs: All Region III
   Timing: February 2007

Stage 1
   Main activities: 20-minute self-administered questionnaire
   Programs: 65 programs trained on IM/IL in spring 2006
   Timing: February-March 2007

Stage 2
   Main activities: One 1-hour manager telephone interview, two 30-minute teacher telephone interviews, document review
   Programs: Purposeful sample of 30 programs selected based on Stage 1 implementation level (a)
   Timing: May-June 2007

Stage 3
   Main activities: Site visits with staff interviews, teacher focus groups, parent focus groups, and document review
   Programs: Purposeful sample of 16 programs selected based on Stage 2 implementation level (b)
   Timing: Fall 2007

(a) The purposefully selected programs will include those with higher levels of implementation (20 programs) and programs with lower levels of implementation (10 programs). For details, see Section II.A.1.

(b) The purposefully selected programs will include those with some to higher levels of implementation (12 programs) and programs with none to lower levels of implementation (4 programs). For details, see Section II.A.2.
Table I.2 shows how each data collection stage and activity is tied to the major research questions/topics, indicating which questions/topics are addressed in each stage and the number of programs involved. As the evaluation proceeds through these three stages, we will obtain increasingly specific information and focus with greater intensity on (1) the contexts that may affect implementation, (2) engagement in the spring 2006 training of trainers, (3) how and why programs chose to implement and support certain IM/IL enhancements based on their theory of change, (4) whether and how programs put systems in place that will sustain the enhancements, (5) what the implementation challenges and supports are, and (6) assessment of intermediate outcomes.
TABLE I.2

PRELIMINARY GRID OF RESEARCH TOPICS AND DATA COLLECTION ACTIVITIES, BY STUDY STAGE

Research Topics                                      PIR    Qst    Tel    Staff   T-FG   P-FG

1. Contexts across programs that affect
   implementation                                     P      P      P       S      S      S

2. Participation in regional train-the-trainer
   training                                                  P      P       P      S

3. Implementation of site-level training and
   theory of change                                          P      P       P      P

4. Sustainability and internal/external
   resources                                                        S       P      P      S

5. Challenges and supports                                   P      P       P      P      P

6. Measurable outcomes                                              S       P      P      P

Note: P indicates a primary data source, and S indicates a secondary data source to address each research question. PIR = Program Information Report analysis (Pre Stage 1); Qst = Stage 1 questionnaire; Tel = Stage 2 telephone interviews; Staff, T-FG, and P-FG = Stage 3 site visit staff interviews, teacher focus groups, and parent focus groups.
Through paper-and-pencil questionnaires with mostly closed-ended items sent to all 65 program directors, we will confirm basic descriptive information about the program context (obtained with permission from the Office of Head Start’s Program Information Report data), capture information that will allow us to describe the range of enhancements that have been initiated, and assess the progress of implementation, including barriers, challenges, opportunities, and ways the spring 2006 training prepared programs for implementation (see Appendix A). We plan to send out the questionnaires in early 2007, after we receive clearance from the Office of Management and Budget (OMB). Based on experience with similar questionnaires and pretesting, we expect this questionnaire to take about 20 minutes to complete. We will summarize the survey data using descriptive statistics and report the findings from the survey and document review together with the Stage 2 telephone interview findings in an interim memorandum. The questionnaire will provide the broadest measure of implementation across all 65 programs and will enable us to identify what proportion of programs (1) implemented no or very few IM/IL enhancements, (2) implemented some enhancements, or (3) implemented many enhancements.
Through telephone interviews with a subset of 30 purposefully selected programs (one interview per program with the lead manager responsible for IM/IL implementation and two interviews per program with teachers), we will learn about barriers and facilitating factors in both high- and low-implementing programs as well as intermediate outcomes. We expect that the manager interview will take about 60 minutes to complete and that each teacher interview will take 30 minutes (see Appendix B). We plan to conduct the telephone interviews in spring 2007. In preparation for each telephone interview, interviewers will review Program Information Report (PIR) data and Stage 1 survey data to abstract basic information about the program and its implementation status, such as whether the program conducted staff training, changed any policies related to food offered to children or time scheduled for outdoor play, or purchased any resources or new equipment to support implementation. We will also request that programs send MPR relevant documents (either electronically or with a prepaid mailer), including action plans developed during or after the training, training and technical assistance plans, daily schedules, policies related to encouraging children’s physical activity and good nutrition, and forms used to track enhancement implementation (we expect document gathering and sending to take about 30
minutes). For writing up notes from the telephone interviews, we will use a standard format that will serve as a basis for brief site profiles and cross-site analyses to be included in the interim memorandum (see Appendix B). The memorandum will provide the OHS, Region III, and OPRE with information they can use to guide additional training and technical assistance efforts focused on supporting the IM/IL enhancements programs develop. It will also provide implementation lessons the government can use to guide decisions about how to use the spring 2006 train-the-trainer approach in other regions.
In fall 2007, we will conduct site visits to 16 of the 30 programs interviewed in Stage 2. The analysis of Stage 1 and Stage 2 data will inform our approach to the Stage 3 site visits; while keeping the total burden hours the same, we may adjust the approach, such as visiting fewer programs for more in-depth case studies. Each visit will last about 1.5 days. We expect that staffing configurations will vary across programs, and the number and type of staff we interview will probably also vary. Nevertheless, we have identified two types of respondents we expect to interview: (1) program directors; and (2) key staff who work on the IM/IL enhancement, such as the health services manager, the education manager, and family service workers. In addition to interviews with these people, site visitors will conduct a focus group with teachers/home visitors and a focus group with parents at each site. We expect that each teacher/home visitor focus group will have 4 to 10 participants, depending on the size of the program, and that each parent focus group will have 8 to 12 members. To conduct the site visit activities, we will use semistructured discussion guides for each type of informant we talk to on site (see Appendix C). For writing up notes from the site visits, we will use a standard format that will serve as a basis for updating the Stage 2 brief site profiles and conducting the cross-site analyses to be included in the final report (Appendix C).
To examine key intermediate outcomes at the classroom and program levels, we will also conduct an observation of one purposefully selected classroom per program, and we will review each program’s menus and written policies regarding minimum daily moderate to vigorous physical activity (MVPA), the amount of outdoor play time, and the types of snacks and treats families can send from home. Appendix C includes a list of the constructs we will cover as part of the observation.
As indicated in Table I.2, the three key overarching research questions being addressed in the site visits are (1) whether and how programs put systems in place that will sustain the enhancements, (2) what the implementation challenges and supports are, and (3) how programs have focused their efforts on intermediate and long-term outcomes. Below are some specific examples for each of these goals:
The timing of the site visits in the fall of 2007 will allow us to address key aspects of sustainability. With this timing, we can examine what the programs do to keep the IM/IL enhancements going into the next program year. In addition, we can potentially arrange the timing of particular site visits in the fall of 2007 to observe training of new staff and/or refresher training of previously trained staff.
In the focus groups with teachers and parents, we can identify different challenges and supports than we might identify from the Stage 2 phone interviews in which we talk to managers and one or two teachers.
As part of the classroom observations, menu, and written policy review, we will document how consistent classroom-level and program-level implementation is with the overall goals of IM/IL.
Our interview and focus group protocols specifically ask respondents about their perceptions of changes in their own and the program’s approach to the three IM/IL goals. In this way we will assess program efforts, successes, and challenges in affecting intermediate outcomes.
Any subsequent evaluation of the effectiveness of IM/IL would require identifying a set of measures viewed as feasible by both teachers and parents. We will obtain these data from the focus groups.
5. Ensuring High-Quality Data

We will take several steps to ensure consistent, high-quality data collection across programs and data collection stages. MPR will work with OPRE, the OHS, and Region III to prepare advance letters that encourage programs to participate in the Stage 1 survey (see Appendix A). We will send the questionnaires to programs by express mail, which attracts attention when the envelope arrives and allows us to track delivery, and we will enclose a postage-paid return envelope for the completed questionnaire. During the Stage 1 questionnaire data collection period, we will offer respondents a toll-free help line and monitor any calls to it daily. We will use e-mail to encourage slow responders to complete and return their questionnaires. If necessary, we will call nonrespondents and offer to help them complete the questionnaire over the telephone.
Before conducting the Stage 2 telephone interviews and the Stage 3 site visits, we will provide comprehensive training for the interview and site visit teams to review the study’s objectives, the research design, and the data collection procedures. After conducting an initial set of interviews and site visits, we will reconvene our team to debrief them, discuss any issues that have come up, and ensure that MPR staff are following consistent procedures. In addition, senior team members will review and provide feedback on notes from initial interviews and site visits.
B. HOW, BY WHOM, AND FOR WHAT PURPOSE INFORMATION IS TO BE USED

This study will inform staff in OPRE, the OHS, ACF Region III, and Head Start programs in Region III and across the nation. Policymakers and program administrators will use findings from this evaluation to shape future initiatives that aim to promote health and prevent obesity among Head Start children and other low-income preschoolers. In addition, the information will be useful for Head Start program operators and technical assistance providers as they seek to improve their efforts to prevent childhood obesity by promoting physical activity and healthy eating habits among Head Start children and their families.
C. USE OF AUTOMATED, ELECTRONIC, MECHANICAL, AND OTHER TECHNOLOGICAL COLLECTION TECHNIQUES

The study will not use any automated methods to collect data. Because of the nature of the data to be collected, these techniques would not be suitable.
D. EFFORTS TO AVOID DUPLICATION OF EFFORT

There are few existing sources of implementation information about the IM/IL enhancement beyond an anecdotal summary report on a pilot program conducted in 17 Head Start programs in 2004 (Region III ACF with Caliber 2005). The summary report, developed by the Region III T/TA contractor, was not based on systematic data collection. Because this is a new initiative that has not been evaluated by a third party, there is no existing source that contains the kinds of information the implementation evaluation of the Region III IM/IL enhancement will provide.
E. SENSITIVITY TO BURDEN OF SMALL ENTITIES

The information that will be requested from program staff is the minimum required to meet the study objectives. Efforts have been made to minimize respondent burden at each stage of data collection. For example, the Stage 1 questionnaire uses check boxes and closed-ended responses wherever possible to avoid the need for burdensome written responses. In Stages 2 and 3, MPR staff will contact Head Start program directors in advance to explain the purpose of the data collection and identify optimal dates and times for the interviews and visits. For the Stage 2 telephone interviews with 30 programs, we will provide Head Start directors/managers with alternative dates and allow them to select those most convenient for program staff. Following the initial contact, we will send the program director a letter that details what we hope to accomplish during the telephone interviews, the names of those we need to interview, and the approximate time needed for each interview. For the Stage 3 site visits to 16 programs, we will also provide instructions about whom to include in focus groups (which staff members and families) and the time needed to conduct such groups.
F. CONSEQUENCES TO FEDERAL PROGRAM OR POLICY ACTIVITIES IF THE COLLECTION IS NOT CONDUCTED OR IS CONDUCTED LESS FREQUENTLY THAN PROPOSED

The implementation evaluation of IM/IL will provide information needed for training and technical assistance, for improving the train-the-trainer model, and for policy decisions. An implementation evaluation was chosen as the first step in evaluating IM/IL for several reasons. Lipsey and Cordray (2000) suggest that program effectiveness is influenced by inconsistent and incompletely delivered services and by differential subgroup and program utilization. Assessing implementation variation and service delivery not only assists in future evaluations of program effectiveness but can also provide information for program improvement at earlier stages in the evaluation process (Lipsey and Cordray 2000). In addition, Gilliam et al. (2000) have concluded that “outcome evaluations should not be attempted until well after quality and participation have been maximized and documented in a process evaluation. Although outcome data can determine the effectiveness of a program, process data determine whether a program exists in the first place.”
The Office of Special Education Programs examined the replicability of the Parents Interacting With Infants (PIWI) program using implementation research (McCollum et al. 2007). PIWI is similar to IM/IL in that it is theory-based and is not prescriptive, allowing for adaptation to the local community. The programs differ in that PIWI provided extensive training and ongoing onsite support for implementation (10 to 12 months). The PIWI implementation evaluation resulted in case studies of two low- and two high-implementing programs. The authors created a list of characteristics of high-fidelity programs, including program/community context (middle to upper income), service delivery (consistent parent-child dyads during a set of group sessions), staff characteristics and skills (stable staff), and program culture (the PIWI model selected as a means to focus on and support parent-child relationships). However, many of these characteristics are inappropriate to generalize to Head Start (such as middle- to upper-income participants), and others were specific to coordinating parent-child play groups for children with disabilities rather than to train-the-trainer models for Head Start teachers.
In an evaluation of infant-mental-health-based quality improvement models within Early Head Start, researchers noted that staff buy-in was one of the most important variables for implementation (Brophy-Herb et al. 2001). Without buy-in, staff gave the intervention low priority among their daily tasks, and implementation and data collection suffered. That study’s initial implementation findings have led to changes in measurement in future rounds of the evaluation. The Free to Grow Project is a substance abuse prevention program, conducted in both school and Head Start settings, that included a two-year development phase and a three-year implementation phase (Harrington 2001). The project was theoretically grounded and had good grassroots organization. However, this organization was not sufficient to overcome the specific challenges of working within the Head Start community that hindered implementation, including program performance and leadership changes within Head Start centers. The lesson for future interventions within Head Start was the importance of attending to the characteristics of the community, the grantee, and the Head Start center.
An implementation evaluation was also conducted for the Early Head Start program; findings indicate that implementation varied across programs (Administration for Children, Youth and Families 1999 and 2000). This variability was further examined in the Early Head Start Impact Study, which followed the implementation evaluation. The findings indicate that program impacts varied by level of implementation: programs that fully implemented the EHS comprehensive program standards had a stronger pattern of impacts for children and families than those that did not implement at similar levels (Administration for Children and Families 2002). Without the implementation evaluation, these differential impacts would not have been detected.
Although ACF and others have conducted previous implementation evaluations, the lessons learned are not fully transferable to this implementation evaluation of IM/IL, which is unique. IM/IL enhancements are designed by the individual Head Start center to correspond with the community it serves, and no data have been collected on the types of enhancements programs have created. The data collected in this evaluation are critical to our understanding of how Head Start programs implement enhancements and strategies promoted in the IM/IL training and of what type of foundation these enhancements provide for obesity prevention efforts.
If these data were not collected, we would not be able to describe the implementation successes and challenges that the programs experience in trying to implement IM/IL or how the IM/IL enhancements are sustained and evolve over time. Moreover, without these data, OPRE, the Office of Head Start, and Region III would not be able to provide guidance to other Head Start programs about how to implement strategies that show promise for being replicable and sustainable. For example, the lessons learned from this implementation evaluation will be applied to those providing training and technical assistance to other programs implementing IM/IL. Since little is known about preschool obesity prevention efforts in Head Start, the study will make a contribution to the field and to the existing body of research.
G. SPECIAL CIRCUMSTANCES

There are no special circumstances.
H. FEDERAL REGISTER ANNOUNCEMENT AND CONSULTATION

1. Federal Register Announcement

The initial Federal Register announcement was published on June 7, 2006, in volume 71, no. 109, p. 32967. The second notice was published on November 8, 2006, in volume 71, no. 216, pp. 65531-32. For additional information, see the OS certification statement.
2. Consultation

People outside ACF who have been consulted on the feasibility of this study and the availability of data sources are:
Louisa Tarullo, Mathematica Policy Research, Inc.
Mary Kay Fox, Mathematica Policy Research, Inc.
Christine Ross, Mathematica Policy Research, Inc.
Mary Story, University of Minnesota
Russell Pate, University of South Carolina
I. PAYMENTS OR GIFTS TO RESPONDENTS

There will be no payments or gifts to programs for participating in Stage 1 and Stage 2. For the 16 programs selected to participate in the Stage 3 site visits, we plan to provide a gift of educational or classroom supplies valued at $50. MPR will pay each family $20 for participation in the site visit focus groups. This $20 should cover the cost of travel and other expenses incurred to attend the focus group. Where possible, we will arrange to provide on-site child care during the parent focus group.
J. CONFIDENTIALITY OF THE DATA

This study is being conducted in accordance with all relevant regulations and requirements, including the Privacy Act of 1974 (5 USC 552a), the Privacy Act Regulations (34 CFR Part 5b), and the Freedom of Information Act (5 USC 552) and related regulations (41 CFR Part 1-1, 45 CFR Part 5b, and 40 CFR 44502). As part of the introduction to each data collection instrument, we clearly state that none of the information respondents provide will be used for monitoring or accountability purposes and that the results of the study will be presented in aggregate form only.
ACF does not have the statutory authority to provide assurances of confidentiality. Therefore, the term “confidential” is not used. Instead, we will use the expression “private to the extent permitted by law” in all evaluation protocols.
MPR routinely uses the following safeguards to protect data security:
All employees at MPR sign a confidentiality pledge that emphasizes the importance of confidentiality and describes their obligations.
Identifying information will be maintained in the database in separate tables, which are linked to the data entry screens only by sample identification number.
Access to the file linking sample identification numbers with identifying information will be limited to a small number of people with a need to know this information.
Access to hard-copy documents will be strictly limited. Documents are stored in locked files and cabinets. Discarded material is shredded.
Audiotapes of focus group discussions will be destroyed after data analysis is complete.
Computer files will be protected with passwords, and access will be limited to specific users. Especially sensitive data are maintained on removable storage devices that are kept physically secure when not in use.
The contract stipulates that the ACF owns all data collected in this ICR. MPR will code and clean all data. All data will be stripped of any identifying information before it is transmitted to ACF.
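To illustrate the separation of identifying information described above, the following minimal sketch (in Python) shows one way a record could be split into an identifier row and a de-identified analysis row linked only by a sample identification number. The field names are hypothetical illustrations, not the study’s actual database design.

    # Illustrative sketch only; field names are hypothetical. Identifying
    # information is stored apart from response data, linked solely by a
    # sample identification number, and is dropped before files are
    # transmitted to ACF.
    IDENTIFYING_FIELDS = {"respondent_name", "phone", "address"}

    def split_record(raw_record):
        """Split one raw record into an identifier row and a de-identified
        analysis row, linked only by the sample identification number."""
        sample_id = raw_record["sample_id"]
        identifiers = {k: v for k, v in raw_record.items()
                       if k in IDENTIFYING_FIELDS}
        analysis = {k: v for k, v in raw_record.items()
                    if k not in IDENTIFYING_FIELDS}
        return (sample_id, identifiers), analysis

    # Only the de-identified analysis rows would appear in files
    # delivered to ACF.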
K. ADDITIONAL JUSTIFICATION FOR SENSITIVE QUESTIONS

We are not collecting any sensitive data. We will ask respondents about the characteristics and needs of the children and families they serve, the services being provided through the Head Start IM/IL enhancement, and their experiences implementing the enhancement. None of these questions is considered sensitive.
L. ESTIMATES OF HOUR BURDEN OF THE COLLECTION OF INFORMATION

Table I.3 provides a summary of the number of respondents for each information collection, estimated response time per respondent, and total response time. We estimate that in addition to the director’s survey and interview completion time described in this package, it will take directors about 30 minutes to locate and send us requested documents (for example, T/TA plans and forms used to track IM/IL activities). That time is included in Table I.3 as part of the average burden per response for the directors. We estimate the total study burden of responding to the questionnaire, telephone interviews, and site visit interviews, and participating in the focus groups to be 593.6 hours in 2007. We base our time estimates for the questionnaire, telephone interviews, and site visit activities on our experience using similar survey instruments in the Study of Early Head Start Programs; similar telephone interviews in the Head Start Training and Technical Assistance Quality Assurance Study and the Early Head Start Enhanced Home Visiting Pilot Project; and similar protocols for site visits in the national evaluation of Early Head Start, the evaluation of the Early Head Start Enhanced Home Visiting Pilot Project, and the Head Start Training and Technical Assistance Quality Assurance Study.
M. ESTIMATE OF TOTAL ANNUAL COSTS AND BURDEN TO RESPONDENTS OR RECORDKEEPERS

Neither programs implementing IM/IL nor parents invited to focus groups will incur any costs for participating in evaluation activities. They will not be asked to keep any records for the evaluation.
If programs are unable to send us the documents we request electronically, we will include prepaid mailers for them to send hard copies to us. If there are many documents to copy, we will ask programs to mail the originals to MPR; we will copy them and send them back. We have used this approach successfully in other studies.
N. ESTIMATES OF ANNUALIZED COSTS TO THE FEDERAL GOVERNMENT

The estimated cost to the federal government through May 2008 of the Region III IM/IL evaluation—including designing and administering the data collection instruments; collecting, processing, and analyzing the data; and preparing reports summarizing the results—is $592,442, or $296,221 per year. This estimate is based on MPR’s experience managing data collections of this type.
O. REASONS FOR PROGRAM CHANGES OR ADJUSTMENTS

This is a new data collection.
P. PLANS FOR TABULATION AND PUBLICATION AND SCHEDULE FOR THE PROJECT

1. Publication Plans

As part of this data collection, we will produce an interim memorandum, a final report, and two issue briefs. The interim memorandum, due in June 2007, will focus on findings from the Stage 1 questionnaire completed by program directors and the Stage 2 telephone interviews conducted with program directors and teachers/home visitors. The first issue brief, which will focus on presenting the interim findings to practitioners, is due in September 2007. The final report, to be completed in March 2008, will summarize findings from all the data collection activities—including the Stage 1 questionnaire, Stage 2 telephone interviews with directors and teachers/home visitors, and Stage 3 site visits to 16 programs. The second issue brief, which will focus on presenting the final report findings to practitioners, is due in May 2008. To supplement dissemination of the reports, MPR staff will also seek to present their research at professional conferences. With approval from ACF, we will submit our research for consideration at the Biennial Head Start Research Conference and other relevant professional meetings.
2. Tabulation Plans

Our descriptive analysis using data collected from the program directors/lead managers for IM/IL in spring 2007 will focus on describing how programs are implementing IM/IL enhancements. We will also determine what barriers affected implementation. We will prepare tables to present frequency distributions and means for items across all programs and within key program and demographic subgroups. In addition to the items that measure level of implementation, we will analyze descriptive information about program approach, enrollment, and characteristics of children and families served (abstracted from PIR data). Other contextual items of interest include urbanicity, state, and engagement of community partners.
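As a concrete illustration of these tabulations, the short sketch below (Python, using the pandas library) computes a frequency distribution and means overall and within a subgroup. The file and variable names are hypothetical placeholders for the analysis file we will construct, not actual study variables.

    import pandas as pd

    # Hypothetical analysis file: one row per program, combining Stage 1
    # questionnaire items with context abstracted from PIR data.
    df = pd.read_csv("stage1_analysis_file.csv")  # placeholder file name

    # Frequency distribution of overall implementation success (item C9d).
    print(df["c9d_overall"].value_counts().sort_index())

    # Means of the implementation items across all programs ...
    items = ["c9a_mvpa", "c9b_movement", "c9c_nutrition", "c9d_overall"]
    print(df[items].mean())

    # ... and within a key contextual subgroup, such as urbanicity.
    print(df.groupby("urbanicity")[items].mean())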
One important use of these data is for the purposeful selection of 30 programs for the Stage 2 evaluation. As detailed later in Section II, we will use data from question C9.
C9. There are many challenges your program may have faced while trying to implement IM/IL activities. How would you rate the success of your program in implementing the following on a scale from 1 to 5, where 1 is "not at all successful" and 5 is "extremely successful"?
MARK ONLY ONE NUMBER IN EACH ROW

                                         Not At All                             Extremely
                                         Successful                             Successful
a. Moderate to vigorous physical activity    1         2         3         4         5
b. Structured movement experiences           1         2         3         4         5
c. Healthy nutrition choices                 1         2         3         4         5
d. IM/IL overall                             1         2         3         4         5
Programs will be assigned a value from 1 through 5 based on their response to question C9D. Respondents will not be asked this question if they indicate in question C1 that they have not tried to implement any IM/IL activities. These programs will be assigned a value of 1 for this question.
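The assignment rule just described is simple enough to state as a short function. The sketch below (Python) is only an illustration of the stated rule, with parameter names of our own choosing rather than actual study variables.

    def implementation_value(c1_tried_any, c9d_rating):
        """Return the 1-5 implementation value described above.

        c1_tried_any: question C1 response; False means the program has
            not tried to implement any IM/IL activities (C9 is skipped).
        c9d_rating: question C9D response (1-5), when asked.
        """
        if not c1_tried_any:
            return 1  # programs that skipped C9 are assigned a value of 1
        return c9d_rating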
Because of the large number of program sites in the evaluation, we will use a qualitative analysis software package, Atlas.ti (Scientific Software Development 1997), to organize and code the data collected during the telephone interviews and site visits. This software will allow the research team to use a structured coding system for organizing and categorizing data, entering the data into a database according to the coding scheme, and retrieving data that are linked to primary research questions. After the telephone interview and site visit information is coded, data can be retrieved from this system on particular research questions across all sites or individual respondents within sites, as well as by type of respondent (for example, program director, health manager, or teacher). This approach will facilitate examination of how programs vary in their program models, enhancement strategies, community partnerships, implementation successes and challenges, and other program features.
Q. APPROVAL NOT TO DISPLAY THE EXPIRATION DATE FOR OMB APPROVAL

All study materials will display the OMB number and expiration date.
R. EXCEPTION TO THE CERTIFICATION STATEMENT

No exceptions to the certification statement are requested.
II. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

A. RESPONDENT UNIVERSE AND SAMPLING METHODS

The small and purposefully selected (i.e., nonrandom) sample of 65 programs that were chosen by Region III to participate in the spring 2006 IM/IL training and that will participate in this evaluation will not permit us to generalize our findings to other Head Start programs or even to other programs in Region III. In addition, with this small, nonrandom sample of programs, we cannot apply statistical methods to select subsamples of programs for more intensive case studies. Therefore, we propose an approach to selecting subsamples for Stages 2 (30 programs) and 3 (16 programs) of the evaluation that is purposive and attempts to describe the overall experience of the programs that attended the spring 2006 IM/IL trainings. These 65 programs operate in a variety of contexts, and our plan for selecting purposive subsamples aims to represent those varied contexts so that we can provide information that is useful to a variety of other programs in trying to implement IM/IL.
1. Selecting 30 Programs for Stage 2 Phone Interviews

With the small number of programs in this evaluation, it is not feasible to stratify on multiple variables when selecting the 30 programs for Stage 2 of the evaluation. Therefore, we have selected a single stratifying characteristic: program size (“small” versus “large”). ACF purposefully selected program size as the stratifying characteristic to ensure that large programs are included in the evaluation. There are very few large grantees; such grantees are usually in cities, and ACF is interested in how implementation might differ by grantee size.
We are especially interested in this stratification because program size may modify the relationship between the spring 2006 IM/IL training and the process of implementation. By stratifying on program size, we are not inferring that we will be able to make separate conclusions about large and small programs, much less be able to generalize those conclusions to other programs. Rather, the stratification is an effort to observe potential differences by program size, if they exist, that might otherwise be missed without stratification. Such differences may be meaningful in providing technical assistance to other programs implementing IM/IL.
We will stratify the programs by size (number of children served), using data from the PIR, and divide them into “large” and “small” groups based on a median split. From each size group, we will then select the 10 highest- and the 5 lowest-implementing programs, yielding 30 programs for phone interviews: 20 “higher-implementing” and 10 “lower-implementing” programs in total. We will determine a program’s success in implementation by ranking programs based on responses to questionnaire item C9D.
C9. There are many challenges your program may have faced while trying to implement IM/IL activities. How would you rate the success of your program in implementing the following on a scale from 1 to 5, where 1 is "not at all successful" and 5 is "extremely successful"?
MARK ONLY ONE NUMBER IN EACH ROW

                                         Not At All                             Extremely
                                         Successful                             Successful
a. Moderate to vigorous physical activity    1         2         3         4         5
b. Structured movement experiences           1         2         3         4         5
c. Healthy nutrition choices                 1         2         3         4         5
d. IM/IL overall                             1         2         3         4         5
Programs will be assigned a value from 1 through 5 based on their response to question C9D. Respondents will not be asked this question if they indicate in question C1 that they have not tried to implement any IM/IL activities. These programs will be assigned a value of 1 for this question. If the 10 highest- and 5 lowest-implementing programs cannot be selected because of ties in program rank, a re-ranking of tied programs can be established based on the responses to questions C9A, C9B, and C9C (higher-implementing programs being ones with higher total scores across these three other items).
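To summarize the full procedure, the sketch below (Python) walks through the median split on program size, the ranking on item C9D with the tie-breaking rule, and the selection of the 10 highest- and 5 lowest-implementing programs from each size stratum. It is an illustration of the procedure as described, not project code; the data structure and field names are assumptions made for the example.

    # Each program is represented as a dict holding its PIR enrollment
    # and questionnaire responses; the field names are illustrative only.
    def select_stage2(programs):
        # Median split on program size (number of children served, PIR).
        sizes = sorted(p["enrollment"] for p in programs)
        median = sizes[len(sizes) // 2]
        large = [p for p in programs if p["enrollment"] >= median]
        small = [p for p in programs if p["enrollment"] < median]

        selected = []
        for stratum in (large, small):
            # Rank by the C9D implementation value (programs answering
            # "no IM/IL activities" at C1 were already assigned a value
            # of 1); ties are re-ranked by the total of C9A, C9B, and C9C.
            ranked = sorted(
                stratum,
                key=lambda p: (p["c9d"], p["c9a"] + p["c9b"] + p["c9c"]),
            )
            selected += ranked[-10:]  # 10 highest-implementing programs
            selected += ranked[:5]    # 5 lowest-implementing programs
        return selected               # 15 per stratum, 30 programs in all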
At the time of the director/manager interview, we will request a list of all centers/classrooms/teachers where IM/IL is being implemented. We will use this list to randomly select two teachers per program to interview, to obtain the teacher perspective on how IM/IL is being implemented in the classroom. In making a random selection of teachers at each site, we recognize that the programs themselves have not been randomly selected, nor have the classrooms been randomly selected to implement IM/IL. Nevertheless, this approach to teacher selection will help ensure that we interview teachers with a range of implementation experiences.
2. Selecting 16 Programs for Stage 3 Site Visits

We will combine data from Stage 1 and Stage 2 to develop logic models for the 30 programs interviewed at Stage 2. We expect to find commonalities across the logic models that will allow us to group programs by their overarching approaches and theories of change.
We will purposefully select the 16 programs for site visits based on data obtained from the telephone interviews in Stage 2 that will allow us to examine the logic model that guides the programs’ IM/IL implementation (see Figure I.1 and Section I.A.3). For each program, we will create a logic model using information derived from structured questions about each of the five domains in the logic model. From these models, we will attempt to develop one to three general logic models that best explain the theories of change used across programs. It is possible that some programs will have difficulty describing their theory of change, but our interviewers will work carefully with them to capture their approach. Based on the logic models that are created in Stage 2, we will select 12 higher-implementing programs for Stage 3 that represent programs from each of the major logic models, if more than one exists.
We will also choose 4 programs from the lower-implementing group. We anticipate that implementation will occur along a spectrum. In programs at the lowest end of the spectrum of implementation, little may be accomplished. For example, at the lowest end would be any programs that indicated in survey question C1 that they have not tried to implement any IM/IL activities. In selecting four programs from the lower-implementing group, we would avoid a site visit to any programs if we felt that we could gain little information beyond what we had learned from the phone interviews. So, for example, if teachers were not trained or enhancements had not reached parents, it would not be an efficient use of project resources to conduct site visits in which a major focus would be teacher and parent focus groups. We will have information about characteristics of these programs from the survey and telephone interviews, however.
To recruit participants for the teacher/home visitor focus groups, we will ask program directors to invite all lead teachers and assistant teachers (or home visitors if it is an Early Head Start program) implementing IM/IL in their largest location to participate in two separate focus groups. Given schedules and the need to cover teacher time out of the classroom, on average, we expect to speak with five teachers/home visitors per site. To recruit participants for the parent focus groups, we will ask programs to invite 20 parents to participate, twice as many as we expect to attend the focus group (we will ask them to invite parents who live within a 45-minute commute to the focus group location). Accounting for refusals and no-shows, we expect that selecting and contacting 20 parents for each group will yield 10 participants, on average.
B. STATISTICAL METHODS FOR SAMPLE SELECTION AND DEGREE OF ACCURACY NEEDED

All 65 Head Start programs that were invited by Region III to participate in a spring 2006 IM/IL training will be part of this evaluation. Because of the purposive manner in which these programs were selected and the small number of programs being evaluated, we will not be able to apply statistical methods to our data analysis. Although the programs will differ in their geographic context and target populations and implementation strategies, we will not have a large enough sample to compare subgroups based on program characteristics.
C. METHODS TO MAXIMIZE RESPONSE RATE AND TO DEAL WITH NONRESPONSE

1. Stage 1 Questionnaire

We expect that the questionnaires will be completed by 85 percent of the sample in early 2007. This estimated completion rate is based on our experience conducting other surveys in Head Start and Early Head Start (such as the Survey of Early Head Start Programs described below). There are several circumstances that favor obtaining a high response rate on this survey: (1) endorsement of the survey by ACF, as evidenced by a cover letter from the Director of Head Start encouraging participation in the survey (see Appendix A); (2) a well-defined population of respondents with up-to-date contact information; and (3) a questionnaire designed with consideration of respondent burden insofar as the questionnaire is of reasonable length, does not ask for information that can be obtained from other sources (such as the PIR), and contains questions that the respondent can answer without investing additional time in record searching. We recently completed a more complex and longer survey of the program directors of all 748 Early Head Start programs and achieved a completion rate of 88 percent.
Based on our previous work, we also expect a high item response rate—more than 90 percent—for all the questions, because no item is sensitive in nature and all items will be relevant to the staff in the programs. As part of fielding our questionnaire, we will send an advance letter to all programs advising them of the survey to come, the topics that it will cover, and the importance of participation. Next, we will mail self-administered questionnaires to programs and include a prepaid self-addressed envelope to return the completed questionnaire. We will rely on e-mail and telephone follow-up to remind participants to complete the questionnaires and, if applicable, to return them to MPR. To ensure an appropriate response rate, staff at MPR will, if necessary, administer the questionnaire to participants by telephone.
2. Stage 2 Telephone Interviews and Stage 3 Site Visits

We expect that all programs selected to participate in Stage 2 and Stage 3 data collection will agree to participate. Our experience with other evaluations of Head Start initiatives indicates that participation rates are typically close to 100 percent (for example, all 69 of the Head Start and Early Head Start programs invited to participate in interviews and site visits for the Head Start Training and Technical Assistance Quality Assurance Study agreed to participate). To help ensure high rates of participation, we will coordinate with the programs to determine convenient dates for telephone interviews and visits. We will mail or fax materials to all programs in advance explaining the purpose of the study and the main topics to be discussed during the interviews. In addition, during the site visits, to make it easier for staff to respond, site visitors will refine the questions so that they are applicable to the program and the role of the respondents being interviewed.
Recruitment of participants for the parent focus groups will require close coordination between the research team and program staff. We will ask program directors at the selected programs to designate staff to help us recruit parents for participation in the focus group. We will discuss the scheduling of the focus groups with site staff. In many sites, we expect to conduct the parent focus groups in the evening to accommodate parents’ work schedules. In addition, as stated in Section I.I, we will pay $20 to each family that participates.
D. TEST OF PROCEDURES AND METHODS TO BE UNDERTAKEN

The questionnaire, telephone interview guides, and site visit interview guides draw heavily on instruments and protocols that were developed for site visits to Head Start and Early Head Start programs for other studies, including the National Evaluation of Early Head Start (Administration for Children and Families 2002), the Early Head Start Enhanced Home Visiting Pilot Evaluation (Paulsell et al. 2006), the Early Head Start Fatherhood Demonstration (Bellotti et al. 2003), the Head Start National Reporting System Quality Assurance Study (Paulsell et al. 2004), the Study of Early Head Start Programs (Vogel et al. 2006), the Head Start Training and Technical Assistance Quality Assurance Study (Rosenberg et al. 2006), and the Head Start Oral Health Initiative Evaluation (Paulsell et al. 2006). We developed the questionnaire using our usual approach to questionnaire design, and we made modifications to telephone interview and site visit protocols based on the specific objectives of this evaluation and on our experience using them in previous studies. We also received extensive input on the questionnaire and protocols from OPRE, OHS, and Region III staff members. A former Head Start director who is currently at the OHS completed the Stage 1 questionnaire as if she were a director who had attended the training. We incorporated her feedback as well.
No one beyond the study team and ACF was consulted on the statistical aspects of the design.
REFERENCES

Administration for Children and Families. “Pathways to Quality and Full Implementation in Early Head Start Programs.” Washington, DC: U.S. Department of Health and Human Services, 2002.
Administration for Children and Families. Making a Difference in the Lives of Infants and Their Families: The Impacts of Early Head Start. Washington, DC: U.S. Department of Health and Human Services, 2002.
Administration for Children, Youth and Families. Leading the Way: Characteristics and Early Experience of Selected Early Head Start Programs. Volume III: Program Implementation. Washington, DC: U.S. Department of Health and Human Services, 2000.
Administration for Children, Youth and Families. Leading the Way: Characteristics and Early Experience of Selected Early Head Start Programs. Volume II: Program Profiles. Washington, DC: U.S. Department of Health and Human Services, 2000.
Administration for Children, Youth and Families. Leading the Way: Characteristics and Early Experience of Selected Early Head Start Programs. Volume I: Cross-Site Perspectives. Washington, DC: U.S. Department of Health and Human Services, 2000.
Bellotti, Jeanne, Cheri Vogel, Andrew Burwick, Charles Nagatoshi, Melissa Ford, Barbara Schiff, and Welmoet Van Kammen. “Dedicated to Dads: Lessons Learned from the Early Head Start Fatherhood Demonstration.” Princeton, NJ: Mathematica Policy Research, Inc., 2003.
Brophy-Herb, H., R. Schiffman, L. McKelvey, M. Cunningham-DeLuca, and M. Hawver. “Quality Improvement: Lessons Learned from an Infant Mental Health-Based Early Head Start Program.” Infants & Young Children, vol. 14, no. 2, 2001, pp. 77-85.
Dennison, B. A., L. S. Edmunds, H. H. Stratton, and R. M. Pruzek. “Rapid Infant Weight Gain Predicts Childhood Overweight.” Obesity (Silver Spring), vol. 14, no. 3, 2006, pp. 491-99.
Freedman, D. S., C. L. Shear, G. L. Burke, S. R. Srinivasan, L. S. Webber, D. W. Harsha, and G. S. Berenson. “Persistence of Juvenile-Onset Obesity over Eight Years: The Bogalusa Heart Study.” American Journal of Public Health, vol. 77, no. 5, 1987, pp. 588-92.
Gilliam, W. S., C. H. Ripple, E. F. Zigler, and V. Leiter. “Evaluating Child and Family Demonstration Initiatives: Lessons from the Comprehensive Child Development Program.” Early Childhood Research Quarterly, vol. 12, no. 1, 2000, pp. 41-59.
Gunnell, D. J., S. J. Frankel, K. Nanchahal, T. J. Peters, and G. D. Smith. “Childhood Obesity and Adult Cardiovascular Mortality: A 57-Y Follow-up Study Based on the Boyd Orr Cohort.” American Journal of Clinical Nutrition, vol. 67, 1998, pp. 1111-18.
Harrington, M. Evaluation of Free to Grow, Phase II: Detailed Profile of the Free to Grow Project in California. Final Report. Princeton, NJ: Mathematica Policy Research, Inc., 2001.
Latner, J. D., and A. J. Stunkard. “Getting Worse: The Stigmatization of Obese Children.” Obesity Research, vol. 11, no. 3, 2003, pp. 452-56.
Lipsey, M. W., and D. S. Cordray. “Evaluation of Methods for Social Intervention.” Annual Review of Psychology, vol. 51, 2000, pp. 345-75.
McCollum, J. A., T. Yates, B. Laumann, and W. Hsieh. “Replicating a Parent-Child Group Model: Case Analysis of High- and Low-Fidelity Implementers.” Infants & Young Children, vol. 20, no. 1, 2007, pp. 38-54.
New York City Department of Health and Mental Hygiene. “Obesity in Early Childhood.” NYC Vital Signs, pp. 1-2. Available at www.nyc.gov/html/doh/downloads/pdf/survey/survey-2006childobesity.pdf. Accessed May 1, 2006.
Ogden, C. L., K. M. Flegal, M. D. Carroll, and C. L. Johnson. “Prevalence and Trends in Overweight among US Children and Adolescents, 1999-2000.” Journal of the American Medical Association, vol. 288, no. 14, 2002, pp. 1728-32.
Olshansky, S. J., D. J. Passaro, R. C. Hershow, J. Layden, B. A. Carnes, J. Brody, L. Hayflick, R. N. Butler, D. B. Allison, and D. S. Ludwig. “A Potential Decline in Life Expectancy in the United States in the 21st Century.” New England Journal of Medicine, vol. 352, no. 11, 2005, pp. 1138-45.
Paulsell, Diane, Debra Mekos, Patricia Del Grosso, Cassandra Rowan, and Patti Banghart. “Strategies for Supporting Quality in Kith and Kin Child Care: Findings from the Early Head Start Enhanced Home Visiting Pilot Evaluation.” Draft report submitted to the U.S. Department of Health and Human Services, Office of Head Start. Princeton, NJ: Mathematica Policy Research, Inc., July 2006.
Paulsell, Diane, Heather Zaveri, Beth Zimmerman, Laura Hawkinson, Anne Hopewell, and Patricia Del Grosso. “Study Design of the Head Start Oral Health Initiative.” Report submitted to the U.S. Department of Health and Human Services, Office of Planning, Research, and Evaluation. Princeton, NJ: Mathematica Policy Research, Inc., May 2006.
Paulsell, Diane, Linda Rosenberg, Renee Nogales, Charles Nagatoshi, Susan Sprachman, Louisa Tarullo, and John Love. “Meeting the Challenge: How Head Start Programs Implemented the National Reporting System.” Report submitted to the U.S. Department of Health and Human Services, Head Start Bureau. Princeton, NJ: Mathematica Policy Research, Inc., December 2004.
Region III Administration for Children and Families, with Caliber. “I Am Moving, I Am Learning: A Proactive Approach for Addressing Childhood Obesity in Head Start Children, Summary Report and Opportunities for Replication.” Philadelphia, PA: Region III ACF, 2005.
Rosenberg, Linda, Kimberly Boller, Shefali Pai-Samant, Krisztina Marton, Susan Sprachman, and Peter Liu. “Head Start Training and Technical Assistance Quality Assurance Study: A Design Report.” Princeton, NJ: Mathematica Policy Research, Inc., April 2006.
Schwimmer, J. B., T. M. Burwinkle, and J. W. Varni. “Health-Related Quality of Life of Severely Obese Children and Adolescents.” Journal of the American Medical Association, vol. 289, no. 14, 2003, pp. 1813-19.
Scientific Software Development. Atlas.ti: Visual Qualitative Data Analysis, Management, Model Building in Education, Research and Business. Berlin, Germany: Scientific Software Development, 1997.
Sherry, B., Z. Mei, K. S. Scanlon, A. H. Mokdad, and L. M. Grummer-Strawn. “Trends in State-Specific Prevalence of Overweight and Underweight in 2- through 4-Year-Old Children from Low-Income Families from 1989 through 2000.” Archives of Pediatrics & Adolescent Medicine, vol. 158, no. 12, 2004, pp. 1116-24.
Story, M., K. M. Kaphingst, and S. French. “The Role of Child Care Settings in Obesity Prevention.” The Future of Children, vol. 16, no. 1, 2006, pp. 143-68.
Strauss, R. S., and H. A. Pollack. “Social Marginalization of Overweight Children.” Archives of Pediatrics & Adolescent Medicine, vol. 157, no. 8, 2003, pp. 746-52.
Summerbell, C. D., E. Waters, L. D. Edmunds, S. Kelly, T. Brown, and K. J. Campbell. “Interventions for Preventing Obesity in Children.” Cochrane Database of Systematic Reviews, no. 3, 2005, Art. No. CD001871. Available at www.mrw.interscience.wiley.com/cochrane/clsysrev/articles/CD/frame.html. Accessed February 19, 2007.
Vogel, Cheri, Nikki Aikens, Andrew Burwick, Laura Hawkinson, Angela Richardson, and Linda Mendenko. “Findings from the Survey of Early Head Start Programs: Communities, Programs, and Families.” Final report. Princeton, NJ: Mathematica Policy Research, Inc., November 2006.
Whitaker, R. C., and S. M. Orzol. “Obesity among US Urban Preschool Children: Relationships to Race, Ethnicity, and Socioeconomic Status.” Archives of Pediatrics & Adolescent Medicine, vol. 160, no. 6, 2006, pp. 578-84.
Whitaker, R. C., J. A. Wright, M. S. Pepe, K. D. Seidel, and W. H. Dietz. “Predicting Obesity in Young Adulthood from Childhood and Parental Obesity.” New England Journal of Medicine, vol. 337, no. 13, 1997, pp. 869-73.
1 The 30 programs will be purposefully selected from the subset of the 65 programs that responded to the Stage 1 questionnaire. See Section II.A.1 for details about site selection.
2 The 16 programs will be purposefully selected from the subset of programs that participated in the Stage 2 telephone interviews. See Section II.A.2 for details about site selection.