
Alternative Supporting Statement for Information Collections Designed for

Research, Public Health Surveillance, and Program Evaluation Purposes







Head Start Connects: A Study of Family Support Services



OMB Information Collection Request

0970 – 0538





Supporting Statement

Part B



JULY 2022








Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers: Sarah Blankenship, Amanda Clincy Coleman, and Paula Daneri


Part B


B1. Objectives

Study Objectives

The objective of this information collection (IC) is to build knowledge about how Head Start programs (Head Start or Early Head Start grantees, delegate agencies, and staff) coordinate family support services for parents/guardians, the characteristics of Head Start programs and staff involved in family support services coordination, and ideas for innovating and improving how Head Start programs coordinate family support services. This is the final phase of the Head Start (HS) Connects Study.1 This research project is sponsored by the Office of Planning, Research, and Evaluation (OPRE) through a contract with MDRC and its subcontractors, MEF Associates and NORC at the University of Chicago.



Generalizability of Results

The study is intended to produce nationally representative estimates of how Head Start center-based programs coordinate family support services for parents/guardians, and of the characteristics of these Head Start programs and the family support services staff members in the programs. Survey responses from Head Start program directors, family and community partnerships managers, and family support services staff members will provide program-level information that is nationally representative of delegate agencies and grant recipients that directly operate programs. In addition, survey responses from family support services staff members will provide information that is nationally representative for those staff members. The study design will ensure nationally representative estimates across three strata: agency type, rurality, and program type. Due to the method for sampling family and community partnerships managers, described in section B2, information about characteristics of family and community partnerships managers, such as education or years of experience, will not be nationally representative. Additional information collected through focus group sessions is not intended to promote statistical generalization to other programs or service populations.



Appropriateness of Study Design and Methods for Planned Uses

The study will collect data through web-based surveys of Head Start program directors, Head Start family and community partnerships managers/coordinators (FCPM), and Head Start family support services staff (FSS). To survey a nationally representative sample of center-based Head Start programs, it will draw a stratified random sample of Head Start and Early Head Start center-based programs from delegate agencies and grant recipients that directly operate programs (see “Other Data Sources and Uses of Information” in section A2, SSA), with strata being agency type, rurality, and program type (see section B2 for more information). To survey a nationally representative sample of FSS, the research team will draw a probability proportional to size (PPS) random sample of FSS, where the size is based on the number of FSS in the program (which is equivalent to an equal probability sample of FSS), and invite them to complete a survey (Instrument 3).



Information about Head Start programs collected from Instruments 1, 2, and 3 will produce nationally representative information about Head Start center-based programs operated by delegate agencies and grant recipients that directly operate programs. Information from Instrument 3 will produce nationally representative information about FSS in these programs. Due to the method for sampling FCPM, described in more detail in section B2, information about characteristics of FCPM, such as education or years of experience, will not be nationally representative. Analyses of survey data from Instruments 1, 2, and 3 will use weighting to account for nonresponse, as described in sections B5 and B6. Instrument 4 (daily snapshot surveys) is exploratory, and information collected through it is not intended to yield nationally representative estimates about the day-to-day work and well-being of FSS. The focus groups are not intended to be representative of the range of innovations and ideas, or the views of FSS, in Head Start programs. Key limitations will be noted in written products associated with this study.



As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.





B2. Methods and Design

Target Population

The target population is center-based Head Start or Early Head Start programs, American Indian/Alaska Native (AI/AN) Head Start or Early Head Start programs, and Migrant & Seasonal Head Start or Early Head Start programs, where the programs are run by delegate agencies or grant recipients that directly operate programs. The remainder of the document refers to these programs collectively as “target programs.”

We will collect information from program directors, FCPM, and FSS employed by up to 470 target programs. For analyses about program-level characteristics and practices, the unit of analysis will be the program. For analyses about characteristics and practices/activities of FSS, the unit of analysis will be the FSS. The FSS will also be the unit of analysis for the focus group data collection.



Sampling

The study will field surveys to a stratified random sample of directors of target programs. The sampling frame will be all delegate agencies and grant recipients that directly operate center-based programs for Head Start, Early Head Start, AI/AN Head Start, AI/AN Early Head Start, Migrant & Seasonal Head Start, or Migrant & Seasonal Early Head Start.2 The three strata, described further in a later section, are agency type, rurality, and program type. The study will sample the FCPM whom the target program director identifies as the most experienced or knowledgeable about family support services in the program. The study will sample FSS with probability proportional to size, where size refers to the number of FSS in the program. The study will conduct focus groups with FSS through a combination of random sampling and purposive sampling.



Surveys

Instrument 1: Survey of Program Directors. First, the research team will draw a stratified random sample of target programs (defined above). In line with the sample frame used in the Survey of Head Start Grantees on Training and Technical Assistance (T/TA study, OMB # 0970-0532), the research team will conduct an intensive cross-walking exercise to develop the sample frame. The research team will use three data files to create the sample frame: (1) the Head Start Program Information Report (PIR); (2) Grantee Locations and Contacts (downloaded from the Head Start Enterprise System (HSES)); and (3) a list from the Office of Head Start with the Agency ID for current grantees, the old grant number associated with that account, and the current grant number. These three data sources will allow the research team to construct a sample frame that includes only delegate agencies and grant recipients that provide direct services, their respective service locations, and the most up-to-date information about their operating status and point of contact for directors. The study team will contact program directors of the sampled target programs and invite them, or their designee, to complete the program director survey (Instrument 1).
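
The following sketch illustrates the cross-walking exercise described above. The file names, column names, and join keys are hypothetical stand-ins; the actual PIR and HSES field names may differ.

```python
import pandas as pd

# Three source files (hypothetical names) used to build the sample frame.
pir = pd.read_csv("pir_programs.csv")            # (1) Program Information Report
hses = pd.read_csv("hses_grantee_contacts.csv")  # (2) Grantee Locations and Contacts
xwalk = pd.read_csv("ohs_grant_crosswalk.csv")   # (3) Agency IDs with old and current grant numbers

# Map the grant numbers in the PIR to current grant numbers.
frame = pir.merge(xwalk, left_on="grant_number",
                  right_on="old_grant_number", how="left")

# Attach service locations and current director contact information from HSES.
frame = frame.merge(hses, on="current_grant_number", how="left")

# Keep only delegate agencies and grant recipients that directly operate
# center-based programs (home-based-only programs are excluded from the frame).
frame = frame[frame["provides_direct_services"]
              & (frame["program_option"] != "home-based only")]
```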


Instrument 2: Survey of Family and Community Partnerships Managers/Coordinators (FCPM). The program director survey (Instrument 1) will ask for the names and contact information of the most experienced or knowledgeable FCPM to contact for this study, along with a replacement name in case the primary FCPM is not available. The study team will use the list of contacts provided on Instrument 1 as the sampling frame for fielding Instrument 2 for FCPM. The study team will contact the FCPM identified by every director (or their designee) who responds to Instrument 1 and provides FCPM contact information. The research team proposes this approach for two reasons. First, the research team anticipates that contacting an FCPM whom the director specifies (instead of randomly sampling from a list of FCPM) will improve cooperation and responsiveness of FCPM and result in the collection of more accurate information about coordination of Head Start program activities for the target program. This is particularly important for target programs that are larger and have many centers and/or a larger number of FSS. Random sampling of FCPM in these programs could result, by chance, in the selection of FCPM whose program-wide knowledge is limited. Second, as noted, the study team plans to collect the information for an alternate FCPM who will be contacted if the research team does not receive a response from the primary FCPM after multiple attempts. Asking the program director (or their designee) for two names will eliminate the burden involved in re-contacting them for a replacement name. Because programs are randomly selected, program-level information provided by FCPM about those programs will be nationally representative of target programs. Specific FCPM characteristics obtained in Instrument 2, such as education or years of experience, will not be nationally representative of all FCPM but are being collected to provide descriptive information about this segment of the workforce.


Instrument 3: Survey of Head Start Family Support Services Staff Members (FSS). The program director (or designee) will be asked in Instrument 1 to provide the names and contact information of all staff who have responsibilities for family support services and who work directly with families (i.e., all FSS). Program directors in grant recipients that offer direct services and that have delegate agencies will be asked to provide names only of FSS who provide services to families within the grant recipient’s organization, and to not include FSS of delegate agencies. Delegate agencies will be included in the Instrument 1 sampling frame and, if sampled, will provide contact information for the FSS who work directly with families in their delegate agency. The research team anticipates that target programs will have different staffing structures (such as staff who are teachers but who also serve as the family support services staff member for the families in their classroom) and wants to ensure that the FSS list is broad enough to encompass different kinds of staffing structures. It is important to maintain the focus on family support responsibilities and to ensure that the staff listed by the director are staff whose jobs include core functions of providing family support services. The second part of the definition (working directly with families) is intended to ensure this focus.


To account for program variation in the number of family support services staff members, the research team will sample FSS with probability proportional to size (PPS), where size refers to the number of FSS in the target program. This sampling process is equivalent to an equal probability sample of FSS. The general process involves sampling more FSS in programs that have more FSS and fewer FSS in programs that have fewer FSS. The team anticipates sampling a total of 1,504 FSS, or four FSS on average per program. Specifically, the team will examine the distribution of the number of FSS per target program, using the list of FSS provided by directors (or their designees) in Instrument 1. The team then will allocate the sample of FSS based on the empirical FSS distribution. This process involves dividing programs into approximately three to five groups (ranging from programs with few FSS to those with many FSS), then sampling FSS within each group to achieve the target sample size. To maintain a nationally representative sample, FSS will be weighted to reflect their probabilities of selection from programs. Sampling FSS clustered within programs, a multi-stage sample, is equivalent to a nationally representative sample of FSS.3 Although drawing clustered samples of FSS within programs is less efficient than a simple random sample, and therefore makes estimates slightly less precise (that is, the confidence intervals around point estimates will be wider), it is an appropriate option for two reasons: (1) it is not possible to draw a simple random sample of FSS because there is no comprehensive listing of all FSS that could be used as a sampling frame, and (2) the relative inefficiency of this clustered approach does not affect generalizability, still allowing the production of estimates as described later in this section.
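
As a minimal illustration of this equal-probability design, the sketch below applies a common sampling fraction to each sampled program’s FSS roster, so programs with more FSS contribute more sampled FSS. The program IDs, roster sizes, and target sample size are hypothetical, and the within-program base weight shown omits the program-selection stage for simplicity.

```python
import random

random.seed(2022)  # illustrative seed for reproducibility

# Hypothetical FSS rosters provided by directors in Instrument 1.
fss_lists = {
    "prog_A": [f"A{i}" for i in range(12)],
    "prog_B": [f"B{i}" for i in range(3)],
    "prog_C": [f"C{i}" for i in range(7)],
}

target_n = 8  # stands in for the study's 1,504
total_fss = sum(len(staff) for staff in fss_lists.values())
f = target_n / total_fss  # common fraction => equal selection probability

sample, base_weights = [], {}
for prog, staff in fss_lists.items():
    n_k = min(len(staff), max(1, round(f * len(staff))))
    chosen = random.sample(staff, n_k)
    sample.extend(chosen)
    for fss in chosen:
        # Within-program base weight = inverse of the selection probability.
        base_weights[fss] = len(staff) / n_k
```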

Ensuring Representation of Strata of Interest. This section describes the strata of interest for the study. The research team will draw a stratified random sample of target programs to field Instrument 1 to reflect a nationally representative sample. After fielding closes for each survey instrument (i.e., Instrument 1, Instrument 2, Instrument 3), the research team will assess the sample for appropriate representation in each stratum. Because response rates across categories within each stratum likely will vary for any given instrument, the study team expects to construct nonresponse adjustment weights to ensure that the weighted responses will produce nationally representative estimates. Section B5 discusses these weighting adjustments.


Stratum 1: Agency type: The Office of Head Start categorizes Head Start programs as one of seven agency types: Community Action Agency (CAA), school system, charter school, private/public non-profit (non-CAA), private/public for-profit, government agency (non-CAA), or tribal government or consortium (American Indian/Alaska Native).4 In practice, agency types are usually, but not always, mutually exclusive; the reporting form allows only one choice. Different agency types have varying internal capacity to provide family support services and therefore rely differently on relationships within the community and with community providers. For this reason, family support services and the types of community resources that programs can offer or refer to may be expected to vary based on agency type. The sample will include target programs from each of the seven agency types and, as described in the previous paragraph, the study will produce sample weights such that the distribution of agency types represented in the data is nationally representative. There is large variation in the number of target programs of each type. Community action agencies, non-profits, and school systems have the largest representation within the population.


Stratum 2: Rurality: Target programs can be categorized based on whether the program has centers located in a rural census tract, a non-rural census tract, or a mix of rural and non-rural tracts. The study will sample programs from each of the three categories such that the distribution of target programs by rurality is nationally representative. FSS caseload and community resources may vary by whether programs are in rural settings (and therefore in more dispersed and often less well-resourced areas) or not. Building from the T/TA study definitions, the research team proposes the following steps to categorize target programs based on rurality (a sketch implementing these rules follows the list):

  • Begin with a 6-point Urban-Rural classification (based on the National Center for Health Statistics (NCHS) Urban-Rural Classification Scheme for Counties).

  • From those descriptors, create a 2-point urbanicity indicator (where large central metro, large fringe metro, medium metropolitan, and small metropolitan areas are classified as non-rural, and nonmetro micropolitan and nonmetro noncore areas classified as rural).

  • Using this 2-point definition, identify how many of a target program’s centers are located in rural areas.

    • A target program is defined as “non-rural” if fewer than 25 percent of its centers are in rural-classified areas.

    • A target program is defined as “rural” if 75 percent or more of its centers are in rural-classified areas.

    • A target program is defined as “mixed” if the share of its centers in rural-classified areas is at least 25 percent but less than 75 percent (e.g., a program with one center in a city and one in a rural area, or a large program with 74 percent of its centers in rural areas).
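
A minimal sketch of this classification, assuming each center carries the 6-point NCHS code of its county (codes 5 and 6, nonmetro micropolitan and nonmetro noncore, are rural):

```python
def classify_rurality(center_nchs_codes: list[int]) -> str:
    """Classify a target program from the NCHS codes of its centers."""
    rural_share = sum(code >= 5 for code in center_nchs_codes) / len(center_nchs_codes)
    if rural_share < 0.25:
        return "non-rural"
    if rural_share >= 0.75:
        return "rural"
    return "mixed"

# A large program with 74 percent rural centers falls just below the
# 75 percent threshold and is therefore classified as mixed.
print(classify_rurality([5] * 74 + [1] * 26))  # -> "mixed"
```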


Stratum 3: Program type: For delegate agencies or grant recipients that provide direct services, program types are Head Start, Early Head Start, AI/AN Early Head Start, AI/AN Head Start, Migrant & Seasonal Early Head Start, and Migrant & Seasonal Head Start. Program options will include center-based programs only. The sampling frame will exclude delegate agencies or grant recipients that provide only home-based services because home-based programs are qualitatively different than center-based programs.5 The proposed approach will result in a nationally representative sample of grantee organizations and/or delegate agencies that provide direct services through center-based Head Start, Early Head Start, AI/AN Early Head Start, AI/AN Head Start, Migrant & Seasonal Early Head Start, and Migrant & Seasonal Head Start programs.

Sample Size and Power. The nationally representative survey will have the following sample sizes:6

  • 470 directors (80 percent response rate: 376 completes)

  • 376 family and community partnerships coordinators/managers (80 percent response rate: 301 completes)

  • 1,504 family support services staff members (80 percent response rate: 1,203 completes)

The research team determined these sample sizes based on power calculations at the FSS and target program levels. At the FSS level, a study with 1,203 FSS likely is powered to detect statistically significant differences in a particular outcome across four to five groups, assuming approximately equal group sample sizes. That is, that sample size is needed to examine variables with up to four to five levels (for example, whether the program is run by a non-profit, community action agency, school, or other agency type), or to analyze two variables with two levels each at a time (for example, fewer than ten years of experience versus ten or more years, crossed with non-rural versus rural or mixed).


With a sample size of 1,203 FSS, the margin of error (MOE, that is, the half-width of a confidence interval for a survey estimate) for an arbitrary proportion (for example, the prevalence of some key characteristic measured by the survey) would be at most 3.5 percentage points. This assumes a moderate design effect of 1.5, reflecting the relative inefficiency of drawing clustered samples of FSS within target programs as compared with drawing a simple random sample. The study would be able to detect differences of 5.2 percentage points or larger between two groups at the 0.05 alpha level (power = 0.80).
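
The quoted margin of error can be reproduced with a short calculation, shown below under the stated assumptions (a worst-case proportion of 0.5, a design effect of 1.5, and a 95 percent confidence level):

```python
import math

def moe(n: int, p: float = 0.5, deff: float = 1.5, z: float = 1.96) -> float:
    """Half-width of a 95 percent confidence interval for a proportion."""
    n_eff = n / deff  # effective sample size after the clustering penalty
    return z * math.sqrt(p * (1 - p) / n_eff)

print(round(moe(1203) * 100, 1))  # -> 3.5 percentage points
```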


Building from a target final sample size of around 1,203 FSS, it is possible to work backwards and calculate an estimated sample size that can be achieved at the target program level. To reach 1,203 completes, 1,504 FSS would need to be sampled (80 percent response rate). Assuming between 2 and 4 FSS per target program, approximately 470 target programs would need to be sampled, for an estimated final target program sample size of 376 programs (80 percent response rate). A base sample size of at least 376 target programs will allow for descriptive analyses. It may be possible to detect statistically significant program-level group differences between 2 groups (that is, on one variable with two levels).



Focus Groups

Drawing from the FSS sample for the nationally representative survey, specifically the names of FSS that directors (or their designees) provide on Instrument 1, one FSS would be randomly selected from each of up to 60 target programs. The programs from which FSS will be drawn will be selected to represent the variation across the stratifying characteristics of interest described in the sections above (that is, program type, agency type, rurality). In addition, the study team will identify two sub-populations of interest (e.g., American Indian and Alaska Native programs) and select a sample of FSS from those groups. In total, six focus groups of up to 10 FSS would be conducted: four drawing from the entire FSS sample and two with samples drawn from the specific subgroups of interest.



B3. Design of Data Collection Instruments

Development of Data Collection Instruments

Each set of instruments (see Table B3.1) aims to collect unique, but complementary, information about the context, characteristics, and practices of Head Start programs and staff responsible for the provision and coordination of family support services. As such, we sought to minimize item overlap across surveys except in cases where the study team thought it important to be able to triangulate findings across respondents. Surveys draw from existing scales and measures when possible, such as from the Survey of Head Start Grantees on Training and Technical Assistance (OMB # 0970-0532), the Study of Disability Services Coordinators and Inclusion in Head Start (OMB # 0970-0485), and the Head Start Family and Child Experiences Survey (OMB # 0970-0151). To minimize measurement error, multiple-item scales were chosen when available. Many survey items aim to gather information on the practices and processes behind family support service coordination, topics for which there are few relevant existing items or scales. For these topics, the team developed new items, with the aim of minimizing the number of questions to be asked of any one respondent type. The team also aimed to include items similar to those in the 2019 Head Start Family and Child Experiences Survey, Spring 2022 data collection (OMB # 0970-0151), to address the request from OMB to capture more details about the Head Start workforce, such as information on wages and benefits. The research team conducted cognitive interviews to pre-test survey items and inform refinements to survey questions. These pre-testing activities asked the same question of fewer than 10 individuals.



Table B3.1: Project Objectives Addressed by Each Data Collection Instrument

Project Objective

Instruments

Build knowledge about how Head Start programs coordinate family support services for parents/guardians

  • Instrument 1: Survey of Head Start directors

  • Instrument 2: Survey of Head Start family and community partnerships managers

  • Instrument 3: Survey of Head Start family support services staff members

  • Instrument 4: Daily Snapshot Survey of Head Start family support services staff members

Understand the characteristics of Head Start programs and staff involved in family support services coordination

  • Instrument 2: Survey of Head Start family and community partnerships managers

  • Instrument 3: Survey of Head Start family support services staff members

  • Instrument 4: Daily Snapshot Survey of Head Start family support services staff members

Gather ideas for innovating and improving how Head Start programs coordinate family support services

  • Instrument 5: Focus groups of Head Start family support services staff members





B4. Collection of Data and Quality Control

Data for this study are being collected by NORC at the University of Chicago, MDRC, and MEF Associates, depending on the instrument. Data collection will begin upon OMB approval and is expected to take place over a 10-month period in program year 2022-2023.

The initial recruitment protocol related to Instrument 1 (Survey of Head Start Directors), Instrument 2 (Survey of Head Start Family and Community Partnerships Managers), and Instrument 3 (Survey of Head Start Family Support Services Staff Members) will include a suite of templated emails, a study letter mailed via USPS, and customized outreach by field interviewers. NORC will begin data collection for each sample with an invitation email to all sample members explaining the study and encouraging their participation. See Appendix A for draft Recruitment Materials. Approximately a week following the invitation email, NORC will initiate additional contacts using various strategies to prompt non-respondents. These contacts are intended to reduce non-response and will continue for approximately eleven weeks after the initial contact. Follow-up will be weekly, with the days and times of contacts alternated. Additional recruitment materials will include a letter mailed via USPS and a series of system-generated reminder and break-off emails. In all of these communications, respondents will be provided with a URL, PIN, and password to complete the survey online. Roughly a third of the way through each data collection wave, field interviewers will conduct phone prompting: they will call each sample member on their work number, offer to answer any questions they may have about the study or their participation, and encourage them to participate via web. Prompting calls will be followed by personalized emails. In the last quarter of data collection, a system-generated email will be sent to non-respondents alerting them that the data collection will close in a few weeks. Following this email, field interviewers will make a last-chance call during which they will prompt respondents to complete the survey online or offer to complete the survey with them over the phone.


Instrument 4 (Daily Snapshot Survey of Head Start family support services staff members) will be collected by MDRC. Invitations to participate in this data collection activity will be emailed on a rolling basis to FSS who responded to Instrument 3 (Survey of Head Start family support services staff members). Because the Daily Snapshot surveys will be sent out three days a week within a single week, a reminder email will be sent the morning after each email to prompt for a response to the prior day’s survey. If a respondent has not responded to any survey in a week, MDRC will initiate one additional email contact approximately a week after the invitation email to prompt non-respondents. These contacts are intended to reduce non-response. A similar method will be employed for Instrument 5 (Focus groups of family support services staff members) by MEF Associates. See Appendix A for draft Recruitment Materials. These are considered drafts because the team may need to modify recruitment materials in response to requests from or the needs of different agency types (e.g., AI/AN, school districts) or programs.


Mode of data collection

This study will primarily collect information through web surveys (Instruments 1-4). All sample members will be invited and encouraged to participate via web. For Instruments 1-3, non-responders will be offered a telephone option in the closing weeks of each data collection wave. Because Instrument 4 is a daily snapshot and collected three times within a week, a telephone option will not be offered. Focus groups (Instrument 5) will be conducted via a video/phone conferencing platform.


Monitoring data collection activities for quality and consistency

For Instruments 1-3, prior to interacting with sample members and collecting data, all interviewers will have completed NORC’s General Interviewing Training program as well as project-specific interviewer training. As part of the project-specific training, interviewers will learn about the importance of the study and how the data will be used, allowing them to respond accurately to any questions they may receive from respondents and to sincerely encourage participation. Interviewers will also be trained to administer the surveys, affording respondents who are not comfortable or who are unwilling to complete the survey online an opportunity to participate via phone.


For Instrument 4, throughout data collection, response data and paradata will be monitored and any abnormalities will be investigated. Qualtrics has powerful tools to ensure data quality (e.g., skip patterns, branching, recoding of values) and data will be reviewed after testing/before launch and after each round to ensure there are no unforeseen problems. If an issue negatively impacting data quality is identified, remediation efforts including retrieval of partial or whole interviews would be considered.


For Instrument 5, experienced focus group facilitators will be trained prior to data collection beginning. As part of this training, facilitators will learn about the importance of the study and how the data will be used, allowing them to respond accurately to any questions they may receive from focus group participants. Facilitators will also be trained to administer the focus group protocol with groups that include participants from diverse racial and ethnic backgrounds. The training will also prepare facilitators to host focus groups remotely (via phone conference call) and on how to navigate the conversation when participants are not comfortable or are unwilling to participate or respond to a particular question.



B5. Response Rates and Potential Nonresponse Bias

Response Rates

For Instruments 1, 2, and 3, we will report unit response rates in accordance with the standards endorsed by the American Association for Public Opinion Research (AAPOR), particularly standard formula RR3 (a computational sketch follows the definitions below):7

RR3 = I / [(I + P) + (R + NC + O) + e·(UO)]

where (using information collected on the disposition of each case)

I = Complete interview (1.1)

P = Partial interview (1.2)

R = Refusal and break-off (2.10)

NC = Non-contact (2.20)

O = Other (2.30)

UO = Unknown eligibility, non-interview (3.0)

e = Estimated proportion of cases of unknown eligibility that are eligible


As this is a multi-stage design, we will report response rates both by stage and cumulatively. Because units are sampled with unequal probabilities at different stages, we will also examine weighted response rates (using rates weighted by base weights that are the inverse of the selection probabilities).


During data collection, we will monitor participation rates as well as refusal, non-contact, and other rates that comprise the nonresponse rate. Each of these components serves as a process indicator for survey operations.


Item nonresponse rates will be reported as simple percentages (the percentage of respondents with a missing value on an item), breaking out refusals, “don’t know” responses, and invalid responses.


We expect a response rate of 80 percent for each of Instruments 1, 2, and 3. The 2016 Head Start Health Managers Survey (HSHM) (OMB # 0970-0585) and the 2019 Head Start Training and Technical Assistance (T/TA) Survey (OMB # 0970-0532) obtained response rates of more than 80 percent for their Phase 1 director surveys. These studies used the same design we are proposing: first surveying the universe of Head Start program directors to obtain contact information on the target population and then fielding a second survey of managers and staff using the information provided by directors. The proposed Head Start Connects surveys will be fielded in 2022-23, when current staffing shortages in Head Start programs may still be present. These shortages may lead to challenges in obtaining the target response rates. The study team will maintain an actively monitored email box to communicate with program staff and obtain updated contact information as needed.


The daily snapshot survey (Instrument 4) expects a response rate of 50 percent, defined as a response to one or more of the six activity snapshot invitations. This estimate is conservative, based on MDRC’s experience conducting a time use survey of preschool administrators and teachers in fall 2020 and spring 2021. The combined teacher response rate across fall and spring was 52 percent, and the administrator response rate across fall and spring was 72 percent. These snapshot surveys are exploratory, and information collected through them is not intended to yield nationally representative estimates.


The focus groups (Instrument 5) are not designed to produce statistically generalizable findings and participation is wholly at the respondent’s discretion. Response rates will not be calculated or reported.


Nonresponse

In addition to examining various response rates as described previously for Instruments 1, 2, and 3, once data collection and data cleaning are complete, we will conduct a non-response analysis, focusing on program-level variables available from the frame (and whose values are thus known for both respondents and non-respondents) that are known to be associated with key survey measures. If the response rate varies across the levels of these variables, they will be considered as covariates for non-response adjustment factors in the weighting process.


Informed by results of the non-response analysis, we will employ the method of weighting classes to derive non-response factors for adjusting base weights.8 We will take care that cell counts are not too small, imposing minimum counts for numerators and denominators of the adjustment fractions and collapsing across categories as needed. We will then consider weight trimming to reduce the effects of extreme values of the factors (although using sufficiently large cells should obviate the need for this), capping their values and redistributing the excess proportionately if indicated.
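
A minimal sketch of this weighting-class adjustment and optional trimming, with hypothetical column names and illustrative values:

```python
import pandas as pd

df = pd.DataFrame({
    "base_weight":  [10.0, 10.0, 12.0, 2.0, 50.0, 8.0],  # inverse selection probabilities
    "weight_class": ["A", "A", "B", "B", "B", "A"],      # classes formed from frame covariates
    "responded":    [1, 0, 1, 1, 0, 1],
})

# Base-weighted response rate within each weighting class.
rr = df.groupby("weight_class").apply(
    lambda g: (g.base_weight * g.responded).sum() / g.base_weight.sum())

# Respondents' weights are inflated by the inverse of their class's response
# rate; nonrespondents receive weight zero. Class weight totals are preserved.
df["nr_weight"] = df.base_weight * df.responded / df.weight_class.map(rr)

# Optional trimming: cap extreme weights and redistribute the excess
# proportionately among the uncapped respondents.
cap = 3 * df.loc[df.responded == 1, "nr_weight"].median()
excess = (df.nr_weight - cap).clip(lower=0).sum()
df["final_weight"] = df.nr_weight.clip(upper=cap)
uncapped = (df.responded == 1) & (df.nr_weight <= cap)
df.loc[uncapped, "final_weight"] += (
    excess * df.loc[uncapped, "final_weight"] / df.loc[uncapped, "final_weight"].sum())
```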


We will consider an additional calibration or post-stratification stage in the weighting process, using known population totals for variables not available from the frame as benchmarks, if warranted. This calibration would take the place of noncoverage adjustment; however, we could do a noncoverage adjustment instead if the PIR is updated by the end of the data collection period.


We will report item non-response for key indicators and will note any items with higher than usual missingness rates.


For Instrument 4 (daily snapshot), the findings are intended to be exploratory and will not produce nationally representative estimates. To provide some insight into the representativeness of the sample for the snapshot, the study team will compare selected characteristics of Instrument 4 respondents (e.g., defined as ever responding to one of the six snapshot invitations, or responding to all six snapshot invitations) with characteristics of the weighted sample of FSS respondents to Instrument 3.


For Instrument 5 (focus groups), participants will not be randomly sampled and findings are not intended to be representative, so non-response bias will not be calculated. We will keep track of how many family support services staff members refuse to participate in the focus groups.


B6. Production of Estimates and Projections

The use of probability sampling techniques and weighting for sample design, non-response, and noncoverage will allow us to make weighted estimates from Instruments 1, 2, and 3 that generalize to the populations described in Section B2: grantee organizations and/or delegate agencies that provide direct services through center-based programs in Head Start, Early Head Start, AI/AN Early Head Start, AI/AN Head Start, Migrant & Seasonal Early Head Start, and Migrant & Seasonal Head Start, and the corresponding populations of Head Start directors and family support services staff (FSS) within those programs. Because programs are randomly selected for Instrument 1, program-level information provided by FCPM in Instrument 2 about those programs will be nationally representative of programs. Specific FCPM characteristics obtained in Instrument 2, such as education or years of experience, will not be nationally representative of all FCPM.


Extensive information about the population of interest is available at the target program level from the frame file to be created as an adjunct to the PIR, for both sampled and non-sampled programs and for both respondents and nonrespondents, to use in the creation of weights (as described above). The rich covariate information that the frame provides allows us to create robust weighted estimates that are population-representative and approximately unbiased. Estimates of sampling error will be based on the sample design and the analysis weights; we will create variables describing the stratified sample design suitable for the Taylor series variance estimation method that software packages such as SAS and SUDAAN support. These variance estimates (and the related standard errors) will be the basis for constructing confidence intervals about weighted survey estimates and will further allow inference extending to regression models and hypothesis tests.9
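
As a simplified single-stage illustration of Taylor-series (linearization) variance estimation for a weighted mean under a stratified design, the sketch below implements the with-replacement approximation that packages such as SAS and SUDAAN generalize to multi-stage designs. The data values are hypothetical.

```python
import numpy as np

def stratified_mean_se(y, w, stratum):
    """Weighted mean and its linearization standard error (stratified, single stage)."""
    y, w, stratum = map(np.asarray, (y, w, stratum))
    ybar = np.sum(w * y) / np.sum(w)   # weighted (ratio) mean
    u = w * (y - ybar)                 # linearized score variable
    var = 0.0
    for h in np.unique(stratum):
        u_h = u[stratum == h]
        n_h = len(u_h)
        # With-replacement approximation within stratum; no finite population correction.
        var += n_h / (n_h - 1) * np.sum((u_h - u_h.mean()) ** 2)
    return ybar, np.sqrt(var) / np.sum(w)

y = [0, 1, 1, 0, 1, 1, 0, 1]                  # hypothetical binary outcome
w = [10, 10, 12, 12, 8, 8, 9, 9]              # analysis weights
s = ["a", "a", "a", "a", "b", "b", "b", "b"]  # design strata
mean, se = stratified_mean_se(y, w, s)
```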



The information collected is meant to contribute to the body of knowledge on ACF programs. It is not intended to be used as the principal basis for a decision by a federal decision-maker, and is not expected to meet the threshold of influential or highly influential scientific information.



B7. Data Handling and Analysis

Data Handling

All web-based surveys will be programmed and tested prior to fielding to ensure accurate administration and to minimize errors during data processing. Electronic notes will be taken during focus groups and will be stored in a secure, password-protected location. Audio recordings from focus groups will be gathered on secure, password-protected audio recorders or made via ZoomGov. Access to the data will be granted on a need-to-know basis; only the Data Manager and study team members with a need to know will have access.


Data Analysis

The study is designed primarily for quantitative, descriptive data analysis. Using survey data, descriptive analyses will summarize typical values of metrics (mean, median, mode) and variation in those values (minimum, maximum, percentiles, standard deviation) to address the research questions. Associative analysis will be conducted, prioritizing analyses where pre-specified hypotheses based on prior studies suggest a relationship. Bivariate (unconditional) associations may be examined between context (such as agency type or rurality) and inputs or family support services activities.
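
A brief sketch of weighted descriptive statistics of this kind, using hypothetical data and the analysis weights (the weighted-quantile helper is an illustrative implementation, not a specific library routine):

```python
import numpy as np

caseload = np.array([40, 55, 35, 60, 48])  # hypothetical FSS caseload sizes
w = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # analysis weights

mean = np.average(caseload, weights=w)     # weighted mean

def weighted_quantile(x, q, weights):
    """Interpolated quantile(s) of x under the given weights."""
    order = np.argsort(x)
    cdf = np.cumsum(weights[order]) / weights.sum()
    return np.interp(q, cdf, np.asarray(x)[order])

median = weighted_quantile(caseload, 0.5, w)
p10, p90 = weighted_quantile(caseload, [0.1, 0.9], w)
```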


Descriptive analyses using data collected from Instrument 4 will include typical values of metrics, variation in those values, and longitudinal trajectories of activities or psychosocial outcomes. These descriptive analyses may be examined by levels of the program-level stratification categories if pre-specified hypotheses suggest variation.


Additionally, the team will draw characteristics from PIR data, and if warranted calculate descriptive statistics with those data, to describe program characteristics (e.g., number of Head Start funded slots, name of the management information system used) and structural characteristics of the participating target programs.


The focus group notes will be transcribed and uploaded into qualitative coding software. Thematic analysis will be conducted to draw out key themes about current innovations and possible new innovations.


The study will be registered prior to the initiation of data collection with an appropriate registry such as Open Science Framework.


Once analysis for the study is completed and the results are published, we plan to archive the analysis variables and measures with documentation to support accurate estimates and projections for secondary analysis. This documentation will include code books, user manuals, file structure, variables, sample weights, and methods. The documentation will also include information about the types of data collected, data handling procedures and storage methods, procedures for data preparation, and the level of restriction for different data.



Data Use

Reports and/or briefs will be published to summarize the findings from the Head Start Connects study. Publications will include details on the data analyses conducted, interpretation of the findings, and study limitations. Findings aim to answer key open questions in the field about how Head Start programs coordinate and individualize family support services in line with families’ needs.



B8. Contact Persons

Sarah Blankenship, OPRE, Sarah.Blankenship@acf.hhs.gov

Paula Daneri, OPRE, Paula.Daneri@acf.hhs.gov

Carolyn Hill, MDRC, Carolyn.hill@mdrc.org

Michelle Maier, MDRC, michelle.maier@mdrc.org

Marissa Strassberger, MDRC, marissa.strassberger@mdrc.org

Patrizia Mancini, MDRC, patrizia.mancini@mdrc.org

Carol Hafford, NORC, hafford-carol@norc.org

Marc Hernandez, NORC, hernandez-marc@norc.org

Christopher Johnson, NORC, johnson-christopher@norc.org

Shannon Nelson, NORC, nelson-shannon@norc.org

Kate Stepleton, MEF Associates, kate.stepleton@mefassociates.com

Carly Morrison, MEF Associates, carly.morrison@mefassociates.com


Attachments

Appendix A: Draft Recruitment materials

Appendix B: Draft Informed Consent Forms

Instrument 1: Survey of Head Start Directors

Instrument 2: Survey of Head Start Family and Community Partnerships Managers

Instrument 3: Survey of Head Start Family Support Services Staff Members

Instrument 4: Daily Snapshot Survey of Head Start Family Support Services Staff Members

Instrument 5: Focus Groups of Head Start Family Support Services Staff Members

1 The Head Start (HS) Connects Study: Individualizing and Connecting Families to Family Support Services information collection (OMB # 0970-0538) was approved in January 2020 and data collection for that phase is complete.

2 Delegates or grant recipients that operate home-based-only programs will be excluded from the population of interest and the sampling frame.

3 Valliant R, Dever J, and Kreuter F, Practical Tools for Designing and Weighting Survey Samples, 2013, Springer, New York.

5 Early Head Start-Home Based Program Option was one of four national models included in the Mother and Infant Home Visiting Program Evaluation (MIHOPE): https://www.acf.hhs.gov/sites/default/files/documents/opre/mihope_report_to_congress_final.pdf.

6 The numbers of FCPM and FSS listed here are lower than what is listed in Table A12.1 to allow for higher-than-expected response rates and to ensure that burden is not underestimated.

7 The American Association for Public Opinion Research. 2015. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 8th edition. AAPOR. Accessed at https://www.aapor.org/Standards-Ethics/Standard-Definitions-(1).aspx.

8 Kalton G and Flores-Cervantes I, Weighting Methods, 2003, Journal of Official Statistics, Vol. 19, No. 2, pp 81-97.

9 Valliant R, Dever J, and Kreuter F, Practical Tools for Designing and Weighting Survey Samples, 2013, Springer, New York.


