Alternative Supporting Statement for Information Collections Designed for
Research, Public Health Surveillance, and Program Evaluation Purposes
Financing for ECE Quality and Access for All:
Key Informant Interviews for an Environmental Scan of Federal and State Financing Policies
Formative Data Collections for ACF Research
0970-0356
Supporting Statement
Part B
August 2022
Submitted By:
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officer: Amy Madigan
Part B
B1. Objectives
Study Objectives
The Office of Planning, Research, and Evaluation (OPRE) at the Administration for Children and Families (ACF) under the U.S. Department of Health and Human Services (HHS) proposes to conduct key informant interviews as part of an environmental scan of state and federal early care and education (ECE) financing policies. The primary objectives of this environmental scan are to understand (1) what federal-, state-, and local-level policies exist on combining Head Start funding with other funding streams (henceforth referred to as “braiding”) to support high-quality ECE programming, and (2) how existing state- and local-level ECE funding policies may or may not support expanding program access, quality, and equity. OPRE aims to better understand the connections between written policy and on-the-ground implementation of braided funding by studying (1) the organizational strategies and approaches to braiding at scale—including understanding the relationships within and among organizations and systems—and (2) the experiences of key staff at multiple levels across states as they implement written policy. To that end, OPRE and its contractors (henceforth referred to as the research team) will collect information through semi-structured, informational interviews with key informants at multiple levels of the ECE system. The findings from these interviews will be used in combination with findings from a policy scan to inform future ACF data collections and study design, including a nationwide descriptive survey and a multi-case study of promising and informative braiding approaches.1
Generalizability of Results
This study is intended to present an internally valid description of the implementation of primarily state-level policies at the Head Start program and/or site level in selected states; it is not intended to support statistical generalization to other sites or populations. The purposive sampling approach to the selection of key informants precludes national generalizability. Based on our recruitment and sampling plan, this collection will focus on roles/positions within states that exhibit promising policy approaches to braided funding. Key informants may fill in gaps regarding additional attributes of braided funding approaches not captured or documented in official policy documents. They may also identify innovative or common approaches. The goal is to cultivate a deeper understanding of how braided funding approaches are implemented at the program and site levels to inform future descriptive research. The information collected will be used to further contextualize the policy scan and provide overall guidance for the development of a nationwide descriptive survey, including sampling methodology and item development.
Appropriateness of Study Design and Methods for Planned Uses
In-depth interviews with key informants are the most appropriate methodology to answer our research questions (see Supporting Statement A, part 2)—and subsequently inform our policy scan and future descriptive study design—because they allow informants to share key implementation perspectives and fill gaps in the research team’s knowledge. The policy scan portion of the proposed environmental scan, which is based on publicly available information, is limited in its ability to capture implementation perspectives. The holistic descriptions provided by key informants at multiple levels can guide further, targeted questioning at a national level under a future, nationwide descriptive study. No causal links between written policy and implementation activities can or will be measured as a part of this proposed collection. The key design limitations noted here will be included in all written products associated with this study. As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.
B2. Methods and Design
Target Population
The research team will use a scan of state-level braiding policies (see the respondent recruitment description below) and recommendations from federal project officers (FPOs) to inform the selection of key informants. All respondents will be selected from within the policy scan sample states and from FPO recommendations (see “Appendix 2—Environmental Policy Scan Sampling Memo”). The sampling frame will comprise those most knowledgeable within their respective state Head Start/ECE systems who can fill knowledge gaps identified by the policy scan portion of the environmental scan. Specifically, the research team will sample from among Head Start program directors, Head Start finance managers, Early Head Start Child Care Partnership directors, state-level policy implementation administrators, state-level CCDF- or PDG-focused staff, Head Start collaboration directors, regional Head Start staff, and T/TA providers from across the country, as informed by the policy scan. In addition to states identified in the policy scan, other state or program leaders may be selected based on recommendations from project and FPO experts. For example, while the selection of states for inclusion in the ongoing policy scan was based on a range of factors to ensure maximum variability in our sample, the team may select specific states that are implementing innovative or unique practices based on FPO knowledge. See “Instrument 1—Topic Guide for Semi-Structured Interviews” for an overview of all potential topics of discussion by protocol and respondent type.
Based on the findings of that policy scan, the research team will rely on non-probability, purposive sampling of key informants across states that use a variety of approaches to braided funding, including those that exhibit intriguing, innovative, or potentially informative policy approaches to braided funding. Interviews will provide insight into local policy implementation that is unlikely to be publicly documented. Because respondents will be purposively selected, they will not be representative of the broader population of people in these roles.
Respondent Recruitment
See “Appendix 3—Recruitment Outreach Materials” for all recruitment materials.
Respondent Identification. The research team will take the results of a policy scan of 20 sampled states and identify knowledge gaps where key informants may provide additional detail regarding the linkages between policy and practice. In consultation with FPOs and using the results of the policy scan, the team may identify additional states engaged in unique or innovative practices related to braided funding. The team will then use publicly available information about the roles within key states to construct an initial roster of 16 individuals who may be able to answer key questions arising from the policy scan. The initial roster will be further refined or revised based on findings from the first interviews and recommendations from respondents about who may best address the questions. Regardless of any changes, the team will conduct no more than 16 interviews. Ultimately, the roster will include individuals in key positions across multiple levels of policy and implementation, including program-, state-, and regional/federal-level staff across a variety of states, to understand how policy implementation and practice may vary across contexts. Including a mix of individuals will support information gathering on both policy generation and implementation. It will also allow for greater understanding of the interactions across systems of policy and implementation.
Respondent Recruitment. The research team will follow the recruitment scripts laid out in Appendix 3. The outreach procedure includes an initial email invitation, with follow-up language as needed. Following each interview, respondents will receive a thank-you email accompanied by their honorarium information.
B3. Design of Data Collection Instruments
Development of Data Collection Instruments
As discussed in Supporting Statement A (see Exhibit A2.1: Data Collection Activities), the study instruments include interview protocols for three respondent types, within which all potential respondent roles fit.
The research team created a comprehensive draft matrix of all possible constructs/topics, mapping each to the roles most likely to hold the knowledge sought. See “Instrument 1—Topic Guide for Semi-Structured Interviews” for more details. At the conclusion of the policy scan, the team will refine this matrix based on findings and construct three streamlined interview protocols for use with respondents. No constructs will change, but the questions in each protocol will be prioritized and pared down to conform to the proposed burden in Exhibit A12.1. Additionally, question wording may change to ensure the team solicits and receives the desired information for each construct.
All constructs and related protocol items are project-generated and uniquely tailored to the respondent type. The research team will generate protocols by “level” (program, state, and regional/federal) that include the questions most relevant to policy understanding and implementation for staff at each level. Within each level, multiple staff types/roles may be recruited, but each will receive the same protocol. For example, at the program level the research team may recruit either Program Directors or Fiscal Managers, depending on which respondent is most knowledgeable about braided funding implementation within that program. The team will also oversee the training of interviewers to ensure standardized interviewing practice (see “Data Collection Quality and Consistency” below).
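To make the construct-to-role mapping concrete, the sketch below shows one way such a matrix could be represented and used to assemble a per-level protocol. This is a minimal illustration in Python under stated assumptions; the construct and level names are hypothetical placeholders, not the study’s actual constructs.

```python
# Hypothetical construct/topic matrix: each construct is mapped to the
# protocol "levels" most likely to hold that knowledge. Constructs stay
# fixed; the questions within each protocol are what get pared down.
CONSTRUCT_MATRIX = {
    "braiding_policy_awareness": ["program", "state", "regional/federal"],
    "fiscal_tracking_practices": ["program"],
    "state_implementation_supports": ["state"],
    "cross_system_coordination": ["state", "regional/federal"],
}

def protocol_for(level: str) -> list[str]:
    """Select the constructs relevant to one protocol 'level'."""
    return [c for c, levels in CONSTRUCT_MATRIX.items() if level in levels]

print(protocol_for("program"))
# -> ['braiding_policy_awareness', 'fiscal_tracking_practices']
```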
B4. Collection of Data and Quality Control
Data Collection Quality and Consistency. Preliminary recruitment emails will be sent by OPRE contractor NORC at the University of Chicago. The same email template will be used to ensure consistency in the information provided and collected.
The recruitment and interviewing process will be overseen by senior-level research team staff and conducted by junior-level research team staff, who will be trained in proper data collection techniques using the final respondent protocols. Senior staff will conduct a one-hour training for all junior interviewing staff to address general and protocol-specific concerns and to ensure consistent, efficient, and culturally responsive data collection. Training topics will include:
Study purpose, research questions, and conceptual framework.
Primary data collection measures and instruments (i.e., program-, state-, and regional/federal-level protocols).
Respondent privacy and informed consent procedures.
Documentation and data handling/security.
Note cleaning, coding, and analysis procedures.
Semi-Structured Interviews. Interviews will be conducted virtually over Zoom and audio recorded with respondent permission; if permission is not granted, the interviewer will take notes. All respondents will be given a call-in option to aid accessibility. All audio recordings will be transcribed, and all personally identifying information will be removed from the transcriptions. Interview transcripts (or notes, if permission to record was not granted) will be coded and subjected to inter-coder reliability checks.
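As a minimal illustration of the de-identification step described above (the study does not prescribe a specific tool), the Python sketch below shows a pattern-based redaction pass. The patterns and the redact helper are hypothetical; a real pass would also rely on human review for names, job titles, and program or site references.

```python
import re

# Hypothetical patterns for common direct identifiers. Names and other
# context-dependent identifiers would be flagged by a human reviewer.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(transcript: str) -> str:
    """Replace direct identifiers in a transcript with placeholder tags."""
    for tag, pattern in PATTERNS.items():
        transcript = pattern.sub(tag, transcript)
    return transcript

print(redact("Reach the director at jane.doe@example.org or 555-123-4567."))
# -> "Reach the director at [EMAIL] or [PHONE]."
```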
B5. Response Rates and Potential Nonresponse Bias
Response Rates
The interviews are not designed to produce statistically generalizable findings, and participation is wholly at the respondent’s discretion. Response rates will not be calculated or reported.
Nonresponse
As participants will not be randomly sampled and findings are not intended to be representative, non-response bias will not be calculated. However, the research team will record refusal rates and, where applicable, refusal demographics, and respondent demographics will be documented and reported in written materials associated with the data collection.
B6. Production of Estimates and Projections
The data will not be used to generate population estimates, either for internal use or dissemination.
B7. Data Handling and Analysis
Data Handling
Qualitative data cleaning and coding procedures. Senior researchers from the research team will oversee all data collection and analysis procedures, with input and support from the broader team as needed. The senior researchers leading this task will be responsible for training all staff involved in data collection, as well as for training on qualitative coding and analysis procedures using Dedoose software.2 Following each interview, junior researchers will clean and prepare the interview transcripts for coding. All transcripts will be scrubbed of personally identifiable information before uploading to Dedoose. All researchers involved in the interviews will meet to hold targeted, consensus-building discussions to answer specific questions and discuss convergent or conflicting coding and themes. The team will also review and potentially add inductive codes that have emerged. The senior researchers overseeing data analysis will conduct a random spot-check of coded transcripts to check agreement with coding decisions and gauge inter-rater reliability.
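The document does not specify how inter-rater agreement will be quantified during these spot-checks; as one common option, the Python sketch below computes Cohen’s kappa on a hypothetical sample of paired coding decisions. It is purely illustrative, and the code labels are invented.

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa: chance-corrected agreement between two coders
    who each assigned one code to the same set of excerpts."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if both coders assigned codes independently
    # at their observed marginal rates.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Hypothetical spot-check: codes two coders applied to 8 excerpts.
a = ["barrier", "support", "support", "barrier",
     "context", "support", "barrier", "context"]
b = ["barrier", "support", "barrier", "barrier",
     "context", "support", "support", "context"]
print(round(cohens_kappa(a, b), 2))  # -> 0.62
```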
Data Analysis
Developing the qualitative coding scheme. As a precursor to the analysis, the research team will develop a codebook of a priori codes based on the research questions and constructs from the topic guide (Instrument 1). Additional inductive codes may be added during the analysis process as the team compares and contrasts information across sites and discusses its understanding of the data.
Conducting within-site and cross-site analyses on key constructs. Using the query tools in Dedoose, the research team will retrieve coded data relevant to the research questions and conduct a tiered analysis to identify key themes within and across respondents. Respondent-level analysis will focus on respondents’ understanding of braided funding approaches, as well as structural supports or barriers to implementing those approaches. The research team will use cross-respondent analyses to observe how approaches differ based on state contexts, as well as how understandings of supports and barriers differ based on the “level” of staff role. The research team will conduct analyses by respondent “level” on similar constructs to obtain a well-rounded understanding of multiple aspects of the braided funding process. These findings will be integrated with findings from the policy scan for internal use by the research team to develop a project-related study design.
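As a simplified illustration of the tiered, cross-respondent comparison described above (in practice, Dedoose’s query tools would be used), the Python sketch below tallies how often each code appears by respondent level. The excerpt records and code names are hypothetical.

```python
from collections import defaultdict

# Hypothetical coded excerpts exported from the analysis software:
# (respondent level, state, code applied to the excerpt).
excerpts = [
    ("program", "GA", "braiding_barrier"),
    ("program", "GA", "fiscal_tracking"),
    ("state", "GA", "braiding_barrier"),
    ("regional/federal", "WA", "policy_support"),
    ("program", "WA", "braiding_barrier"),
]

# Cross-respondent view: how often each code appears at each "level".
by_level: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for level, state, code in excerpts:
    by_level[level][code] += 1

for level, counts in by_level.items():
    print(level, dict(counts))
```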
Data Use
The research team will initially compile findings from this environmental scan (both the policy scan and key informant interviews) for internal ACF use. Findings may appear in publicly available briefs, reports, webinars, or conferences at a later date. Purposes may include sharing information, such as broad themes or key takeaways, and/or providing rationale for the design of future research activities. If so, study design limitations will be clearly noted in all publicly available materials. This includes the fact that these results are not intended to be representative of or generalizable to any given subpopulation, but rather to provide descriptive information about what approaches are being implemented in various locales and what their key challenges are.
B8. Contact Persons
Name | Affiliation | Email Address
Amy Madigan, PhD | Office of Planning, Research, and Evaluation |
Jackie Gross, PhD | Office of Planning, Research, and Evaluation |
Paula Daneri, PhD | Office of Planning, Research, and Evaluation |
Stacy Ehrlich, PhD | NORC at the University of Chicago |
Sarah Kabourek, PhD | NORC at the University of Chicago |
Margery Wallen, PhD | Independent Consultant |
Cristina Carrazza, PhD | NORC at the University of Chicago |
Gretchen Streett, MA | NORC at the University of Chicago |
Mitch Barrows, MA | NORC at the University of Chicago |
Attachments
Appendices
Appendix 1—Consent Language for Data Collection
Appendix 2—Environmental Policy Scan Sampling Memo
Appendix 3—Recruitment Outreach Materials
Instruments
Instrument 1—Topic Guide for Semi-Structured Interviews
1 A full information collection request will be submitted to the Office of Management and Budget for these future activities.
2 Dedoose (version 8.0) is a web application for managing, analyzing, and presenting qualitative and mixed method research data (2019).