
Supporting Statement Part B

Healthy Start Evaluation and Capacity Building Support

OMB Control No. xxxx-xxxx

B. Collection of Information Employing Statistical Methods

In this Supporting Statement B, we describe the statistical methods we will use for the evaluation of the Healthy Start Program and how we selected them. As noted in Supporting Statement A, the goal of this evaluation is to use data to conduct ongoing performance monitoring; obtain credible evidence of program effects and outcomes; meet needs for accountability; and identify best and promising practices to support sustainability, replication, and dissemination of the program.

1. Respondent Universe and Sampling Methods

The respondent universe and sampling methods are described below by data collection activity and summarized in Table B-1. Response rates for the data collection activities, as a whole, are expected to meet or exceed 40% (see Section B.3 for additional information). Data collection instruments are included in Attachments B1–B5.

Healthy Start Monitoring and Evaluation Data System

Healthy Start Monitoring and Evaluation Data System (HSMED) data (OMB No. 0915-0338) includes information about participant demographics; health care access and utilization; personal well-being; health behaviors; pregnancy and childbirth history; mother and child health history; home life; parenting practices; and pregnancy outcomes, such as low birthweight, preterm birth, and infant mortality, for all participants across all 101 grantees. We also collect information on each participant's enrollment date and time in the program. HSMED data will be used to examine associations between exposure to the HS program and pregnancy outcomes and to analyze Black/White disparities in outcomes. OMB approval for HSMED data is not being sought through this package.


Healthy Start Program Survey

All Healthy Start (HS) programs will be asked to complete the Healthy Start Program Survey to ensure that consistent information is collected about implementation across the sites and to enable analysis of variation in implementation to contribute to implementation and utilization studies. One hundred and one Healthy Start programs are funded for the grant period of April 1, 2019 through March 31, 2024. Program directors are expected to take the lead in responding to the survey but may delegate sections of the survey to other program staff.

Healthy Start Network Survey

The Healthy Start Network Survey will be administered in a subset of 15 Healthy Start programs that will constitute the case study sites for the evaluation. These sites are intended to elicit common findings from HS programs that have performed well in a diversity of settings, have higher levels of participation, and may have more lessons learned regarding implementation successes and challenges than less experienced programs. Every Healthy Start program is required to develop a Community Action Network (CAN), which includes local organizations in the community with an interest in improving maternal and child health. The 15 case study programs will be selected based on criteria that include a well-established CAN, a successful record of reaching Healthy Start benchmarks, and a sufficient number of CAN members and HS participants as reported in the data submitted separately to HRSA's data systems1 in the calendar year preceding selection of the case study sites. Criteria will also promote a diversity of settings across organizations, including a mix of geography (e.g., urban, rural), facility type (e.g., health departments, non-profits), and participant population (e.g., African American, Hispanic, tribal), as well as other possible factors. The Healthy Start Network Survey will be fielded to approximately 40 members in each CAN for a total of 600 eligible survey respondents across the 15 case study sites.

Healthy Start Participant Survey

The Healthy Start Participant Survey will be administered in the subset of 15 Healthy Start programs that constitute the case study sites, as described for the Healthy Start Network Survey above. The survey will be fielded to approximately 50 currently enrolled and active adult participants (women and male partners/fathers of children) who receive Healthy Start services in each program, for a total of 750 eligible survey respondents across the 15 case study sites.



Healthy Start Stakeholder Interview Guide

The Healthy Start Stakeholder Interview Guide will also be administered in the subset of Healthy Start programs that will constitute the 15 case study sites. During site visits to each of the 15 case study communities, up to 10 key informant interviews with stakeholders will be conducted for a total of 150 interviews. The number of key informant interviews that can be scheduled within the allotted time will depend on coordination and the amount of travel time required between interviews.

The program director at each Healthy Start site will be asked to identify staff members who have regular interactions with Healthy Start participants. The stakeholder interviews will be conducted with Healthy Start administrative and service staff (e.g., program director, CAN coordinator, case managers for women and for fathers/male partners, data/evaluation team members, fatherhood coordinator, and other identified Healthy Start staff).

Table B.1. Potential Respondent Universe and Sample

Form Name

Number of Entities in the Universe

Healthy Start Program Survey

101 individuals (1 from each of the 101 Healthy Start-funded projects) who are in the Program Director role are eligible to be surveyed. Based on similar surveys in previous evaluations, we expect a response rate of 95%.

Healthy Start Network Survey

Convenience sample of up to 600 active members of the Community Action Networks (CANs) in the 15 case study sites are eligible to be surveyed. Healthy Start programs across the country vary in the members who constitute the CAN, but CANs typically have a diverse membership and represent different sectors of the community, including 25% of members who are HS participants or people with lived experience similar to that of HS participants. We estimate that including members who have actively participated in the CAN in the past year by attending at least two meetings will result in approximately 40 eligible members at each of the 15 case study sites. Based on experience with other network surveys, we expect a response rate between 50% and 70%.

Healthy Start Participant Survey

Convenience sample of up to 750 participants who currently receive services in the 15 case study sites will be eligible for this survey. Each Healthy Start program is expected to serve 300 pregnant women; 300 infants/children up to 18 months, preconception women, and interconception women (combined); and 100 fathers/male partners affiliated with Healthy Start women/infants/children per calendar year. We have restricted eligibility to include only adult participants who are currently enrolled in the program. This leads us to estimate approximately 50 eligible participants in each of the 15 case study sites. Based on our proposed multi-method approach of assisting participants via email or telephone if they are unable to use the web survey, we expect a response rate of 40-60%.

Healthy Start Stakeholder Interview Guide

Convenience sample of up to 150 key informants (5-10 administrative and service staff in each of the 15 sites) will be eligible for interviews. Based on experience with similar activities and the typically high motivation of Healthy Start staff, notwithstanding their heavy workloads, we expect a response rate of 70-80%.



In addition to these four instruments, the evaluation will also use secondary data for which clearance is not requested. These include data on HS grantees' performance measures reported to HRSA's Discretionary Grants Information System (DGIS); HS participant data that grantees report to the Healthy Start Monitoring and Evaluation Data System (HSMED); additional participant data from up to nine HS grantees that collect, but do not report to HSMED, additional data on the types and quantities of HS services received by HS participants; and vital records data obtained from one state. All the data obtained from these secondary sources will be statistically analyzed to answer some of the evaluation questions on the extent to which HS grantees meet program benchmarks (data from DGIS) and the extent to which HS improves health outcomes for women and children (data from HSMED, additional participant data from up to nine grantees, and vital records data from one state). The analysis of these data is described in Supporting Statement A.

Rationale for the Evaluation Design and Limitations of the Evaluation

The overall evaluation design has been conceptualized as a comprehensive assessment of HS activities across grantee programs and attempts to capture individual-, organizational-, community-, and larger social-level factors to help explain program implementation processes and their association with service utilization, participant behaviors (e.g., parenting practices), and health outcomes of participants. The evaluation also will examine best practices shown by some grantees that may be applicable to other HS communities. We recognize that evaluations of community-based programs such as HS have challenges because we cannot attribute program outcomes to the contribution of HS alone. HS programs do not operate in isolation in their respective communities. HS grantees may participate in multi-sectoral activities, receive funding from other sources, and do similar work outside HS that cannot be isolated from the work they do for HS. Furthermore, HS programs adapt to the needs of their community and are not uniform in their approaches, interventions, or definitions across the country. HS participants, too, may receive similar or complementary inputs from other programs in the community.

The current HS outcome evaluation design builds on the lessons learned from the challenges of the previous HS evaluation, completed in 2020. The previous design used a matched analysis for HS on a national level and used three maternal and child health-related data sources (the HS client data, vital records of live births and infant deaths, and the CDC Pregnancy Risk Assessment Monitoring System, or PRAMS). Although it was a comprehensive evaluation that addressed several important topics, it experienced limitations related to data quality; time-consuming data linkage processes; a lack of baseline data, which made it difficult to account for preexisting risk factors; and variability in the duration of services provided to individual clients. Together, these factors limited the evaluation's ability to attribute observed differences to the program. The previous matched design also imposed a burden on grantees, who had to participate in lengthy data use agreements with CDC/PRAMS, the vital records office (VRO), and HRSA, and assist with obtaining consent from each HS participant to match her personal identifying information with other data sources. Taking these challenges into consideration, we have proposed the current outcome evaluation design (described below), which makes the role of baseline data less crucial and utilizes a comparison group analysis based on publicly available vital records data from one state that do not have to be linked to HS participant data at the individual level. Therefore, it does not impose a burden on grantees to obtain additional participant consent or engage in data sharing agreements with other agencies. We have proposed to use a dosage analysis, in which dosage refers to the level of exposure (e.g., duration or amount) to HS activities and services. Dosage analysis makes it possible to apply standard regression with duration/amount as the covariate and the target outcomes as dependent variables. It estimates the effects of the intervention within the HS participant population without the need for an external control group. We are supplementing the basic dosage analysis with additional analysis using a smaller sample with limited variables to strengthen and support the dosage model.

Dosage analysis will measure the association of the HS program with important health outcomes for mothers and infants. It hypothesizes that desirable outcomes for HS participants and their children rise steadily with the amount of program "dosage" received, i.e., the amount of interaction and/or services the HS program supplies to that mother. It then uses a regression analysis to capture the trend in this relationship, tracking movement from mothers with high doses to mothers with lower doses. Once the movement of the outcome with a given dosage change is "discovered," the model can be used to project each HS participant's expected outcome at 0 dosage, e.g., what a particular mother's depression score or a particular newborn's birth weight would have been had she received no HS inputs. This projected outcome at 0 then serves as a "counterfactual" against which to measure the impact of the HS program on that mother or child, defined as her/his actual outcome minus her/his predicted outcome at 0 dosage. The primary reason for using the dosage analysis methodology is the difficulty in obtaining data that can be used to construct effective post-hoc control groups across the full program. The outcome evaluation will estimate a series of dosage models. The ideal dosage model would include every interaction of the participant with HS, but participant data reported in HRSA's HSMED does not record every interaction. Therefore, the outcome evaluation will begin by approximating the ideal dosage analysis, using time spent participating in the HS program as a proxy for the quantities of all types of program inputs (interactions and services) received. The regression model will depend on the specific outcome/dependent variable being modeled, but we expect that we will be able to stay within the class of generalized linear models (linear regression, logistic regression, Poisson regression, etc.).

The initial, basic dosage analysis will capture the extent to which outcomes (e.g., preterm birth, low birthweight, and infant death) rise or fall as one moves from participants with long durations of HS services to recipients with shorter durations. Duration is determined from the date of a participant's enrollment in HS to her estimated date of delivery. The model will hold constant measured background characteristics of participants and any measurable factors that likely affect both duration in the program and the outcome of interest, without the former causing the latter (e.g., age at pregnancy, educational attainment, race/ethnicity, income, insurance, timing of entry into prenatal care, poor prior pregnancy outcome, preventive care, usual source of care, pre-pregnancy medical conditions, and substance use). After controlling for these factors, the trend in outcomes with duration derived by the model can be projected to give each HS participant's expected outcome had that participant received no HS services (dosage=0).
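To make the mechanics of the basic dosage model concrete, the sketch below fits a logistic regression of a binary outcome on program duration plus background covariates and then projects each participant's expected outcome at zero dosage. This is a minimal illustration rather than the evaluation's actual code: the file name, column names, and covariate set are hypothetical, and the final specification will depend on the outcome being modeled.

```python
# Minimal sketch of the basic dosage model; the file name and all column
# names are hypothetical stand-ins for the HSMED-derived analysis file.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant: a binary outcome (e.g., preterm birth), duration
# of HS participation (enrollment date to estimated delivery date, in weeks),
# and measured background characteristics used as controls.
df = pd.read_csv("hs_participants.csv")

# Logistic regression within the class of generalized linear models:
# outcome as a function of dosage (duration) plus background covariates.
model = smf.logit(
    "preterm_birth ~ duration_weeks + age + C(race_ethnicity)"
    " + C(education) + C(insurance) + prior_poor_outcome",
    data=df,
).fit()

# Counterfactual: each participant's predicted risk at zero dosage,
# holding her background characteristics fixed.
p_actual = model.predict(df)
p_zero = model.predict(df.assign(duration_weeks=0))

# Per-participant impact estimate: model-implied risk minus projected risk
# at dosage = 0; the mean gives an overall program association.
print("Mean estimated change in risk:", (p_actual - p_zero).mean())
```

Under this setup, an enhanced model of the kind described below would simply add covariates for the types and quantities of HS services received, where grantees collect those data.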

We are supplementing the basic dosage analysis with additional analyses, which are possible only for a smaller sample of grantees that are regularly able to collect the additional data, and with limited variables. An enhanced dosage model will therefore be used to strengthen inferences about the association of HS with outcomes by going beyond treating duration of program participation as the sole aspect of women's HS experiences that influences program impacts. The evaluation will use data from up to nine grantees that collect participant data on the type and quantity of HS services, making it possible to measure HS inputs reasonably well (e.g., the number of prenatal visits arranged by HS, the number of prenatal depression support/services visits received from HS, and the number of prenatal mental health counseling service sessions received from HS). The results of the basic and enhanced dosage analyses will be compared to determine the extent to which the information provided by the additional participant data from grantees is necessary to obtain an approximate representation of HS program dosage.

We are proposing to conduct a parallel analysis using propensity score matching (PSM) to provide support for the dosage model; however, this analysis only makes it possible to look at a single outcome. The outcome study will, therefore, also check the reliability of its dosage modeling approaches in one state for one central outcome, a 0/1 indicator of full-term versus preterm birth. The previous evaluation taught us that it is not feasible to obtain data use agreements with all states; this parallel analysis limits the number of data use agreements required while helping to ensure the consistency of the basic dosage model. Reliability checks will use a comparison group methodology, choosing comparison groups from VRO data using PSM. Creating a counterfactual in this manner and contrasting the resulting impact findings with the original dosage model results will test the original methodology where its greatest risk of bias lies: in the possibility that some of the trend in outcomes modeled as dosage in fact results from differences in the types of mothers and children being compared. Impact estimates based on a vital records comparison group will not include the dosage covariate and potential confounding factors. Moreover, the VRO system can supply a large sample of mothers who do not participate in HS as potential comparison group members.


Controlling for background characteristics should minimize this risk, but there will likely be other unmeasured characteristics of the mothers and infants that matter to outcomes and could create bias. To address this, the evaluation will, in one state, create alternative impact estimates that do not depend on the outcome-dosage relationship. These will be comparison group findings from a state where a large sample of mothers who do not participate in HS can be obtained from vital records data. Based on documentation of the HSMED and VRO data files, this analysis will use six predictor variables: woman's age; race; woman's educational attainment; infant death in a prior pregnancy; preterm birth in a prior pregnancy; and twins or multiples in the current pregnancy. Even with alignment of these six variables, bias could still arise in the comparison group-based impact estimates. For example, HS participants likely exceed non-HS mothers from the VRO, on average, in their motivation to obtain good natal outcomes, leading to more favorable outcomes (the upper bound of the impact estimate). Or, because the set of mothers in the VRO encompasses substantially fewer high-risk groups, the outcomes for HS women may be less favorable (the lower bound estimate of impact). Two versions of this analysis will be undertaken, one where the bias, should it occur, is positive and the other where the bias is negative. Then, should findings from the dosage analysis fall between these results, the dosage approach and its model will be supported. If the dosage model findings are incompatible with the bounds of the comparison analysis, a switch will be made to the next-best dosage model and, if necessary, a third-best model. The first successful model can then be implemented in all states using exclusively HS participant data. Obtaining impact assessments based on vital records from multiple states might further support the basic dosage model, but given the additional resources and time needed to obtain such VRO data, we did not consider it cost-effective or essential.
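As an illustration of this reliability check, the sketch below estimates propensity scores from the six predictor variables named above and matches each HS participant to the nearest non-HS mother from vital records. The file and variable names are hypothetical, and the actual matching procedure (e.g., calipers, matching with or without replacement) may differ.

```python
# Illustrative propensity score matching (PSM) sketch with hypothetical
# file and variable names; not the evaluation's actual matching procedure.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Combined file: HS participants (hs = 1) and vital-records mothers (hs = 0).
df = pd.read_csv("hs_and_vro_mothers.csv")

# The six matching variables described above, with categoricals encoded.
predictors = ["age", "race", "education", "prior_infant_death",
              "prior_preterm_birth", "multiple_gestation"]
X = pd.get_dummies(df[predictors], drop_first=True)

# Step 1: estimate the propensity score (probability of HS participation).
df["pscore"] = LogisticRegression(max_iter=1000).fit(
    X, df["hs"]
).predict_proba(X)[:, 1]

# Step 2: match each HS participant to the nearest non-participant on score.
treated = df[df["hs"] == 1]
controls = df[df["hs"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = controls.iloc[idx.ravel()]

# Step 3: contrast the 0/1 preterm-birth outcome across matched groups; this
# matched difference is the comparison-group impact estimate used to bound
# and check the dosage model findings.
diff = treated["preterm_birth"].mean() - matched["preterm_birth"].mean()
print("Matched difference in preterm-birth rate:", diff)
```

If the dosage-model estimate falls between the upper- and lower-bound versions of this matched comparison, the dosage approach is considered supported, as described above.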

In addition to the outcome component, an attempt has been made to capture some of the unique contributions of HS programs with additional methods, such as the case study approach. In the case study approach, we will use a small sample of up to 15 selected grantee programs that have a larger number of participants and established Community Action Networks to examine best practices shown by some grantees that may be applicable to other HS communities. The rationale for selecting these grantees is that they are evaluable: they can contribute the data needed for this component of the evaluation. While the case study approach may limit generalizability, the goal is to identify successful strategies and processes for addressing challenges in program implementation that can be shared and adopted by less experienced grantees.

This comprehensive assessment is subject to some data limitations that may be challenging for the evaluation, including: (1) program data on HS participants collected by grantees may be missing or incomplete; (2) grantees may be unwilling to participate in the different data collection activities (surveys and interviews for case studies, additional participant data for enhanced dosage analysis) due to such factors as staff turnover and lack of staff resources; and (3) the evaluation contractor is unable, under its contract with HRSA, to provide monetary incentives to boost survey response rates.

We will mitigate these challenges using a variety of strategies. To address missing or incomplete data, we will develop decision rules that specify the data elements that are included (and not included). To maximize participation of the Healthy Start sites in the evaluation, the evaluation contractor has been engaging with the grantees to obtain their input on the evaluation and has shared findings at different meetings to highlight the importance of their contributions to this evaluation. Going forward, the evaluation contractor will continue to work with the grantees and be available to provide assistance, including a dedicated toll-free telephone number and email address to answer questions and concerns, with the goal of minimizing burden as much as possible.

2. Procedures for the Collection of Information

Healthy Start Program Survey

The Healthy Start Program Survey will be conducted with all Healthy Start grantees over a period of two months in the second year of the evaluation. The survey is designed to be self-administered through a web-based application by Healthy Start staff. The web-based application will allow respondents to stop and save the survey and return to it later, reducing burden as they may complete it at their convenience. In addition, internal skip patterns and range checks will be programmed into the survey to ensure the accuracy of data and that respondents do not answer questions unnecessarily. All Healthy Start project directors will be emailed a link to the survey for completion. Once they complete the survey, they will click on a submit button. The web-based application will flag incomplete surveys weekly and grantees will receive email reminders to complete the survey.

Healthy Start Network Survey

The Healthy Start Network Survey will be conducted over a three-month period at the 15 Healthy Start sites selected for case studies. Each case study site will be asked to provide names and email addresses of their active CAN members. The survey will be fielded in the third year of the evaluation to a total of 600 respondents. The survey is designed to be self-administered through a web-based application by CAN members. The survey will take approximately 20 minutes to complete. The web-based application will allow respondents to stop, save the survey, and return to it at a later time, thus reducing burden as they may complete it at their convenience. In addition, internal skip patterns and range checks will be programmed into the survey to ensure the accuracy of data and that respondents do not answer questions unnecessarily. Active CAN members will be emailed a link to the survey for completion. Once a CAN member completes the survey, they will click on a submit button and web programming staff will be notified of the completion. The web-based application will flag incomplete surveys weekly, and CAN members will receive email reminders to complete the survey.

Healthy Start Participant Survey

The Healthy Start Participant Survey will be conducted over a two-month period at the 15 Healthy Start sites selected for the case studies. The survey will be fielded in the third year of the evaluation to a total of 750 Healthy Start participants. The project directors at the 15 case study sites will be asked for the email addresses and phone numbers of currently enrolled and active Healthy Start participants. We will email participants with a web survey link for completion, and the email will also contain contact information (an email address and a telephone number) for assistance to complete the survey. If grantees do not wish to share participant email addresses, the program staff will be asked to forward the email about the survey to participants. The survey is designed to be self-administered through a web-based application by Healthy Start participants. The survey will take approximately 15 minutes to complete. The web-based application will allow respondents to stop, save the survey, and return to it at a later time, thus reducing burden as they may complete it at their convenience. In addition, internal skip patterns and range checks will be programmed into the survey to ensure the accuracy of data and that respondents do not answer questions unnecessarily. For participants who have difficulty completing the survey over the web, assistance will be offered to help them complete the survey over email or by telephone.


Healthy Start Stakeholder Interview Guide

Key informant interviews will be conducted using the Healthy Start Stakeholder Interview Guide during site visits to the 15 Healthy Start sites selected for the case studies in the second and third year of the evaluation. At each site visit, we will schedule meetings to conduct individual or small group interviews with the program director, data/evaluation team members, case managers, fatherhood coordinator, CAN coordinator, outreach staff, and other identified staff responsible for project activities. Interviews lasting approximately 45 minutes will be conducted with these key informants at each site. We anticipate up to 10 interviews per site for a total of 150 interviews across the 15 selected Healthy Start case study sites. At each site, we will attempt to schedule interviews to take place over two days and within regular work hours. The two-person interview team will include a senior team member to lead the interviews and a junior member to help schedule and facilitate the interviews. We will audio-record the interviews, if key informants agree, and transcribe the recordings. Interviews that cannot be scheduled during the site visits will be completed virtually at a time convenient to the interviewee.



Information collection schedule

Table B.2 summarizes the information collection schedule. After OMB approval is received, recruitment and consent procedures will be adapted as needed for each site.

Table B.2. Information Collection Schedule

Healthy Start Participant Survey

Administer the survey: September – November 2023

Analyze survey data: December 2023 – January 2024

Prepare report and brief stakeholders: February – March 2024

Healthy Start Network Survey

Administer the survey: October 2023 – January 2024

Analyze survey data: February – April 2024

Prepare report and brief stakeholders: May – June 2024

Healthy Start Program Survey

Administer the survey: January – February 2024

Analyze survey data: March – May 2024

Prepare report and brief stakeholders: June – July 2024

Healthy Start Stakeholder Interviews

Conduct key stakeholder interviews: September 2023 – April 2024

Analyze interview data: September 2023 – June 2024

Prepare report and brief stakeholders: July – August 2024

Final Report

Prepare and submit final report: September 2024

Present final report to HRSA: September 2024


3. Methods to Maximize Response Rates and Deal with Nonresponse

The ability to gain the cooperation and participation of potential respondents is important to the success of the Healthy Start evaluation. Methods to maximize these response rates and minimize nonresponse are presented below.


Engaging Grantees Prior to and During the Evaluation

To introduce Healthy Start grantees to this evaluation, HRSA’s evaluation contractor presented the overall design and the plan for the data collection activities at the annual Healthy Start grantee meeting held virtually in November 2021 by HRSA. Additional sessions on the evaluation will be presented to the grantees through HRSA’s Healthy Start Technical Assistance (TA) and Support Center. The evaluation contractor will further provide ongoing findings from the evaluation and evaluation technical assistance throughout the data collection period. The evaluation contractor will also develop communication products such as fact sheets of findings or infographics of key results to disseminate to the grantees.

Request Potential Respondents to Participate in Surveys through Trustworthy Sources

All potential respondents will be requested to participate in data collection activities by familiar and trustworthy individuals to maximize response rates for each survey:

  • For the Program Survey, HRSA’s project officers will send an email to all Healthy Start program directors informing them that they will receive this survey from the evaluation contractor and request that they participate.

  • For the Network Survey, the HRSA project officers of the selected case study sites will send an email to the Healthy Start program directors requesting that they notify their active CAN members via email about the upcoming survey. The evaluation contractor will email the survey to the CAN members.

  • For the Participant Survey, the evaluation contractor will contact the program directors of the 15 selected Healthy Start programs that constitute the case studies to request the email addresses of their currently enrolled participants. The contractor will also ask program directors or program staff to notify participants about the upcoming survey. The contractor will send the web survey link to the participant email addresses provided and track the survey responses for follow-up reminders. Alternatively, if the program directors prefer that the program communicate directly with their participants, they or designated program staff will send the emails with the survey link to their Healthy Start participants, follow up with the participants to remind them to complete the survey, and send confirmations to the contractor that they sent the survey link and conducted follow-up as needed.



The contractor will also work with trusted sources such as project directors, project officers, and program staff to overcome barriers to participation and minimize nonresponse bias. Psychosocial factors, internet access, transportation, and life stressors are some of the many factors that may influence response rates. Other issues, such as paternity, parole, and documentation status, may make certain individuals less likely to respond. The contractor and program staff will assure respondents that any personal information collected (e.g., names, phone numbers, email addresses) will be used for the purposes of survey administration only and will be kept confidential. Personal information will be kept on secure, password-protected computers and will not be shared with anyone outside of the contractor. In addition, the contractor will not include any information in reports that could identify anyone who takes part in the surveys. Other personal information (e.g., home addresses) that is not needed for survey administration will not be collected.

Minimizing Nonresponse

Healthy Start Program, Network, and Participant Surveys. The previous Healthy Start Program Survey conducted with grantees (OMB #0915–0338) had a response rate of 95 percent,2 and we expect a similar response rate for the current survey. Based on previous experience conducting network surveys with organization partners, and taking into account the general reduction in survey response rates, we expect a response rate between 50 and 70 percent for the Healthy Start Network Survey. A previous Healthy Start participant survey had a response rate of 66 percent.3 Based on that experience, and taking into account the general reduction in survey response rates, we are aiming for a response rate of 40-60 percent at the Healthy Start sites. We anticipate this response rate because we will use a multi-method approach that includes contacting participants via email to complete the web-based survey and providing opportunities to complete forms via email or phone.

Although we do not anticipate problems with response rates, all three self-administered web-based surveys will allow respondents to stop, save their responses, and return to the survey at their convenience, encouraging completion. Respondents will be able to complete the surveys on their computers, tablets, or phones. In addition, clear instructions with an email address and telephone number for a help desk will be provided to answer any questions that respondents may have. Implementing the form in a web-based application will provide a way to collect high-quality and consistent data and minimize burden by: (1) routing respondents through the form, thus avoiding pathing errors; (2) including range checks, so that out-of-range values are checked and flagged for respondents to correct immediately; and (3) including consistency checks to ensure that the respondent's answers are consistent throughout the questionnaire. We will develop clear instructions and program the web-based application to be as intuitive as possible to minimize the time potential respondents need to understand the process of completing the survey. During the field period, the web-based application will automatically send regular reminders to those who have not completed the survey.
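As a simple illustration of the range and consistency checks just described, the sketch below flags implausible or contradictory answers. The field names and limits are hypothetical, and the production web application will implement its own validation rules.

```python
# Illustrative range and consistency checks; field names and limits are
# hypothetical, not the production survey's actual rules.

def validate_response(resp: dict) -> list[str]:
    """Return messages flagging out-of-range or inconsistent answers."""
    problems = []

    # Range check: out-of-range values are flagged for immediate correction.
    meetings = resp.get("can_meetings_attended")
    if meetings is not None and not 0 <= meetings <= 52:
        problems.append("CAN meetings attended must be between 0 and 52.")

    # Consistency check: answers must agree across related questions.
    if resp.get("currently_enrolled") == "no" and resp.get("months_enrolled", 0) > 0:
        problems.append("Months enrolled reported, but respondent says not enrolled.")

    return problems

# Example: both an out-of-range value and an inconsistency are flagged.
print(validate_response({"can_meetings_attended": 99,
                         "currently_enrolled": "no",
                         "months_enrolled": 6}))
```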

We expect that respondents to the Program Survey and Network Survey will have internet access and the equipment to complete the surveys online. For the Participant Survey, we will provide alternative methods of completing the web survey for enrolled Healthy Start participants without internet access or equipment. For participants with a landline or an unlimited cell phone plan, we will have a 1-800 toll-free phone number that they can call to complete the survey. For those without a landline or computer, and those with limited cell phone plans, we will work with the Healthy Start program directors or program staff to enable participants to use computers at the Healthy Start sites to access the web survey when they come in for services. Participants may also use our 1-800 toll-free line should they prefer to complete the survey over the phone using the landline at their Healthy Start site. We will request that participants who access the survey online or by phone at their Healthy Start site be provided with space where they can complete the survey confidentially. In addition, for nonresponding participants who do not contact us for assistance with the survey, we will attempt to reach them by phone if their phone number is available, and we will ask the program staff for assistance with contacting the participants about the survey as needed. Once we reach these participants, we will ask them about their preferred way to take the survey and will facilitate its completion.

Healthy Start Stakeholder Interviews. The interviews will be completed during the visits to the 15 case study sites. Interviews will be scheduled at the convenience of the key informant. We will complete the interviews virtually, or by phone, if travel logistics or unavailability of the key informant make it difficult to hold the interviews during the site visit. A response rate of 70-80 percent is expected for key informants during the site visits, based on experience with similar activities and the typically high motivation of Healthy Start staff, notwithstanding their heavy workloads.

4. Tests of Procedures or Methods Undertaken

The evaluation contractor carried out pilot tests of the Healthy Start Program, Network, and Participant Surveys and the Stakeholder Interview Guide with nine respondents for each instrument. Key findings from the pilot test for each instrument are discussed below. Attachment B6 provides details of the pilot testing results and recommended changes to the instruments.

Healthy Start Program Survey. The average time to complete the survey during the pilot test was 30 minutes. The respondents, however, noted that they would need time to gather some of the information requested in the survey. We therefore increased the estimated time to 60 minutes so respondents have an additional 30 minutes to gather information from their records, where needed. Based on feedback from the pilot test, we also made revisions to improve flow, clarity, and web navigation. For example, we revised programming instructions to freeze the top row of tables so that response options remain visible as respondents scroll down. We increased the character limit for open-ended responses, as some respondents wanted to provide detailed answers to some questions, and we added instructions directing respondents to select one response per row in tables with multiple response options to minimize skipping of items. We also deleted some questions and added a few others recommended by respondents, such as questions on the impact of COVID-19 on Healthy Start activities. We made some wording changes to make questions and response options clearer and used inclusive language, where appropriate. We have also developed a Frequently Asked Questions (FAQ) document to be sent in advance to the program directors (Attachment B7), because respondents wanted to know beforehand the types of questions they would be asked and the information they would need to extract from their data systems or gather through consultation with their program staff. We also revised the survey introduction to include a hyperlink to the FAQs.

Healthy Start Network Survey. The time to complete the network survey during the pilot test was fairly consistent with the estimated survey length (20 minutes). Based on the feedback from the pilot test, we revised the survey to clarify concepts and questions. We added definitions of both the Community Action Network (CAN) and Healthy Start in the introduction of the survey, because grantees and their community partners may refer to the CAN and Healthy Start by different local names. In addition, we developed FAQs and revised the survey introduction to include a hyperlink to them (Attachment B8). Furthermore, we added a "Don't Know" option to certain questions so that respondents who do not have an answer can select it rather than skip the question. We also added new questions at the beginning of the survey to confirm whom respondents represent in the CAN (i.e., themselves or an organization) so that they answer the survey questions from that perspective, and we added programming instructions to direct respondents to specific questions based on this initial response. Finally, we re-ordered some questions and revised some response options and question wording for greater flow and clarity.

Healthy Start Participant Survey. The time it took to complete the survey was consistent with the estimated survey length (15 minutes). We revised some questions for greater clarity and changed the wording in some questions to reflect inclusive language, where appropriate. We also added some questions on COVID-19 with regard to participation in Healthy Start-related activities.

Healthy Start Stakeholder Interview Guide. The time it took to complete the interviews was consistent with the estimated average interview length (45 minutes). We deleted one question, as participants thought it covered the same topic as another question. The pilot test revealed that some stakeholders would not be able to answer questions about program activities in which they were not directly engaged. In response, we included an instruction for the interviewer to skip such questions when they are not relevant to a particular stakeholder. To reduce interview time, we also revised a question to ask interviewees about key activities rather than all activities conducted.

5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Consultations on the evaluation design, data collection instruments and protocols, survey and interview questions, data management, and analysis of this evaluation occurred throughout the planning phase of the project. HRSA/MCHB staff were consulted about the evaluation design and methodology for the study. These consultations provided the opportunity to ensure the technical quality and appropriateness of the overall evaluation design and data analysis plans, obtain advice and recommendations concerning the data collection instruments, and structure the evaluation and instruments to minimize overall and individual response burden. Consultations occurred with the following individuals in connection with this study (listed in alphabetical order in Table B.3). The consultative roles that the individuals played in the evaluation are also included in parentheses. Their recommendations were incorporated into the study design and instruments on an ongoing basis. The person responsible for receiving and approving the instruments and information collection is Kimberly Burnett-Hoke (MCHB COR), with subject matter expertise input from the following MCHB staff: Maura Dwyer, Ada Determan, Sarah Barrett, and Anne Leong.

Table B.3. Individuals Consulted

Name

Affiliation

Sarah Barrett, MPH

(evaluation design, data collection, and analysis)


Division of Healthy Start and Perinatal Services (DHSPS)

Maternal and Child Health Bureau (MCHB)

Health Resources and Services Administration (HRSA)

Sbarrett@hrsa.gov

Kimberly Burnett-Hoke, COR

(contract management & approval of contract deliverables starting Sept 2022)

Division of Healthy Start and Perinatal Services (DHSPS)

Maternal and Child Health Bureau (MCHB)

Health Resources and Services Administration (HRSA)

kburnett-hoke@hrsa.gov

Clara Busse, PhD

(ORISE intern supporting design, data collection and analysis until June 2022)

Office of Epidemiology and Research

Maternal and Child Health Bureau

Health Resources and Services Administration


Ada Determan, PhD, MPH

(evaluation design, data collection, and analysis)

Division of Healthy Start and Perinatal Services (DHSPS)

Maternal and Child Health Bureau (MCHB)

Health Resources and Services Administration (HRSA)

adeterman@hrsa.gov

Maura Dwyer, DrPH, MPH

(evaluation design, data collection, and analysis starting August 2022)

Division of Healthy Start and Perinatal Services (DHSPS)

Maternal and Child Health Bureau (MCHB)

Health Resources and Services Administration (HRSA)

Mdwyer@hrsa.gov

Judy Harvilchuck, PhD

(COR until July 2022)

Division of Healthy Start and Perinatal Services

Maternal and Child Health Bureau

Health Resources and Services Administration

Anne Day Leong, PhD, MSW

(evaluation design, data collection and analysis)

Office of Epidemiology and Research

Maternal and Child Health Bureau

Health Resources and Services Administration

ALeong@hrsa.gov



Individuals Collecting and/or Analyzing Data

     Westat is the evaluation contractor. Westat staff and their consultants designed the evaluation and the data collection instruments in consultation with HRSA/MCHB staff listed in Table B.3. Westat staff will lead the data collection and analysis efforts presented in this OMB package, in collaboration with HRSA/MCHB’s staff. Table B.4 provides a list of Westat’s evaluation team that will be involved in this effort and specifies each member’s role and contact information.

Table B.4. Westat (Contractor) Evaluation Team

Name and Role

Contact Information

Sarah Ball, ScD, MPH (evaluation design, data collection, and analysis)

Westat (contractor)

240-314-2359

sarahball@westat.com

Angela Cheung, MPH

(data collection and analysis)

Westat (contractor)

404-383-0482

angelacheung@westat.com

Robyn Ferg, PhD

(data analysis)

Westat (contractor)

240-453-5642

robynferg@westat.com

Katherine Flaherty, ScD, MA

(evaluation design, data collection, and analysis)

Westat (contractor)

508-613-5990

katherineflaherty@westat.com

Carly Hallowell, MPH

(data collection and analysis)

Westat (contractor)

717-368-8851

carlyhallowell@westat.com

Grace Huang, PhD, MPH

(evaluation design, data collection, and analysis)

Westat (contractor)

301-517-4047

gracehuang@westat.com

Kristen Keating, PhD, MPH

(data collection and analysis)

Westat (contractor)

301-738-3591

kristenkeating@westat.com

Salome Kiduko, MPH

(data collection and analysis)

Westat (contractor)

404-777-9443

salomekiduko@westat.com

Milton Kotelchuck, PhD, MPH

(evaluation design and data analysis)

Westat (contractor)

617-877-4225 
mkotelchuck@pmgh.harvard.edu

Jean Opsomer, PhD, MBA, MS

(data analysis)

Westat (contractor)

301-738-3577

jeanopsomer@westat.com

Saloni Sapru, PhD, MA

(evaluation design, data collection, and analysis)

Westat (contractor)

240-314-2363

salonisapru@westat.com

Mallorie Smith, BS

(data collection and analysis)

Westat (contractor)

240-314-2520

malloriesmith@westat.com

Zachary Weber, PhD, MS

(data analysis)

Westat (contractor)

(240) 314-2576

zacharyweber@westat.com






1 Secondary data from HRSA's Discretionary Grants Information System (DGIS) and the Healthy Start Monitoring and Evaluation Data System (HSMED), to which grantees report data on their programs' performance measures and on their clients/participants for other purposes, will be used to identify grantees that meet the criteria.

2 Parasuraman, S.R., & de la Cruz, D. (2019). Evaluation of the Implementation of the Healthy Start Program: Findings from the 2016 National Healthy Start Program Survey. Maternal and Child Health Journal, 23, 220–227. https://doi.org/10.1007/s10995-018-2640-9

3 Rosenbach, M., O'Neil, S., Cook, B., Trebino, L., & Klein Walker, D. (2010). Characteristics, Access, Utilization, Satisfaction, and Outcomes of Healthy Start Participants in Eight Sites. Maternal and Child Health Journal, 14(5), 666–679.
