


Supporting Coordinated Benefits Delivery to Foster Whole Family Approaches



Formative Data Collections for Program Support


0970-0531




Supporting Statement

Part A - Justification

October 2024


Submitted By:

Immediate Office of the Assistant Secretary (IOAS)

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201










A1. Necessity for the Data Collection

The Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval for data collection to inform the development of a comprehensive toolkit (“the Toolkit”) for state and local (SL) agencies to support their efforts to coordinate or integrate benefits administration and delivery to improve experiences of families served by multiple benefits.


Background

ACF promotes the economic and social well-being of families, children, youth, individuals, and communities, working towards a vision of children, youth, families, individuals, and communities who are resilient, safe, healthy, and economically secure. ACF supports families through programming that includes child care subsidies, child support services, healthy marriage and responsible fatherhood programming, domestic violence prevention, refugee resettlement, Head Start, Temporary Assistance for Needy Families (TANF), and more. ACF aims to provide services in a manner that promotes equity, supports prevention, and serves whole families in ways that empower them not only to survive but to thrive (Contreras 2023). While ACF programs share this common vision and often serve the same families, they are governed at the federal level by over a dozen program offices, and they have varying eligibility rules, definitions, and funding streams both at the federal level and at the SL levels at which they are administered.


Individuals seeking benefits and services from ACF-funded programs therefore face a complex system of applications and requirements for each program for which they qualify. During active engagement related to the development of ACF’s Strategic Plan, ACF heard that families experience programs as siloed and hard to access. ACF also heard that having to access multiple services through different entry points means that families must tell their stories repeatedly, a process that can be re-traumatizing for families who are asked repeatedly about their challenges (Chang 2022). Families face these issues when interacting with multiple ACF programs, as well as with other programs in the broader social safety net.


Recognizing this, ACF has, in recent years, increased its public commitment to implementing whole family approaches to service delivery and promoting integrated and coordinated services. ACF’s strategic plan, released in January 2022, includes five strategic goals that intentionally cut across ACF programs and populations to reflect the interrelatedness of programs and to promote a whole-family approach. ACF hopes to serve families more seamlessly across the lifecycle of their interactions with benefits programs to provide the strongest possible support for families (ACF, 2022).


Proposed Information Collections to Support Efforts

This project builds on ACF’s efforts to promote seamless programming for whole families by supporting improved coordination and integration of human service programs. Specifically, this project will inform the development of a Toolkit for SL entities to support coordination of human services programs; the project team will pilot test the Toolkit with 8 interested SL agencies for up to 9 months.


During the pilots, the project team will assist participating agencies in identifying gaps and next steps in their benefits coordination process and, through an iterative process, will work together to refine the Toolkit and expand its utility. Ultimately, this project supports ACF’s goals of increasing access, easing burden, and improving outcomes for children and families served by ACF programs.


Legal or Administrative Requirements that Necessitate the Collection

Congress provided funding to ACF in Fiscal Year 2023 for this demonstration program. ACF’s Fiscal Year 2024 Justification of Estimates for Appropriations Committees states, “The whole-family approaches to service delivery demonstration will develop a readiness assessment tool for state, local, territorial, and tribal agencies that are interested in coordinating two or more of their human services benefit programs, pilot test the tool with three to five interested agencies to assist them in identifying gaps and next steps in their benefits coordination process, and provide recommendations for refining the tool and expanding its use.”


A2. Purpose of Survey and Data Collection Procedures

Overview of Purpose and Use

Throughout the pilots, we will gather information from SL team members to inform planned revisions to the Toolkit, improve our coaching approach during the pilot period, and develop recommendations for supporting wider use of the Toolkit after the pilot. All our information collection tools are designed to illuminate what about the Toolkit is working well, what is not working, and how elements of the Toolkit can be improved. We will also solicit information about how sites’ benefits coordination efforts are proceeding and challenges they are facing. The information from these pilots will be used by ACF to finalize the Toolkit, which will then be made freely available as a resource for state and local administrators of benefits programs, including but not limited to programs administered by ACF.

The project will involve two sources of data collection:

  1. Toolkit Feedback and Reflection (TFAR) Questionnaire (Instrument 1). We will administer a brief, automated, web-based qualitative questionnaire to each pilot’s site lead(s) and up to one administrator at a partner agency, up to twice each month following a pilot coaching or technical assistance session. The questionnaire primarily includes open-ended questions that ask SL team leads to describe key activities since the last time they completed the instrument (or since pilot launch), successes and challenges in advancing their coordination goals, how they used the Toolkit and other materials developed to support their activities, and which components of the Toolkit and other materials provided they found more or less useful. The few closed-ended questions included will be used to contextualize the open-ended responses for analysis and/or to direct the instrument to skip to questions relevant to the respondent; they will not be tabulated or reported. We will summarize qualitative themes from open-ended responses and will not conduct any quantitative tabulation. The primary use of information from this source will be to inform revisions to the Toolkit. The project team will also use this feedback to inform subsequent coaching agendas to better support the sites in using the Toolkit. We may also share aggregated findings with federal leadership and staff and with pilot sites. When we share findings, we will describe the study methodology and clearly document limitations regarding generalizability.


  2. Interviews (Instrument 2). We will conduct interviews with key SL team members midway through the pilots and again at the end of the pilots. These interviews will capture the experiences of staff involved in using the Toolkit beyond site leads and add nuance to our understanding of how the Toolkit and the project team’s coaching supported SLs in furthering their coordination efforts; the end-of-pilot interviews will also capture key team members’ reflections on their full experience using the Toolkit as part of the pilots. Findings from the mid-pilot interviews will be used by the project team to inform subsequent coaching agendas to better support the sites in using the Toolkit. Findings from both rounds will inform revisions to the Toolkit, which will be made public after the project team enacts and receives approval of updates following the pilots. We may also share aggregated findings with federal leadership and staff and with pilot sites. When we share findings, we will describe the study methodology and clearly document limitations regarding generalizability.


This proposed information collection meets the following goals of ACF’s generic clearance for formative data collections for program support (0970-0531):

  • Delivery of coaching related to development or refinement of program and grantee processes, specifically in the form of supporting SLs’ use of the Toolkit to support coordination and integration.

  • Planning for provision of coaching to support use of the Toolkit.

  • Obtaining feedback about SLs’ experiences using the Toolkit to inform ACF support through provision of an updated Toolkit for the field.


Processes for Information Collection

Web-based questionnaire. Following a pilot coaching or technical assistance session, we will send an email (Appendix A) with a link to the TFAR questionnaire (Instrument 1) to up to one site lead and one additional administrator at a partner agency per SL. This will occur up to twice per month during the 9-month coaching period. Coaches working with each SL team, in coordination with project and pilot leadership, will identify whether an SL team should complete the survey more frequently than monthly based on the pace of coaching sessions and SL team efforts. The questionnaire primarily includes open-ended questions, with a few closed-ended questions to contextualize the open-ended responses for analysis and/or to direct the instrument to skip to questions relevant to the respondent. The closed-ended questions will not be tabulated or reported. We will use Qualtrics to program the instrument and capture responses. The questionnaire is designed to take no more than fifteen minutes to complete.

Interviews. We will conduct two rounds of interviews. We will conduct a first round of interviews with key SL team members midway through the pilots. We will conduct these interviews virtually via a secure video conferencing platform such as Microsoft Teams or Zoom, or in person during site visits. We will work with site leads to identify key staff members involved in using the Toolkit to further coordination efforts across agencies or offices. We will invite these staff via email to participate and will schedule the interviews at times convenient to invitees. We will use a semi-structured protocol to guide the interviews designed to capture in-depth information on their experiences with the pilot up to the point of the interview, as well as their mid-pilot reflections on how the Toolkit could be improved. We will take notes and audio record the conversations if participants grant us permission.


We will also conduct interviews with key SL team members just before the pilots conclude. We will conduct these interviews virtually via a secure video conferencing platform such as Microsoft Teams or Zoom, or in person during site visits. We will work with site leads to identify key staff members involved in using the Toolkit to further coordination efforts across agencies or offices. Where appropriate, we will adjust the questions asked of site leads to reduce redundancy with the TFAR questionnaire. We will use the same semi-structured protocol as in the first round to guide these second-round interviews, designed to capture in-depth information on participants’ experiences with the pilot and their reflections on how the Toolkit may be improved. We will ask the questions from the protocol relevant to sites’ pilot activities up to the point of the interview. We will take notes and audio record the conversations if participants grant us permission. The interview instrument is included as an attachment to this submission (“Instrument 2: Interview Protocol”).


A3. Improved Information Technology to Reduce Burden

The project team will use technology to reduce burden throughout each proposed information collection activity. Data collection will take place virtually wherever doing so will ease burden on participants. We will use email to invite pilot site leads and administrators at partner agencies to complete web-based questionnaires, and to invite pilot site team members to participate in interviews. Questionnaire respondents will complete the instrument via Qualtrics, and virtual interviews will be held on Microsoft Teams or Zoom.


A4. Efforts to Identify Duplication

The information collection process includes several measures to identify duplication. The study included an extensive preparatory work phase, which assessed the existing landscape of research, resources, and current or recent benefits coordination efforts, providing a foundation for the development of the Toolkit. Building off that phase, the team is developing a Toolkit that fills gaps in existing efforts. Therefore, the proposed information collection intended to inform revision to the Toolkit will not be duplicative of other efforts.


A5. Involvement of Small Organizations

No small businesses will be involved in this information collection.


A6. Consequences of Less Frequent Data Collection

The proposed frequency of information collection is designed to maximize the utility of the collected information to ACF. We have proposed engaging in these information collection activities at strategic points that will best serve ACF’s goals while not being overly burdensome on respondents. Collecting questionnaires less frequently and holding fewer interviews would leave the project team less able to provide quality, tailored coaching to pilot sites or to inform revisions to the Toolkit based on emergent findings throughout the pilot.


A7. Special Circumstances

There are no special circumstances for the proposed data collection efforts.


A8. Federal Register Notice and Consultation

Federal Register Notice and Comments

In accordance with the Paperwork Reduction Act of 1995 (Pub. L. 104-13) and Office of Management and Budget (OMB) regulations at 5 CFR Part 1320 (60 FR 44978, August 29, 1995), ACF published a notice in the Federal Register announcing the agency’s intention to request an OMB review of this information collection request to extend approval of the umbrella generic with minor changes. The notice was published on January 28, 2022 (87 FR 4603), and provided a sixty-day period for public comment. ACF did not receive any comments on the first notice. A second notice was published, allowing a thirty-day period for public comment, in conjunction with submission of the request to OMB. ACF did not receive any comments on the second notice.

Consultation with Outside Experts

In the preliminary work phase of the project, the team conducted individualized key informant interviews with outside experts. The team interviewed:

  • Technical assistance providers and coordination experts who have developed and deployed similar toolkits and resources

  • Administrators and practitioners who are potential users of the Toolkit and have previous experience conducting coordination efforts.


To better understand the needs of SLs, including what their engagement in pilots could look like, we also conducted a series of virtual sessions. We assembled a group of policymakers, system and program leaders, and people with lived experience navigating these systems who are potential end-users of the resource.


None of these activities involved uniform data collection from more than nine people; therefore, they did not require OMB clearance.


A9. Tokens of Appreciation for Respondents

No tokens of appreciation for respondents are proposed for this information collection.


A10. Privacy of Respondents

During interviews, we plan to collect personally identifiable information about respondents’ roles on their pilot teams; when administering TFAR questionnaires, we will collect the pilot site each respondent represents.


Information collected will be kept private to the extent permitted by law. Respondents will be informed of all planned uses of data, that their participation is voluntary, and that their information will be kept private. We will provide consent language to each web-based questionnaire and interview participant and will obtain their written consent to participate before proceeding with each information collection activity.


Information will not be maintained in a paper or electronic system from which data are actually or directly retrieved by an individual’s personal identifier.


A11. Sensitive Questions

There are no sensitive questions in this data collection.


A12. Estimation of Information Collection Burden

Burden Estimates

We have estimated burden for the proposed information collection based on extensive information about the instruments we propose to use, types of expected respondents, and points in time when we expect to collect information. Specifically, estimates are based on the following assumptions:

  • TFAR questionnaire: 15-minute instrument administered up to twice per month over 9 months to 1 site lead (anticipated to be a mid-to-high level human services program administrator) at each of 8 pilot sites and up to 1 mid-to-high level administrator at a partner agency.

  • Interviews: Two rounds of 60-minute interviews with teams at each of 8 pilot sites, one round at the mid-point of the pilot and one at the end. Interviews will be conducted with the site lead (a mid-to-high level administrator) and 2 mid-to-high level administrators at each partner agency (averaging 6 per site, including the site lead), 2 supervisory-level staff per pilot partner (averaging 6 per site), and 2 frontline-level staff per partner (averaging 6 per site).



Cost Estimates

The cost to respondents was calculated using the following Bureau of Labor Statistics (BLS) job codes:

  1. For mid-to-high level human services program administrators, we use “General and Operations Managers” [11-1021] and May 2023 wage data, which report a mean hourly wage of $62.18. To account for fringe benefits and overhead, the rate was multiplied by two, yielding $124.36. https://www.bls.gov/oes/current/oes111021.htm

  2. For supervisory-level staff, we use “Social and Community Service Managers” [11-9151] and May 2023 wage data, which report a mean hourly wage of $40.10. To account for fringe benefits and overhead, the rate was multiplied by two, yielding $80.20. https://www.bls.gov/oes/current/oes119151.htm

  3. For frontline-level staff, we use “Eligibility Interviewers, Government Programs” [43-4061] and May 2023 wage data, which report a mean hourly wage of $24.92. To account for fringe benefits and overhead, the rate was multiplied by two, yielding $49.84. https://www.bls.gov/oes/current/oes434061.htm


Instrument | Total Number of Respondents | Total Number of Responses Per Respondent | Average Burden Hours Per Response | Total Burden Hours | Average Hourly Wage | Total Annual Cost
TFAR questionnaire (mid-to-high level human services program administrators) | 16 | 18 | 0.25 | 72 | $124.36 | $8,953.92
Interview protocol (mid-to-high level human services program administrators) | 48 | 2 | 1 | 96 | $124.36 | $11,938.56
Interview protocol (supervisory-level staff) | 48 | 2 | 1 | 96 | $80.20 | $7,699.20
Interview protocol (frontline-level staff) | 48 | 2 | 1 | 96 | $49.84 | $4,784.64
Total Burden and Cost Estimates: | 160 | N/A | N/A | 360 | N/A | $33,376.32
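The burden and cost arithmetic above (respondents × responses × hours per response, priced at the doubled May 2023 BLS wages) can be cross-checked with a few lines of Python. This is a minimal sketch for verification only; the row values are taken from the estimates above, and the wages are the cited BLS rates multiplied by two for fringe benefits and overhead.

```python
# Cross-check of the A12 burden and cost arithmetic.
# Each row: (instrument, respondents, responses per respondent,
#            hours per response, hourly wage doubled for fringe/overhead).
rows = [
    ("TFAR questionnaire (administrators)", 16, 18, 0.25, 2 * 62.18),
    ("Interviews (administrators)",         48,  2, 1.00, 2 * 62.18),
    ("Interviews (supervisory staff)",      48,  2, 1.00, 2 * 40.10),
    ("Interviews (frontline staff)",        48,  2, 1.00, 2 * 24.92),
]

total_hours = 0.0
total_cost = 0.0
for name, n, responses, hours, wage in rows:
    burden = n * responses * hours   # total burden hours for the instrument
    cost = burden * wage             # annual cost to respondents
    total_hours += burden
    total_cost += cost
    print(f"{name}: {burden:g} hours, ${cost:,.2f}")

print(f"Total: {total_hours:g} hours, ${total_cost:,.2f}")
```

Note that doubling the supervisory wage gives 2 × $40.10 = $80.20 per hour; any small differences from rounded figures elsewhere in this statement stem from that doubling.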


A13. Cost Burden to Respondents or Record Keepers

There are no additional costs to respondents.


A14. Estimate of Cost to the Federal Government

The total cost to the federal government for the data collection activities under this current request will be $178,348.


A15. Change in Burden

This request is for an individual information collection under the umbrella formative generic clearance for program support (0970-0531).


A16. Plan and Time Schedule for Information Collection, Tabulation and Publication

Data collection will take place during the pilot period and will continue for one month following the pilot period (to allow for completion of all required interviews and questionnaires). Pending OMB approval, data collection will start November 2024 and end August 2025.


A17. Reasons Not to Display OMB Expiration Date

All instruments will display the expiration date for OMB approval.


A18. Exceptions to Certification for Paperwork Reduction Act Submissions

No exceptions are necessary for this information collection.





Attachments

Instrument 1: Toolkit Feedback and Reflection (TFAR) Questionnaire

Instrument 2: Interview Protocol

Appendix A: Email Request to Complete TFAR Questionnaire

Appendix B: Email Invitation to Participate in Interview

























