OMB Information Collection Request
Evaluation of the ESSA Title I, Part D Neglected or Delinquent Programs
U.S. Department of Education
Office of Planning, Evaluation and Policy Development
Policy and Program Studies Service
August 2016
Part B
Supporting Statement, Part B: Paperwork Reduction Act Submission
B. Description of Statistical Methods
B2. Procedures for Data Collection
B3. Methods to Maximize Response Rate
B4. Expert Review and Piloting Procedures
Appendix A: State Form to List Local Coordinator Contact Information
Appendix B: State Coordinator Questionnaire
Appendix C: Local Coordinator Questionnaire
Appendix D: Notification Letters
Appendix E: Case Study Interview Protocols
Appendix F: Request for Administrative Documents
This study will include the following three samples, which will provide a rich set of data for addressing the study’s research questions.
State coordinator survey sample. An estimated 160 state coordinators. This survey will be a census of state coordinators
Local coordinator survey sample. A representative sample of 1,400 local coordinators and their partners (juvenile justice and child welfare facility coordinators).
Case study sample. A purposive sample of five states that are implementing their Part D programs will be selected for the set of case studies. Within each selected state, data will be collected at state and local levels and will include state education agencies (SEAs), state agencies (SAs) for juvenile justice and child welfare, and six subgrantees, two in each of the three categories: school districts, correctional institutions, and child welfare facilities. We anticipate that each state’s ND coordinator will identify the six subgrantees.
State Coordinator Survey. The population of inference for the state coordinator survey is the coordinators who oversee the state-level Title I, Part D programs in each state. In particular, the target population includes the SEA coordinators who oversee the whole Part D program for their state and the SA coordinators who oversee the Part D program for their respective agencies (e.g., juvenile justice, child welfare).
Sample attainment. Because a frame does not currently exist, study staff will use the state coordinators list compiled by the U.S. Department of Education (ED)–funded National Technical Assistance Center for the Education of Neglected or Delinquent Children and Youth (NDTAC).
Sample size and sampling plan. There are an estimated 160 state coordinators — one SEA coordinator in all 50 states plus the District of Columbia and Puerto Rico (52) and two or three SA coordinators per state (108). Because of the small number of state coordinators, it is important to seek data from all of them. Therefore, the state coordinator survey will be a census of the state coordinators; no sampling will take place.
Local Coordinator Survey. The population of inference for the local coordinator survey is the coordinators who oversee Title I, Part D–funded programs at the local level. The local coordinator population includes (1) the local education agency (LEA)/school district coordinators who are responsible for overseeing local program subgrants and (2) the juvenile justice (delinquent) and child welfare (neglected) facility coordinators at those facilities that report operating a program funded by Title I, Part D.
Sample attainment. Because a list (sample frame) of all local coordinators does not currently exist, the study team will use a two-step process to assemble this list. First, AIR (American Institutes for Research, the study contractor) will contact SEA coordinators by e-mail and ask each to submit contact information for the LEA coordinators in their state. Second, the study team will e-mail LEA coordinators and request contact information for all of the local facility coordinators they work with. All of this information will be collected through an online form on a secure website. The process involves two steps because SEA coordinators are unlikely to have contact information for all coordinators at the local level. As a result, the list of local coordinators in each state will be received on a rolling basis. AIR will randomly select a sample of local coordinators stratified by type of facility or agency. The local coordinator contact information form that will be used by SEA and LEA coordinators is included in Appendix A.
Sample size. There are an estimated 3,000 local coordinators. A representative sample will be drawn (instead of taking a census) because this will be the most efficient use of project funds and should yield results that are at least as valid as those that would result from a census. The primary goal will be to produce a representative sample that yields about 1,200 completed surveys. Under the assumption of an 85 percent response rate, this requires drawing a sample of just over 1,400 local coordinators (1,200 ÷ 0.85 ≈ 1,412).
Sampling plan. The study team will use a stratified sampling plan across states, with simple random sampling (without replacement) within each stratum. The following stratification variables will be used when drawing the sample: (1) “state” in which the program on which the coordinator works is located (the 50 states, DC, and Puerto Rico); (2) whether the individual is a school district coordinator or a facility coordinator; (3) program type (e.g., neglect, delinquent) classification of the program on which the coordinator works.1
Two additional stratification variables will be used in the sampling plan, but each applies only to facility coordinators or only to school district coordinators. Local delinquent (D) facility coordinators will be stratified by whether youth's length of stay or involvement with the program is short (e.g., juvenile detention) or long (e.g., juvenile correction). Although duration also varies among neglected (N) programs, the distinction between long and short programs of this type is less clear, making it unlikely that local N facility coordinators can be stratified by program duration.
If states are able to provide a program duration variable for N programs, however, local N facility coordinators may also be stratified by duration. In addition, school district coordinators will be stratified by whether they oversee programs in schools or community centers for the at-risk population.
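To illustrate the mechanics of this design, the following is a minimal sketch of stratified simple random sampling without replacement. It assumes a hypothetical local coordinator frame containing the stratification variables described above (state, coordinator type, program type); the column names, records, and per-stratum target sizes are placeholders for illustration only, not the study's actual frame or allocation.

```python
import pandas as pd

# Hypothetical frame of local coordinators (placeholder data, not the study frame).
frame = pd.DataFrame({
    "coordinator_id": range(1, 13),
    "state":          ["CA"] * 6 + ["ID"] * 6,
    "coord_type":     ["district", "facility"] * 6,
    "program_type":   ["N", "D", "D", "N", "D", "N"] * 2,
})

# Illustrative per-stratum sample sizes (stratum = state x coordinator type x program type).
targets = {("CA", "district", "N"): 1, ("CA", "facility", "D"): 2,
           ("ID", "district", "N"): 1, ("ID", "facility", "D"): 1}

def draw_stratified_sample(frame, targets, seed=2016):
    """Simple random sampling without replacement within each stratum."""
    samples = []
    for (state, ctype, ptype), n in targets.items():
        stratum = frame[(frame["state"] == state)
                        & (frame["coord_type"] == ctype)
                        & (frame["program_type"] == ptype)]
        # Never request more cases than the stratum contains.
        samples.append(stratum.sample(n=min(n, len(stratum)), replace=False,
                                      random_state=seed))
    return pd.concat(samples, ignore_index=True)

print(draw_stratified_sample(frame, targets))
```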
Specifically, we will use a stratified sampling allocation plan that will be a hybrid between equal allocation (in which equal numbers of local coordinators are sampled from each State) and proportional allocation (in which the number of local coordinators sampled from each State is proportional to the State's local coordinator population).
Hybrid allocation is similar to proportional allocation except that States with 30 or fewer estimated local coordinators will be sampled with certainty. Hybrid allocation will give the most desirable balance between adequate coverage of small States and enough representation of large States (Exhibit B.1). By “small” and “large” we mean in terms of the number of local coordinators in the State.
Exhibit B.1. Number of Local Coordinators Sampled Per State
| State | Estimated Size of Local Coordinator Population | Sample Size for Hybrid Allocation |
| --- | --- | --- |
| Total | 3,000 | 1,449 |
| Alabama | 74 | 31 |
| Alaska | 7 | 7 |
| Arizona | 5 | 5 |
| Arkansas | 0 | 0 |
| California | 448 | 188 |
| Colorado | 26 | 26 |
| Connecticut | 10 | 10 |
| Delaware | 0 | 0 |
| District of Columbia | 0 | 0 |
| Florida | 197 | 83 |
| Georgia | 0 | 0 |
| Hawaii | 0 | 0 |
| Idaho | 26 | 26 |
| Illinois | 26 | 26 |
| Indiana | 68 | 29 |
| Iowa | 96 | 40 |
| Kansas | 40 | 17 |
| Kentucky | 60 | 25 |
| Louisiana | 41 | 17 |
| Maine | 5 | 5 |
| Maryland | 20 | 20 |
| Massachusetts | 58 | 24 |
| Michigan | 106 | 45 |
| Minnesota | 84 | 35 |
| Mississippi | 18 | 18 |
| Missouri | 43 | 18 |
| Montana | 8 | 8 |
| Nebraska | 8 | 8 |
| Nevada | 17 | 17 |
| New Hampshire | 5 | 5 |
| New Jersey | 18 | 18 |
| New Mexico | 48 | 20 |
| New York | 326 | 137 |
| North Carolina | 0 | 0 |
| North Dakota | 7 | 7 |
| Ohio | 215 | 90 |
| Oklahoma | 121 | 51 |
| Oregon | 45 | 19 |
| Pennsylvania | 417 | 175 |
| Puerto Rico | 0 | 0 |
| Rhode Island | 0 | 0 |
| South Carolina | 5 | 5 |
| South Dakota | 30 | 30 |
| Tennessee | 30 | 30 |
| Texas | 144 | 60 |
| Utah | 0 | 0 |
| Vermont | 2 | 2 |
| Virginia | 21 | 21 |
| Washington | 43 | 18 |
| West Virginia | 13 | 13 |
| Wisconsin | 15 | 15 |
| Wyoming | 5 | 5 |
Source: Certified data sent from the U.S. Department of Education to NDTAC, May 2015.
Hybrid allocation will also be reasonably straightforward to implement as long as basic counts of the number of local coordinators per State and stratum (N, short-term D, long-term D) are provided by SEAs. This will allow for determination of the distribution of local coordinators by stratification variables, and thus the number of local coordinators to sample per State, without needing to have detailed coordinator lists from all States. It will permit sampling and the survey administration to occur in waves.
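The following is a minimal sketch of the hybrid allocation logic described above, assuming only a dictionary of estimated local coordinator counts per State (illustrated here with a small subset of the counts in Exhibit B.1). The certainty threshold of 30 comes from the text; the overall target size and the rounding rule shown here are illustrative and may differ slightly from the procedure used to produce the exhibit.

```python
# Minimal sketch of hybrid allocation: States with 30 or fewer estimated local
# coordinators are sampled with certainty; the remaining sample is allocated
# proportionally across the larger States.
coordinator_counts = {          # illustrative subset of Exhibit B.1
    "California": 448, "Florida": 197, "Idaho": 26,
    "New York": 326, "South Dakota": 30, "Texas": 144,
}

def hybrid_allocation(counts, target_total, certainty_threshold=30):
    certainty = {s: n for s, n in counts.items() if n <= certainty_threshold}
    remainder = {s: n for s, n in counts.items() if n > certainty_threshold}
    remaining_target = target_total - sum(certainty.values())
    fraction = remaining_target / sum(remainder.values())   # proportional sampling rate
    allocation = dict(certainty)                             # censused States
    allocation.update({s: round(n * fraction) for s, n in remainder.items()})
    return allocation

# Illustrative target for this subset only (not the study's full 1,400+ target).
print(hybrid_allocation(coordinator_counts, target_total=520))
```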
The main advantage of the hybrid allocation approach relative to equal allocation is that it yields more precise estimates for large States: for a large State such as California, proportions can be estimated to within 0.0715 with 95% confidence, compared to 0.136 under equal allocation (Exhibit B.2). An advantage relative to proportional allocation is that a small State such as Idaho can be estimated with no sampling error. Finally, the precision of national-level estimates is best under the hybrid approach, which allows proportions to be estimated at the national level with 95% confidence to within 0.0158, compared to 0.0214 for equal allocation and 0.0200 for proportional allocation. The corresponding confidence intervals are provided in Exhibit B.3.
Hybrid allocation allots a larger sample than equal allocation to the following six large States: California, Florida, New York, Ohio, Pennsylvania, and Texas.
Exhibit B.2. Estimated Precision of Estimates, by Sampling Approach
| Estimate | Equal Allocation | Proportional Allocation | Hybrid Allocation |
| --- | --- | --- | --- |
| National estimate precision | 0.0214 | 0.0200 | 0.0158 |
| Large-State estimate precision (CA) | 0.136 | 0.0665 | 0.0715 |
| Small-State estimate precision (ID) | 0.000 | 0.272 | 0.000 |
Exhibit Note: The estimated precision shown in this exhibit is for estimating proportions of 0.50 with 95% confidence (the proportion of “yes” answers to a yes/no question).
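As a point of reference, the standard large-sample margin of error for an estimated proportion under simple random sampling within a group is shown below. This is a reconstruction offered for orientation only; the exhibit values may additionally reflect finite population corrections, expected response rates, or other design assumptions.

\[
m \;=\; 1.96\,\sqrt{\frac{p\,(1-p)}{n}}, \qquad p = 0.50,
\]

where \(n\) is the number of sampled coordinators in the group of interest. For example, under hybrid allocation California's sample of 188 coordinators gives \(1.96\sqrt{0.25/188} \approx 0.0715\), in line with the corresponding value in Exhibit B.2, while a State sampled with certainty (such as Idaho under equal or hybrid allocation) has no sampling error, so its margin is zero.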
Exhibit B.3. Confidence Intervals for Estimates of Proportions p, by Proposed Sampling Approach
| Estimate | Equal Allocation | Proportional Allocation | Hybrid Allocation |
| --- | --- | --- | --- |
| National estimate confidence interval | p ± .042 | p ± .039 | p ± .031 |
| Large-State estimate confidence interval (CA) | p ± .267 | p ± .130 | p ± .140 |
| Small-State estimate confidence interval (ID) | p ± 0 | p ± .533 | p ± 0 |
Exhibit Note: The confidence intervals shown in this exhibit are for estimating proportions of 0.50 with 95% confidence (the proportion of “yes” answers to a yes/no question).
A disadvantage of this approach is that the total count of local coordinators for each state must be received before stratification and sampling can be done. The study team must collect these counts from SEAs before sampling can begin, but simple counts are sufficient; no contact information for local coordinators is required in order to begin the selection process.
The selection of a purposive sample of five states for the case studies will involve the following three phases:
Phase 1. Identifying state and local program structures and populations served and reviewing key services and strategies implemented in each state. The study will examine state-specific information available from Title I, Part D data reported in the federal Consolidated State Performance Report (CSPR) and, as needed, information from reports prepared by NDTAC in support of ED Part D program monitoring. For each state, the study team will record in a matrix variables such as state demographics, subgrantee types and characteristics (e.g., state or local, neglected or delinquent), and the number and demographics of children and youth participating in Part D–funded programs to be used as selection criteria. The goal is to capture a wide range of state Title I, Part D policies, program features, and participant characteristics.
Phase 2. Utilization of technical working group (TWG) expertise. The study team will present the data matrix of state-specific information (see Phase 1) and request TWG members’ additional expert feedback on each state’s appropriateness as a case study site. Based on the TWG’s guidance and ED’s recommendations, the list of potential states will be narrowed to 10. The key criteria for developing the list are:
Part D program sufficiently scaled within the state (i.e., state that has a minimum of six subgrantees: two school districts, two local correctional institutions, and two child welfare facilities that are providing services to N or D youth in the state).
Diverse Part D program service populations (e.g., American Indian/Alaska Native, Hispanic)
Institutionwide Part D projects implemented in the state
In collaboration with the TWG and ED, the case study team will then rank-order these final candidate states and submit a final list of recommendations to ED.
Phase 3. Final selection. Finally, the study team will work with ED to select five states that, as a group, represent a range of implementation strategies and practices, various regions of the country, and diverse service populations, with a priority on capturing the variation in approaches to implementing Part D–funded programs and to meeting the needs of students, in order to ensure rich findings.
In an effort to minimize costs and take advantage of the data quality benefits of using Web surveys,2 the survey will start with a Web-only approach for both the state and local coordinators. State coordinators are accustomed to reporting data electronically, so they will not be offered a paper questionnaire. Local coordinators, who are anticipated to be more difficult to reach, will be mailed a paper questionnaire after the second e-mail reminder. Staff will conduct telephone nonresponse follow-up with both state and local coordinators who do not respond to the electronic solicitations, asking them to complete the survey by telephone at the end of the data collection period.
Exhibit B.4 outlines the different modes and the sequence in which they will be offered throughout the data collection process.
Exhibit B.4. Data Collection Modes
| Materials | State Coordinator Survey: Web | State Coordinator Survey: Telephone | Local Coordinator Survey: Web | Local Coordinator Survey: Mail/Paper | Local Coordinator Survey: Telephone |
| --- | --- | --- | --- | --- | --- |
| Initial survey invitation with questionnaire | X | | X | | |
| First reminder | X | | X | | |
| Second reminder | X | | X | | |
| Third reminder | X | | | X | |
| Fourth reminder | | | X | | |
| Targeted nonresponse follow-up phase | | X | | | X |
The study team anticipates using the DatStat Illume software package to program and administer surveys and to track and manage respondents. Illume can accommodate several survey formatting procedures and easily manage the number of cases included in this study. Respondents will be able to access the survey landing page through a specified website and will enter their assigned unique user ID to access the survey questions. Staff will monitor online responses in real time and will enter any completed paper questionnaires into a case management database as they are received. This tracking of completed surveys in the case management database will provide an overall, daily status of the project’s data collection efforts. The study team also will operate a telephone and e-mail Help Desk to assist respondents who are having any difficulties with the survey.
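As an illustration only, the daily status tally described above might resemble the following sketch, which assumes a simple case management extract with one record per sampled coordinator. The file name, field names, and status values are hypothetical and do not represent DatStat Illume’s actual interface or data model.

```python
import csv
from collections import Counter

def daily_status_summary(case_file):
    """Tally survey completion status from a hypothetical case management extract.

    Assumes a CSV with columns: case_id, coordinator_type, mode, status
    (e.g., status in {"complete", "partial", "not_started", "refused"}).
    """
    with open(case_file, newline="") as f:
        rows = list(csv.DictReader(f))
    by_status = Counter(r["status"] for r in rows)
    by_type = Counter((r["coordinator_type"], r["status"]) for r in rows)
    total = len(rows)
    completes = by_status.get("complete", 0)
    print(f"Cases: {total}  Completes: {completes}  "
          f"Response rate: {completes / total:.1%}")
    for (ctype, status), n in sorted(by_type.items()):
        print(f"  {ctype:<18} {status:<12} {n}")

# Example usage (hypothetical file name):
# daily_status_summary("case_management_extract.csv")
```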
Onsite interviews for the case studies will include state-level staff and staff from six local sites per state, a total of approximately 30 state and local staff per case study state.
Request for Administrative Documents. The study team developed a form/checklist for use in requesting and reviewing administrative documents at the case study sites. This form can be found in Appendix F. Documents of interest include school planning documents, written plans for institutionwide Part D projects (IWPs), documents that reflect the distribution of N or D funds across the state, tracking systems or forms used to follow students who have transitioned back to regular schools or completed high school, and tracking systems or forms used to assess students’ academic progress and other educational outcomes. The documents requested will be those that are readily available without any preparation on the part of study participants, and the request will likely be incorporated into the site visit, adding no time or effort to the respondent burden.
Feedback from the TWG and other content experts suggested that case study data collection begin two months after the start of the state coordinator survey. The preliminary survey data will help identify which items in the site visit protocols are of most interest for obtaining a more in-depth, qualitative view of the issues. Thus, findings from the state and local coordinator surveys will be used to target on-site data collection on topics where clarification or confirmation of survey findings is needed, in addition to gathering in-depth insights that directly address key evaluation questions.
The data collection for the case studies will begin in February 2017 and will continue through May 2017. The study team will develop the individual site visit schedules in concert with the appropriate staff in the selected SEAs, SAs, school districts, and facilities. The same team members will be responsible for scheduling and conducting the visits, thus developing rapport with staff at each site. The study team members will work with state and local agency and facility contact persons to determine staff to include in the interviews. The types of staff to be included as respondents include program administrators, instructional personnel, counselors, and others who provide educational and transition services to children and youth in correctional and child welfare facilities.
Two members of the study team (a senior and a junior staff member) will conduct a site visit to each of the case study states. Each site visit will last approximately five days, with two days at the state level and three days at the local level, including travel. Each site visit will commence with data collection at the SEAs and SAs.
The study team expects to collect most of the data through individual interviews on site. However, in some instances, such as in facilities with a large number of staff involved with educational and transition services, group interviews that focus on key questions from the appropriate interview protocols may be considered to supplement individual interviews. The team will employ a systematic approach in all data collection interviews, adhering to the standardized protocol questions but supplemented, where appropriate, with relevant probes that arise from each participant’s responses. All interviews will be audio recorded. Visits also will include collection and review of relevant documents and materials such as planning documents, documents that reflect Part D funding distribution, and student tracking systems or forms.
Throughout the data collection process, the study team will employ quality control procedures, including weekly meetings to debrief on-site visits to identify issues with logistics and data collection protocols and make adjustments to data collection as necessary. The team also will maintain a formal tracking system to ensure that data are collected from all necessary respondent groups from each case study site.
Data collection is a complex process that requires careful planning. The team has developed interview protocols and survey instruments that are appropriately tailored to each respondent group and are designed to place as little burden on respondents as possible. The team will use cognitive interviews with Title I, Part D program coordinators to pilot the survey data collection instruments and ensure that they are user friendly and easily understandable, which increases willingness to participate in the data collection activities and thus increases response rates.
Recruitment activities will not begin until OMB has approved the data collection. Recruitment materials will include ED’s endorsement of the study. The materials will emphasize the social incentive to respondents by stressing the importance of the data collection for improving implementation of N or D programs nationwide. In addition to careful wording of the recruitment materials, state and local coordinators will be offered varied and sequenced options for completing and submitting the survey, because using a mixed-mode approach increases survey response rates. The contractor has, in recent years, achieved response rates greater than 80 percent by carefully sequencing survey modes of administration. Both state and local coordinators will first be offered the option to complete the survey online. State coordinators who do not respond online will then be offered the option of completing the survey by telephone. Local coordinators who do not respond online will be sent a paper version of the questionnaire to complete and return by U.S. mail.
To ensure the quality of the data collection instruments, the study team obtained feedback from several content experts, including TWG members. The feedback helped inform the organization and content of the interview questions. In addition, the study team conducted two cognitive interviews for each of the four questionnaires. After the cognitive testing, the study team revised the instruments. Key revisions included:
All Questionnaires
Instances of “this agency” changed to “your agency” throughout
References to “youth” changed to “children and youth” per statute
References to ESEA replaced with ESSA
SEA Questionnaire
Added question about coordinator’s percentage of time spent working on Title I, Part D
B6: changed “priority” scale to “focus” scale to be consistent with other questionnaires
Added question in section B (after B4) asking about the types of activities LEA coordinators do to support program implementation
C8: clarified process data and outcome data with additional parenthetical examples
SA Questionnaire
Reworded A10 to clarify we are referring to FTEs (including partial FTEs) and added examples of instructional staff to whom the question applies. Clarified that this question refers to all staff in the facility or program.
Reworded A11 to clarify we are referring to FTEs (including partial FTEs) and added examples of support services staff. Clarified that this question refers to all staff in the facility or program.
Under “Institution-wide Programming” in Section C, noted that a skip should be added if respondent answered A2 as anything other than “juvenile corrections”
Added question to address reasons why SAs carry over Title I funds from the previous year
Question about process/outcome data (E8) matches C8 in SEA questionnaire
LEA Questionnaire
Added question in section B (after B3) asking about the types of activities LEA coordinators do to support program implementation
B11: Changed format into grid, to address staff recruitment (direct hiring for Title I-D) vs staff assignment (employees assigned to Title I-D though they may have other projects or responsibilities in the LEA)
Added question to address reasons why LEAs carry over Title I funds from the previous year
Question about process/outcome data (C9) matches C8 in SEA questionnaire
LFP Questionnaire
Added question to address reasons why LFPs carry over Title I funds from the previous year
C12: added an indicator for math and reading
C13: split into two questions to capture both reading and math
AIR is the contractor for the Evaluation of the ESSA Title I, Part D Neglected or Delinquent Programs. The project director is Jennifer Loeffler-Cobia, who is supported by an experienced team of educators leading the major tasks of the project (see Exhibit B.5 for a list of key staff, responsibilities, and contact information).
During data collection and particularly during the initial phase of analysis, the contractors will draw on the cross‑staffing of some key members of the project, including the project director, deputy project director, and team leaders.
Exhibit B.5. Organizations, Individuals Involved in Project
| Responsibility | Organization | Contact Name | Telephone Number |
| --- | --- | --- | --- |
| Project Director | AIR | Jennifer Loeffler-Cobia | (202) 403-6668 |
| Deputy Project Director | AIR | Nicholas Read | (202) 403-5354 |
| Special Advisor | AIR | Dr. Kerstin Carlson Le Floch | (202) 403-5649 |
| Special Advisor | AIR | Patricia Campie | (202) 403-5441 |
| Special Advisor | AIR | Dr. Sandy Eyster | (202) 403-6149 |
| Literature Review and Extant Data Analysis Task Lead | AIR | Nicholas Read | (202) 403-5354 |
| Case Studies Task Lead | James Bell Associates (JBA) | Dr. Pirkko Ahonen | (703) 528-3230 |
| Survey Task Lead | AIR | Kathy Sonnenfeld | (202) 403-6444 |
1 Some coordinators may work with both N and D programs. For such coordinators, the study team proposes working with the SEA to determine which type of program the coordinator works with more often. In cases where it is not feasible to determine this, the study team will assign the coordinator to the program type that is less common in the population.
2 Couper, M. P. 2000. “Web Surveys: A Review of Issues and Approaches.” Public Opinion Quarterly, 64 (4): 464–494.