OMB: 1850-0916


DRAFT REPORT

Supporting Statement for the Paperwork Reduction Act: Submission for the Evaluation of Preschool Special Education Practices, Phase I – PART B

November 24, 2014


Submitted to:

Institute of Education Sciences

555 New Jersey Ave., NW, Room 502J

Washington, DC 20208

Project Officer: Yumiko Sekino
Contract Number: ED-IES-14-C001

Submitted by:

Mathematica Policy Research
600 Alexander Park, Suite 100

Princeton, NJ 08540
Telephone: (609) 799-3535
Facsimile: (609) 799-0005

Project Director: Cheri Vogel
Reference Number: 40346.052

CONTENTS

SUPPORTING STATEMENT PART B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

B1. Respondent universe and sampling methods

B2. Procedures for collection of information

B3. Methods to maximize response rates and deal with nonresponse

B4. Tests of procedures or methods to be undertaken

B5. Individuals consulted on statistical aspects of the design and on collecting and/or analyzing data



TABLES

B.1. Precision of district- and state-level point estimates from proposed sample design



APPENDICES

Appendix A Mathematica Confidentiality Pledge

Appendix B Advance letter to the Chief State School Officer

Appendix C Advance letter to the State Section 619 coordinator

Appendix D Advance letter to the district special education coordinator

Appendix E State Section 619 Coordinator Survey Instrument

Appendix F District Preschool Special Education Coordinator Survey Instrument

Supporting Statement Part B. Collection of Information Employing Statistical Methods

The U.S. Department of Education (ED) is requesting Office of Management and Budget (OMB) approval for survey data collection as part of the Evaluation of Preschool Special Education Practices, Phase I. The main objective of the Phase I study is to assess the feasibility of conducting a large-scale randomized controlled trial (RCT) evaluation of one or more curricula or interventions that are used with preschool children with disabilities to promote their learning of language, literacy, social-emotional skills, and/or appropriate behavioral skills for school. The secondary objective of the Phase I study is to provide educators and policymakers with nationally representative descriptive information about current preschool special education programs. If the RCT is deemed feasible and ED decides to exercise the Phase II option, a separate OMB package will be submitted for the RCT.

The feasibility assessment will consider the core features of an evaluation design, including the following:

  • Curricula and/or interventions to be evaluated

  • Study context and participants

  • Key design elements, such as the counterfactual condition, unit of assignment, target minimum detectable effects (MDEs), sample size, and data collection plans

Data to inform the feasibility assessment will be obtained through an evidence review, extant data collection, and surveys of school district preschool special education coordinators and state Section 619 coordinators who administer Individuals with Disabilities Education Act (IDEA) programs at the state level (the surveys are the subject of the current submission). The evidence review will identify promising curricula and interventions for preschool children with disabilities and key features of their implementation in schools. Extant and survey data will provide information about preschool special education programs and the curricula and interventions they make available and support, which will help identify potential target districts for a possible RCT evaluation. Specifically, extant and survey data will describe the context in which curricula and interventions are delivered and will provide information for decisions about key design elements. Mathematica Policy Research will collect the extant data; survey respondents will not be asked to provide or confirm those data.

The study’s overarching research question is whether there are promising curricula and interventions for preschool children with disabilities for which a large-scale effectiveness trial would be feasible and add value to the field. The survey data collection will address the following ten specific research questions that represent critical information gaps for making a feasibility assessment (that is, needed information that cannot be obtained through either the evidence review or extant data):

  1. What curricula and interventions are available and supported for use with preschool children with disabilities to promote learning of language, literacy, social-emotional skills, and/or appropriate behavioral skills for school?

  2. How are decisions to adopt curricula and interventions made?

  3. What agencies, programs, and settings serve preschool children with disabilities?

  4. What is the structure of programs that serve preschool children with disabilities?

  5. What resources support providing services to preschool children with disabilities?

  6. What are the characteristics of staff who deliver services to preschool children with disabilities?

  7. What are turnover rates for staff who deliver services to preschool children with disabilities?

  8. What are eligibility rules for preschool special education curricula and interventions?

  9. What are the enrollment characteristics of preschool children with disabilities and classrooms that include these children?

  10. What curricula and interventions for children ages 3 to 5 with disabilities might be suitable for study in a large-scale RCT?

Preschool special education coordinators in school districts and state Section 619 coordinators will provide information to address all but the last question, which will be addressed through the evidence review. The district survey will be administered in a nationally representative sample of 1,200 school districts serving preschool children with disabilities. It will be administered as a 60-minute web survey. The state survey will be administered in all 50 states and the District of Columbia as a 30-minute editable PDF survey. Data collection for each survey will begin in April 2015.

Information obtained as part of the data collection for Phase I will be used to develop a publicly available report for a wide audience of policymakers and educators. If the Institute of Education Sciences (IES) decides to sponsor an RCT following the feasibility assessment, the project team will conduct recruitment and random assignment and will work with curriculum/intervention developers to train treatment group teachers. Mathematica will submit a separate OMB package requesting clearance for data collection activities for the impact study, which would be completed under Phase II of the Evaluation of Preschool Special Education Practices.

B1. Respondent universe and sampling methods

Phase I of the Evaluation of Preschool Special Education Practices serves two main purposes: (1) assessing the feasibility of conducting an RCT evaluation of a curriculum or intervention for preschool children with disabilities in Phase II and (2) providing nationally representative estimates of current preschool special education programs. To collect data for Phase I, we will survey Section 619 coordinators in all states plus the District of Columbia and a national probability sample of district preschool special education coordinators. We will sample 1,200 school districts (or equivalent administrative units governing special education in an area) from among all public school districts in the United States serving children ages 3 to 5 with disabilities under Part B of IDEA. All public school districts in the United States (50 states plus the District of Columbia), including charter schools, will be eligible for selection if they serve at least 10 preschool children with disabilities. We will use as the sampling frame the EDFacts district-level file from 2012–13, which contains counts of children ages 3 to 5 with disabilities served under Part B of IDEA. Additional information from the Common Core of Data (from the National Center for Education Statistics), such as urbanicity and the percentage of children in the district eligible to receive free or reduced-price meals, will be merged onto the EDFacts data. We will select the districts using a stratified random sample, described in more detail in Section B2.
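The frame-building steps can be illustrated in a few lines. The sketch below is a minimal Python example; the file and column names (leaid, idea_child_count_3_5, and so on) are hypothetical placeholders, since the actual EDFacts and CCD extracts use different layouts.

```python
import pandas as pd

# Hypothetical file and column names; this only illustrates the steps
# of merging CCD characteristics onto the EDFacts counts and applying
# the eligibility screen.
edfacts = pd.read_csv("edfacts_districts_2012_13.csv")  # one row per district
ccd = pd.read_csv("ccd_districts_2012_13.csv")          # urbanicity, poverty, etc.

# Merge CCD characteristics onto the EDFacts frame by NCES district ID.
frame = edfacts.merge(ccd, on="leaid", how="left")

# Restrict the frame to districts serving at least 10 preschool children
# (ages 3 to 5) with disabilities under Part B of IDEA.
frame = frame[frame["idea_child_count_3_5"] >= 10]
```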

B2. Procedures for collection of information

a. Statistical methods for stratification and sample selection

State sample. The state sample will include the Section 619 coordinators in all 50 states and the District of Columbia. Our study design assumes a very high response rate for this survey, and we do not plan to construct weights to account for any state-level nonresponse that may occur.

District sample. We will select the 1,200 districts from the EDFacts district-level file using a stratified random sampling approach, with the goal of obtaining information from approximately 1,020 district coordinators (assuming an 85 percent response rate). Before sampling, we will exclude from the EDFacts file any districts not serving at least 10 preschool children (ages 3 to 5) with disabilities under Part B of IDEA (we expect this to be approximately 4,800 districts, or 37 percent of all districts, but serving only 3 percent of all preschool children with disabilities). We will use sampling strata to control the distribution of characteristics in the sample with respect to the district's number of preschool children with disabilities, categorized into three groups. We plan to undersample districts with the fewest such children. This undersampling will allow us to make national estimates that include such districts (using appropriate weights), while reserving more resources to collect data from districts that are more likely to be candidates for the possible RCT in Phase II. We also plan to explicitly stratify by district location (census region) and sociodemographics (percentage of racial/ethnic minorities) and implicitly stratify by another indicator of location (level of urbanicity) and by poverty (categorized percentage of children in the district eligible to receive free or reduced-price meals).

Sampling and weighting approach. We will first divide all eligible districts into three strata based on the proportion of all preschool children with disabilities they represent. The first stratum will contain the 215 districts with the highest numbers of preschool children with disabilities, comprising 30 percent of all such children in the U.S.; the second stratum will contain the 957 districts with the next-highest numbers, comprising another 30 percent; and the third stratum will contain the remaining 7,141 districts. We currently plan to select 100 percent of the districts in the first stratum, 50 percent of the districts in the second stratum, and about 7 percent of the districts in the third.

Within the district-level size categories based on the number of preschool children with disabilities, the allocation across census region and racial/ethnic categories will be proportional to the number of districts in the corresponding population strata. Urbanicity and poverty will be used for implicit stratification, in which the sample frame is sorted by one or more variables before sampling to help make the distribution of the sample resemble the frame with respect to those characteristics (although it does not ensure this). We will use the sequential sampling technique in SAS to select the sample of districts. After data collection, we will construct analysis weights that account for the probability of selection and adjust for differential patterns of nonresponse.
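To make the mechanics of implicit stratification concrete, the sketch below selects districts within each explicit size stratum after sorting on the implicit stratifiers. It is a simplified Python analogue of the sequential selection we will run in SAS, not the production procedure; it continues the illustrative `frame` data set above, the column names are placeholders, and the explicit stratification by census region and racial/ethnic composition is omitted for brevity.

```python
import numpy as np
import pandas as pd

# Assign the three explicit size strata (1 = districts with the most
# preschool children with disabilities), using the cutoffs described above.
rank = frame["idea_child_count_3_5"].rank(ascending=False, method="first")
frame["size_stratum"] = pd.cut(rank, bins=[0, 215, 1172, len(frame)],
                               labels=[1, 2, 3]).astype(int)

def systematic_sample(stratum: pd.DataFrame, rate: float, rng) -> pd.DataFrame:
    """Take every (1/rate)-th district after sorting by the implicit
    stratifiers, so the sample spreads across their categories."""
    ordered = stratum.sort_values(["urbanicity", "poverty_category"])
    step = 1.0 / rate
    picks = np.arange(rng.uniform(0, step), len(ordered), step).astype(int)
    return ordered.iloc[picks]

rng = np.random.default_rng(seed=20150401)
rates = {1: 1.00, 2: 0.50, 3: 0.07}  # planned selection rates by size stratum
sample = pd.concat(systematic_sample(g, rates[s], rng)
                   for s, g in frame.groupby("size_stratum"))
```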

In addition to district-level weights, we will produce corresponding child-level weights, in which the nonresponse-adjusted district weight is multiplied by the district's number of preschool children with disabilities. The district weights can be used to make estimates such as, “X percent of districts use the Y approach.” The child-level weight can be used to make estimates such as, “Z percent of preschool children with disabilities are in districts that use the Y approach.” A similar child-level weight can be constructed for responses to the state-level survey, which can be used to make estimates such as, “Z percent of preschool children with disabilities are in states that have a Y policy,” compared to the unweighted state-level estimates such as, “X percent of states have a Y policy.”
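The weighting steps can be sketched similarly. Continuing the illustrative `sample` and `rates` objects above, and assuming a hypothetical responded flag recorded after data collection: the base weight is the inverse of the stratum's selection rate, the nonresponse adjustment redistributes nonrespondents' weight to respondents within weighting cells (here, simply the size strata), and the child-level weight scales the adjusted district weight by the district's child count.

```python
# Base weight: inverse of the selection probability for the district's stratum.
sample["district_weight"] = sample["size_stratum"].map(
    {s: 1.0 / r for s, r in rates.items()}
)

# Nonresponse adjustment within cells: inflate respondents' weights so they
# also represent the nonrespondents in their cell.
for s, g in sample.groupby("size_stratum"):
    r = g["responded"]
    adj = g["district_weight"].sum() / g.loc[r, "district_weight"].sum()
    sample.loc[g.index[r.values], "nr_weight"] = g.loc[r, "district_weight"] * adj

# Child-level weight: adjusted district weight times the district's count
# of preschool children with disabilities.
sample["child_weight"] = sample["nr_weight"] * sample["idea_child_count_3_5"]
```

District weights support statements about districts; child-level weights support statements about the children those districts serve.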

b. Notification of sample members

Introduce the study to state education agencies. We will begin by sending the chief state school officer a notification letter (see Appendix B) explaining the study, the importance of the state’s involvement, and participation requirements. States and districts have an obligation to participate in ED evaluations (Education Department General Administrative Regulations (EDGAR) (34 C.F.R. § 76.591)). We will also send the state Section 619 coordinator an advance letter explaining the study (see Appendix C). We will email the Section 619 coordinator an editable PDF of the state instrument, along with a password to access the encrypted survey and instructions on how to complete and submit it (draft surveys are included in Appendices E and F).

Project staff will monitor completion rates, review the instruments for completeness throughout the field period, and follow up by email and telephone as needed to answer questions, assist with any technical issues, and encourage completion. The instrument will, on average, require 30 minutes to complete. Researchers knowledgeable about the content area will review the completed questionnaire and documentation downloaded from state and other extant data sources for completeness. We will then conduct follow-up calls with states to clarify any ambiguous answers in the survey and respond to their questions. All state noncompleters will also receive a follow-up call to give them the option of completing the survey over the phone.

Introduce the study to district preschool special education coordinators. We will send notification letters (see Appendix D) by email and mail to sampled districts. Notification letters will be on ED letterhead and will inform districts of the study’s importance and benefits and of their responsibility to respond. Sending both email and mail will increase the likelihood that addressees receive our communications in a timely manner. All district noncompleters will also receive a follow-up call to offer assistance with the web survey and to give them the option of completing the survey over the phone.

c. Who will collect the data and how it will be done

Administer surveys. We will check district websites to determine which sampled districts require research applications before district staff may participate; we have extensive experience completing such district-required applications. Once we obtain district approval to begin data collection, we will email the district preschool special education coordinator. Mailed letters will include the survey URL and login information for district preschool special education coordinators responding to the survey as a web-based instrument. The Section 619 coordinators will be emailed an encrypted electronic editable PDF.

All communications will include a toll-free study number and a study email address for respondents’ questions and technical support. We will assign several trained research staff to answer the study hotline and reply to emails in the study mailbox. We will train them on the purpose of the study, the obligations of respondents to participate in the evaluation, and the details for completing the survey. Content questions will be referred to the study leadership. An internal FAQ document will be developed and updated as needed throughout the course of data collection to ensure that the research staff have the most current information on the study.

We have found that state and district personnel prefer completing surveys either through an electronic format or a web-based approach. The encrypted electronic editable PDF will be our primary method of data collection for the Section 619 coordinators. The web will be our primary method of data collection for the district preschool special education coordinator. Mathematica will use our survey management system to track the sample for each instrument, record the status of district approvals, generate materials for mailings, and monitor survey response rates. In the event that a state or district does not respond to the survey within four weeks of the survey launch, we will begin telephone calls. In those calls, we will first remind respondents about completing the self-administered instrument and determine if assistance is needed. We will offer to complete the survey with the respondent by telephone, entering the information into either an editable PDF or into the web survey as appropriate. The brevity of both state and district surveys makes a telephone completion of the entire survey feasible.

d. Estimation procedures

Data collected from the study’s surveys and information gathered from the evidence review and extant sources will primarily be used to assess the feasibility of conducting an RCT. Because the survey data we plan to collect are likely to be of broader interest to the field, we will also produce a descriptive report on the findings from the surveys. We discuss our estimation procedures for each of these purposes in turn.

1. Estimation procedures for assessing RCT feasibility

To assess the feasibility of conducting an RCT, we will develop an evaluation design report that describes the core features of three potential design options. These features include the characteristics of candidate interventions; study context and participants; counterfactual condition; key design elements, such as the unit of assignment, target minimum detectable effects (MDEs), sample size, and types of data collection; and training. Below, we describe each of these core features and list types of key decisions that our design report will address. We describe the estimation procedures we will use with extant data and data collected from state questionnaires and district surveys. We also describe how information from the evidence review will be used.

Candidate interventions. Identifying the intervention is related to other design issues, such as (1) substantive focus in terms of outcomes and disabilities; (2) the study settings; and (3) implementation intensity, duration, and cost.

  • Decisions for evaluation options: How narrowly or broadly the intervention is defined in terms of intended outcomes and disabilities; whether to prioritize interventions that are more widely used or ones with more evidence of effectiveness from smaller-scale studies; how difficult it would be for schools to implement the intervention after the evaluation ends.

  • Estimation procedures using extant data, state questionnaires, and/or district surveys: We will use descriptive statistics (means and sample proportions) to describe how children are being served in agencies, settings, and program structures; the prevalence of types of interventions; the resources available to districts to support intervention costs; and how districts currently support implementation of similar interventions.

  • How evidence review informs decisions: Identifies replicable interventions with promise for improving outcome measures in the domains of interest; indicates the settings where they have been implemented; and describes training and implementation requirements and costs.

Study context and participants. An evaluation must identify the implementation setting and the target population of programs, schools, teachers, and children to be studied.

  • Decisions for evaluation options: Setting(s) in which the intervention should be implemented (for example, self-contained and/or integrated classrooms in public schools and/or community-based programs); whether the regular classroom teacher or other staff (such as a special education teacher, paraprofessional, or specialist) should implement the intervention; whether to include all preschool children in a classroom or only children with certain disabilities; whether to include the entire 3- to 5-year-old age range or focus on a particular level (such as prekindergarten); whether to target children actually identified for special education or establish a high-risk designation in the study using a baseline assessment1 and also include those children in the study.

  • Estimation procedures using extant data, state questionnaires, and/or district surveys. We will use descriptive statistics (means and sample proportions) to describe how children are being served in programs and schools; the size of preschool programs; the qualifications of preschool program staff; and variation in special education eligibility requirements.

  • How evidence review informs decisions. Describes implementation settings and characteristics of the population the intervention is intended to serve.

Counterfactual condition. Impacts are interpreted relative to a counterfactual condition, making the characteristics of the counterfactual important for ensuring a meaningful contrast.

  • Decisions for evaluation options. Whether to compare a single intervention with training and support to either a business-as-usual set of practices that does not include training and support or to a specific alternative intervention. If selecting several interventions, should they focus on outcome measures in the same or different domains?

  • Estimation procedures using extant data, state questionnaires, and/or district surveys. We will use descriptive statistics (means and sample proportions) to describe the prevalence of types of interventions, which will indicate which types are widely used and the districts where the counterfactual condition will be most distinct.

  • How evidence review informs decisions. Identifies whether interventions are associated with outcomes in one or more domains; provides detailed information regarding the conceptual model underlying the intervention, which is necessary for understanding what counterfactual condition provides a meaningful contrast to the intervention.

Unit of assignment. The unit of assignment is the level at which randomization is conducted; units are assigned to either the treatment group or the comparison group. The unit of assignment must be logically aligned with the setting in which the intervention is implemented. It also has implications for whether the control group can learn about, and possibly implement, aspects of the intervention, which would contaminate impact estimates.

  • Decisions for evaluation options. Whether to conduct randomization at the level of the school, classroom, staff, or student.

  • Estimation procedures using extant data, state questionnaires, and/or district surveys. We will use descriptive statistics (means and sample proportions) to describe the settings in which interventions seem to be commonly implemented. For example, schools could be the units if interventions are implemented by teachers connected to a single school. Staff could be the unit if interventions are implemented by itinerant staff traveling among schools providing direct services (for example, a speech and language therapist providing individual or small-group interventions).

  • How evidence review informs decisions. Identifies the setting in which the intervention is intended to be implemented.

Target MDE and size of the study sample. The appropriate MDE for an evaluation depends on the intervention and outcome of interest. The target MDE may be larger for interventions directly aligned with an outcome measure that is reliable and sensitive to change during the preschool year. Evaluations of interventions that are more expensive or more intensive to implement also may target larger MDEs to justify the higher cost. The target MDE has implications for the study sample because, in general, larger samples are needed to achieve smaller MDEs (see the stylized calculation after the list below).

  • Decisions for evaluation options: Whether to target a smaller MDE, knowing not only that evaluation costs are likely to be higher because larger samples are required, but also that the smaller detectable change in the specified outcome must be meaningful and predictive of positive outcomes; which types of outcomes to consider, knowing that relatively more costly outcomes (such as direct child assessments) tend to have less error and be more stable.

  • Estimation procedures using extant data, state questionnaires, and/or district surveys: We will use descriptive statistics (means and sample proportions) to describe the factors needed to select samples to achieve a target MDE, such as the number of schools per district, classrooms per school, and preschool children per classroom; and the number, age, and distribution of preschool children with disabilities in a district with different disability categories.

  • How evidence review informs decisions: Describes intervention effects found in previous studies and for which outcome measures; allows examination of how meaningful previously reported effects would be for policy.
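As a point of reference for these decisions, the stylized calculation below shows how the MDE shrinks with sample size under individual random assignment, using conventional 80 percent power and a 5 percent two-sided significance level. The function and its defaults are illustrative only, not the study's actual power analysis; in particular, it ignores clustering, which would be essential in the real design work.

```python
import math

def mde(n_total: int, p_treat: float = 0.5, r2: float = 0.0) -> float:
    """Stylized MDE in standard deviation units under individual random
    assignment: (z for 5 percent two-sided + z for 80 percent power)
    times the standard error of the impact estimate. Baseline covariates
    that explain outcome variance (r2 > 0) shrink the MDE."""
    se = math.sqrt((1 - r2) / (p_treat * (1 - p_treat) * n_total))
    return (1.96 + 0.84) * se

# Halving the target MDE roughly quadruples the required sample.
print(round(mde(1000), 3))  # ~0.177 standard deviations
print(round(mde(4000), 3))  # ~0.089 standard deviations
```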

Recruiting and data collection. For an option to be feasible, it must be possible to recruit districts, schools, teachers, and parents/children, and collect the necessary data for the evaluation.

  • Decisions for evaluation options: Balancing the amount of data collection with study objectives and costs; whether to have one or two years of implementation and/or follow-up; whether a two-year option should measure maintenance of effects for children or changes in the quality of implementation (fade-out versus improvements) with a year of experience.

  • Estimation procedures using extant data, state questionnaires, and/or district surveys: We will use descriptive statistics (means and sample proportions) to identify districts that meet study eligibility requirements and may be willing to participate.

  • How evidence review informs decisions: Indicates what kinds of outcome measures are sensitive to expected changes in response to the intervention and the length of implementation associated with changes.

Training and implementation. Training and professional development are necessary to implement an intervention as faithfully as possible. Feasible evaluation options must identify how training will be provided to the requisite sample, whether existing training materials are sufficient, and whether piloting of required training should be conducted before large-scale implementation.

  • Decisions for evaluation options: Whether to pilot the training; how to balance the intensity of training with cost; whether to conduct trainings at individual schools or centrally; whether the intervention developer should conduct trainings or if a train-the-trainer model is feasible.

  • Estimation procedures using extant data, state questionnaires, and/or district surveys: We will use descriptive statistics (means and sample proportions) to describe qualifications of staff delivering preschool services; whether staff turnover during the evaluation is likely to be a concern.

  • How evidence review informs decisions: Identifies whether training materials and replicable implementation procedures exist; training costs; qualifications of staff who have implemented the intervention previously; previous documentation on whether the intervention has been implemented with fidelity.

2. Estimation procedures for descriptive report of survey findings

In addition to informing the feasibility of conducting an RCT, the data collected by this study will be presented in a descriptive report. Descriptive findings will consist of means, proportions, and standard errors or confidence intervals that take into account the sampling design (for example, some findings may need to be weighted to provide nationally representative information). The report will include descriptions of the following:

  • Prevalence of curricula and interventions that are available and supported for use with preschool children with disabilities to promote learning of language, literacy, and/or social-emotional skills/behavior appropriate for school. These are the curricula and interventions that districts make available to teaching staff and support by providing resources such as training and materials.

  • How decisions to adopt curricula and interventions are made. We will report the extent to which states have approved lists of interventions and curricula, who decides which interventions and curricula to make available and support, and how much freedom preschool teachers have in selecting which interventions and curricula to use.

  • Agencies, programs, and settings that serve preschool children with disabilities. We will report which agencies (for example, school districts or community-based providers) deliver curricula and interventions, the number of children served in different programs (for example, the school district’s preschool program, Head Start centers), and the number of children served in different settings (for example, home, inclusive classroom, self-contained classroom).

  • Structure of programs that serve preschool children with disabilities. We will report the extent to which children receive services from classroom teachers versus specialists, how services are delivered (for example, small groups, individual pullout), the extent to which children are in full or partial inclusion, and the length of the preschool program day.

  • Resources that support implementation of curricula and interventions. We will report what training and professional development support is provided to teachers, what support the district provides for the purchase of curriculum/intervention materials, and whether funding is available for implementing promising new curricula or interventions.

  • Characteristics of staff who deliver services to preschool children with disabilities. We will report the number and qualifications of full-time equivalent staff working with preschool children with disabilities, the number of full-time equivalent early childhood teachers delivering services to children with disabilities, the qualifications required to teach preschool children with disabilities, turnover rates, and whether staff are unionized.

  • Enrollment in and eligibility for preschool special education services. We will report how states define eligibility for preschool special education programs, how many preschool children with disabilities are served overall, and how many children are served in school-based programs.

e. Degree of accuracy needed for the purpose described in the justification

Our study design assumes that all (or nearly all) state coordinators and 85 percent of sampled district coordinators will respond to the survey. This would result in 51 responding state coordinators and 1,020 responding district coordinators. Should either survey result in a response rate lower than 80 percent, we will conduct a nonresponse bias analysis to investigate patterns of nonresponse that may result in biased estimates, and whether such bias appears to have been mitigated by the weighting adjustments. This would be done by examining key characteristics (such as district size, number of children with special needs, and others that we will know about all districts) and comparing their distributions for responding and nonresponding district coordinators.
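If a nonresponse bias analysis is needed, the comparison is straightforward because frame characteristics are known for all sampled districts. A minimal sketch, assuming the illustrative `sample` data frame and `responded` flag from the Section B2 sketches (the column name in the usage example is a placeholder):

```python
import pandas as pd
from scipy import stats

def nonresponse_check(sample: pd.DataFrame, var: str) -> float:
    """Compare a frame characteristic across responding and nonresponding
    districts; a small p-value flags potential nonresponse bias that the
    weighting adjustments would need to mitigate."""
    resp = sample.loc[sample["responded"], var]
    nonresp = sample.loc[~sample["responded"], var]
    _, pval = stats.ttest_ind(resp, nonresp, equal_var=False)  # Welch's t-test
    return pval

# Example: nonresponse_check(sample, "idea_child_count_3_5")
```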

f. Unusual problems requiring specialized sampling procedures

There are no unusual problems that require specialized sampling procedures.

g. Use of periodic (less frequent than annual) data collection cycles to reduce burden

These data will only be collected once, during the spring of the 2014–15 school year.

B3. Methods to maximize response rates and deal with nonresponse

a. Methods to maximize response rates

We will work with states and school districts to explain the importance of this data collection effort and to make it as easy as possible to comply. For all respondents, a clear description of the study design, the nature and importance of the study, and the OMB clearance information will be provided.

For the states, relying whenever possible on administrative and extant data, and thereby limiting what is asked of state representatives, will encourage cooperation with evaluation efforts. One of the study’s technical work group members is a Section 619 coordinator, and she has offered to provide information about the surveys in upcoming meetings to encourage completion at both the state and district levels. We will provide all state Section 619 coordinators with a list of sampled districts in their state to enlist their aid in maximizing district response rates.

We will initiate several forms of follow-up contact with state or district respondents who have not responded to the survey. We will use a combination of reminder postcards, emails, and follow-up letters to encourage respondents to complete the surveys. The project management system developed for this study will be the primary tool for monitoring whether surveys have been initiated. After seven days, we will send an email message to all nonrespondents indicating that we have not received a completed survey and encouraging them to submit one soon. A phone survey option will be offered as part of the nonresponse follow-up effort. After four weeks, we will call state and district nonrespondents to determine if assistance is needed to complete the web or editable PDF instrument. As part of that call, or future follow-up calls, we will offer to complete the survey by telephone. For state respondents, that would mean our filling out the editable PDF for them; for district respondents, it would mean logging in to complete the web survey. We will flag any web surveys completed in this manner.

To maximize response rates, we will also (1) provide clear instructions and user-friendly materials, (2) offer technical assistance to survey respondents through a toll-free telephone number or email, (3) monitor progress regularly, and (4) for district preschool special education coordinators, ask the Section 619 coordinator in their state to speak with the district respondent about the importance of completing the survey.

b. Weighting the district sample

Because of the differential sampling rates for districts based on their number of preschool children with disabilities, a “design effect” is introduced that may increase the variance of national estimates from the district survey beyond that of a simple random sample of the same size. When making district survey estimates using the child-level weights, an additional unequal-weighting design effect is introduced (the two design effects are essentially multiplicative).2 For state survey estimates using the child-level weights, only the latter design effect is a factor.

Table B.1 shows 95 percent confidence interval half-widths for point estimates using the proposed design. The table includes confidence interval half-widths for both the overall sample and various subgroup sizes, should any subgroups of interest be identified during the analysis of the data. All district subgroups are assumed to span the three size categories used for differential sampling. Because the study is descriptive in nature, power calculations and minimum detectable differences are not relevant. Nor do we present confidence intervals for estimates made using child-level weights, as the expected design effect for those weights is unknown at this time.

Table B.1. Precision of district- and state-level point estimates from proposed sample design

                                                                         Half-width of 95 percent
                                                                         confidence interval
Survey type   Subgroup     Number    Number of     Effective       Outcome of    Outcome with
              proportion   sampled   respondents   sample sizea    p = .50       s.d. = 1
District      1.00         1,200     1,020         520             .043          .086
              0.75           900       765         390             .050          .099
              0.50           600       510         260             .061          .122
              0.25           300       255         130             .086          .172
State         1.00            51        51          51             .139          .274

aThe effective sample size is the number of respondents divided by the design effect. We assume a design effect of 1.784 for the district sample due to differential sampling rates and an additional design effect of 1.1 due to nonresponse weighting adjustments.


When making national estimates for district-level responses, the confidence interval is plus or minus .086 standard deviations. If the outcome is a proportion of around .50, the confidence interval is plus or minus .043; that is, we would be 95 percent confident that the true value lies between .457 and .543 with the current design. For a 25 percent subgroup of districts, the 95 percent confidence interval would be plus or minus .172 standard deviations, or .086 around a proportion of .50. When making national estimates for state-level responses, the confidence interval is plus or minus .274 standard deviations, or .139 around a proportion of .50.
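The district-row figures in Table B.1 can be reproduced from the design effects given in the table footnote. A minimal check under those stated assumptions, using 1.96 as the normal critical value:

```python
import math

def half_width(n_resp: int, deff: float = 1.784 * 1.1, p: float = None) -> float:
    """95 percent confidence interval half-width: divide the number of
    respondents by the design effect to get the effective sample size,
    then take 1.96 times the standard error (for a proportion p, or for
    an outcome with standard deviation 1 when p is None)."""
    n_eff = n_resp / deff
    se = math.sqrt(p * (1 - p) / n_eff) if p is not None else 1 / math.sqrt(n_eff)
    return 1.96 * se

print(round(half_width(1020, p=0.50), 3))  # 0.043 for a proportion near .50
print(round(half_width(1020), 3))          # 0.086 standard deviations
```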

B4. Tests of procedures or methods to be undertaken

The state Section 619 coordinator survey will be pretested with up to three states; prior to the pretest, we will gather documentation and extant data from state websites to better understand how current the information obtained from secondary sources is. The district preschool special education coordinator survey will be pretested in fall/winter 2014 with nine or fewer respondents. The results of the pretests will be used to revise the instruments prior to full-scale survey administration.

B5. Individuals consulted on statistical aspects of the design and on collecting and/or analyzing data

The study is being conducted by Mathematica Policy Research for the Institute of Education Sciences (IES), U.S. Department of Education. With IES oversight, the contractor for the evaluation is responsible for study design, instrument development, data collection, analysis, and report preparation.


The individuals listed below worked closely on developing the survey instruments and will have primary responsibility for the data collection and analysis. Their contact information (including that of content experts serving as consultants to Mathematica) is provided below.


Cheri Vogel, Ph.D.

Project director

cvogel@mathematica-mpr.com

(609) 716-4546

John Deke, Ph.D.

Co-principal investigator

jdeke@mathematica-mpr.com

(609) 275-2230

Samuel Odom, Ph.D.

Co-principal investigator

slodom@unc.edu

(919) 966-4250

Patricia Snyder, Ph.D.

Co-principal investigator

patriciasnyder@coe.ufl.edu

(352) 273-4291

Stephen Lipscomb, Ph.D.

Deputy project director

slipscomb@mathematica-mpr.com

(617) 674-8371

Laura Kalb, B.A.

Survey director

lkalb@mathematica-mpr.com

(617) 301-8989

Barbara Carlson, M.A.

Statistician

bcarlson@mathematica-mpr.com

(617) 674-8372

Tim Bruursema, B.A.

Deputy survey director

tbruursema@mathematica-mpr.com

(202) 484-3097










www.mathematica-mpr.com

Improving public well-being by conducting high quality,
objective research and data collection

Princeton, NJ Ann Arbor, MI Cambridge, MA Chicago, IL Oakland, CA Washington, DC

Mathematica® is a registered trademark
of Mathematica Policy Research, Inc.



1 For example, a designation could be 1.5 standard deviations below the mean on the outcome of interest.

2 Because states are selected with certainty and districts are selected with equal probability within stratum, using child-level weights to make estimates from either survey will introduce a design effect due to unequal weighting. However, for the district survey, this effect will be reduced to some extent by undersampling districts in the stratum with the fewest number of preschool children with disabilities, and oversampling those with the largest number.

