AN EXPERIMENTAL STUDY OF
THE PROJECT CRISS READING PROGRAM ON
NINTH GRADE READING ACHIEVEMENT
IN RURAL HIGH SCHOOLS
PAPERWORK REDUCTION ACT
CLEARANCE REQUEST
SECTION B
Prepared For:
Institute of Education Sciences
United States Department of Education
Contract No. ED-06-CO-0016
Prepared By:
Northwest Regional Educational Laboratory
Center for School and District Improvement
Second Revised Submission
October 11, 2007
Table of Contents
List of Exhibits

SUPPORTING STATEMENT FOR PAPERWORK REDUCTION ACT SUBMISSION

B. DESCRIPTION OF STATISTICAL METHODS
   1. Respondent Universe and Sampling Methods
   2. Procedures for Collection of Information
      Statistical Methodology for Sample Selection
      Data Collection Plans
      Estimation Procedures
      Statistical Power Estimates
      Quality Control Procedures
      Unusual Problems Requiring Specialized Sampling
      Use of Periodic Data Collection Cycles to Reduce Burden
   3. Methods to Maximize Response Rates and to Deal With Issues of Non-response
   4. Pilot Testing of Instruments
   5. Contractor Name Responsible for Design, Analysis, and Data Collection

LIST OF APPENDICES

List of Exhibits
1. Estimate of Respondent Universe, Sample, and Response Rate
2. Data Sources for Assessing Project CRISS Fidelity
3. Power Analysis
B. DESCRIPTION OF STATISTICAL METHODS
Respondent Universe and Sampling Methods
The population of interest for this study is students in rural high schools in the Northwestern states. The respondent universe is ninth-grade students in rural high schools in four Northwestern states—Montana, Idaho, Oregon, and Washington. School size is an eligibility criterion: schools with 250 or more students will be included, both to achieve an adequate within-school student sample size and because Project CRISS is designed for teachers to work together in subject-based groups of several or more teachers. Schools with fewer than 250 students may not have enough teachers to fully benefit from Project CRISS collaborative activities.
Rurality was determined using locale and sub-locale codes and isolation indices developed by the National Center for Education Statistics (NCES).1 The schools included in the study universe are those located in rural, non-suburban areas (i.e., rural fringe, rural distant, and rural remote in relation to population centers, using the NCES categories) or in small towns that are distant or remote from large population centers. Applying these criteria of school size and rurality yields a total of 195 comprehensive grade 9-12 high schools in the four states.
From a total of 195 eligible high schools, 66 (34 percent) will be recruited to participate in the study. All ninth-grade content teachers, school principals, and ninth-grade students in the volunteer schools will be included in the study. The sample of 66 (33 treatment and 33 control schools) represents 10 percent oversampling for a 30/30 study, in recognition that some schools may drop out of the study before data collection is completed. A final sample of 30 experimental and 30 control schools is desired to achieve adequate statistical power, as discussed later under Statistical Power Estimates. Exhibit 1 shows the estimates of respondents and expected response rates.
Exhibit 1. Estimate of Respondent Universe, Sample, and Response Rate
| Respondent Group | N of Cases, Respondent Universe | N of Cases Sampled | Expected Response Rate |
| Students–9th grade | 7,260 | 7,260 | 80% |
| Teachers–9th grade | 396 | 396 | 85% |
| Principals | 66 | 66 | 90% |
| CRISS Local Facilitator | 66 | 66 | 95% |
The SDRT-4 Comprehension subtest will be administered to all ninth-grade students. Testing all students allows administration to be conducted within a regular ninth-grade class rather than randomly selecting students and pulling them out for testing. Testing all students during regular classes (e.g., a homeroom class) is less disruptive to the regular school schedule and also ensures an adequate within-school sample, especially for the small schools that have only about 50 ninth graders to begin with. We expect some students and parents will opt out of testing under passive parental consent, but we expect a response rate of 80 percent given that the testing is performed within the regular school program.
We will also sample all ninth-grade teachers, primarily because this is also a small group within most of the schools, and given that the questionnaire is short and easy to complete. Likewise, we must sample all principals. We expect response rates for these groups to be 85–90 percent given that the questionnaire is short and asks for straightforward information. There is also only one Local Facilitator (LF) per school who will be asked to complete the monthly log. We will provide assistance to the LFs, if necessary, including completing the form for them via telephone if they are unable to use the on-line monthly form. This assistance will help us achieve a high response rate for this critical information, estimated at 95 percent. The findings from the study will be representative of the ninth-grade students, ninth-grade teachers, principals, and local facilitators in the schools that volunteer for the study.
Procedures for Collection of Information
Statistical Methodology For Sample Selection
As described in the previous section, the study will not involve statistical sampling of schools. All of the eligible 195 rural high schools from the four Northwestern states will be contacted. Schools will be presented with information about Project CRISS and the study conditions and asked if they want to participate in the study. We will ask schools if they have been trained in or presently use Project CRISS and will eliminate active Project CRISS schools or those that received CRISS teacher training within the past five years. Our final study sample of 66 will represent the schools that are willing to participate in a randomized experiment of Project CRISS. Volunteer bias is not a concern for this study, however, because we are evaluating the effect of Project CRISS under the condition of voluntary adoption. In other words, we are evaluating the effect of Project CRISS as a voluntary program for high schools rather than a state mandate. This is how Project CRISS operates in states throughout the nation.
Within schools, there will be no statistical sampling of ninth-grade students, ninth-grade teachers, school principals, or CRISS Local Facilitators, as discussed in the previous section. For the classroom observation data, 10 treatment schools and 10 control schools will be randomly selected. For practicality and to reduce travel costs, we will conduct the labor-intensive observations in a roughly one-third random sample of the treatment and control schools.
Data Collection Plans
Student Outcome Data. The Stanford Diagnostic Reading Test, 4th edition (SDRT-4) has been selected as the standardized measure of ninth-grade students' reading comprehension. The Comprehension subtest will be administered to ninth-grade students in treatment and control schools during the fall and spring of the same school year (2008-09).
The SDRT-4 is a group-administered, norm- and criterion-referenced reading assessment designed to pinpoint student strengths and weaknesses in several aspects of reading. Four skill domains are sampled, though not at all levels of the test: phonetic analysis, vocabulary, comprehension, and scanning. Only the Comprehension subtest of the SDRT-4 will be administered, both to reduce burden and because reading comprehension is the primary hypothesized student effect of Project CRISS.
The SDRT-4 is a nationally normed test. A stratified random sampling technique was used to select the standardization sample, and the test was standardized on 60,000 students in fall 1994 and spring 1995. Internal consistency and alternate-form reliability data are reported: except for the Vocabulary subtest (.79), all internal-consistency coefficients exceed .80, and alternate-form reliabilities range from .62 to .88. Data are reported on content and criterion-related validity. Test items were reviewed by content experts, and the item tryout involved 150 districts in 32 states for a total of 16,000 students. Criterion-related validity was established by correlating the SDRT-4 with the previous edition, the SDRT-3.
The ninth-grade cohort entering high school during this second year of teacher implementation will be administered the SDRT-4 Comprehension subtest in fall 2008 and spring 2009 (pre/post) to assess program effects. While this is only one year of exposure for students, students will be exposed to the reading comprehension strategies in multiple content classes (reading/language arts, social studies, science, mathematics) thereby increasing the effective “dosage” of the program intervention. The second year of Project CRISS implementation is viewed as the best time to reliably estimate the true effects of the program because teachers will have developed skills in using Project CRISS and will have taken steps to integrate the instructional methods into the curriculum. They will have had a full first year to work through typical implementation problems of any new teacher instructional program.
Program Implementation Data. Exhibit 2 summarizes the main features of the intervention along with data sources that will be used to document each project component in the treatment schools. Appendix A provides a detailed schedule of Project CRISS services that will serve as the a priori blueprint for measuring fidelity.
Exhibit 2. Data Sources and Collection for Assessing Project CRISS Fidelity
| Element of CRISS Intervention | Data Sources and Collection (Treatment Schools) |
| Teacher training and technical assistance visits by a nationally certified CRISS trainer (required by Project CRISS) | Documents of training events and attendance, such as sign-up sheets and training and implementation schedules; teacher questionnaire to confirm ninth-grade teacher participation |
| Monthly meetings, classroom walk-throughs, and classroom coaching observations conducted by the LF (required by Project CRISS) | LF log, including frequency and description of LF-initiated meetings and coaching for teachers; teacher questionnaire to confirm ninth-grade teacher participation |
| LF advanced training and certification (highly recommended by Project CRISS) | Documentation of LF observation of an additional Level I training, attendance at Level II training, and training under observation to become a certified district trainer |
| Administrator walk-throughs and attendance at Project CRISS trainings (highly recommended by Project CRISS) | Principal questionnaire |
Classroom Observations. Data will be collected from both treatment and control schools to describe how instruction differs across the two groups, using an established observational instrument sensitive to the learning principles underlying Project CRISS. The Vermont Classroom Observation Tool (VCOT) will be used, adapted for Project CRISS with assistance from a developer consultant. VCOT-CRISS observations will be conducted in 10 treatment and 10 control schools. In each school, a trained observer will spend three days in a single site visit conducting observations across content classes. With approximately five classroom observations per day, the observer can accomplish about 15 observations per visit; several teachers will be selected for multiple observations. This process will be repeated three times: (1) fall 2007, as a baseline measure while teachers in the treatment schools are just learning Project CRISS; (2) fall 2008; and (3) spring 2009, near the end of the second year of CRISS implementation, by which time teacher practices should show noticeable change if Project CRISS strategies are taking hold. Across the treatment schools, there will be approximately 150 observation points (10 schools × 15 observations per visit), and likewise 150 observation points for the control schools, for each of the three time periods over the two years of CRISS implementation.
Observations are labor intensive and expensive, yet provide the best evidence of teacher practices that connect to Project CRISS. Classroom practices are highly complex behaviors and vary from day to day, subject to subject, and during the course of the school year. Teachers will try things, adjust, and change their curriculum and pedagogy during the school year depending on how students respond. Thus, teachers need to be observed on multiple occasions to provide an accurate picture of teacher practice. Our purpose for including observations is to obtain a valid and reliable picture of teacher practices over time that are related to CRISS philosophy and principles. We have no intention of trying to statistically estimate teacher effects, which is beyond the scope of this study, but rather to describe in fuller detail whether CRISS principles are evident in teacher practice in treatment schools compared to control schools.
Observation days will need to be scheduled ahead of time with teachers to avoid substitute teachers, testing days, etc. Teachers will be told to conduct their regular planned lessons during observation periods. A representative sample of classrooms at each study school will be used for observation. At this point, we do not know the feasibility of statistical sampling of classrooms within a school, because schools are understandably resistant to the idea of outside observers dropping in to classrooms randomly. The most likely scenario is that a final sample of classrooms at each study school will be negotiated based on practical concerns while keeping in mind the need for a representative sample. Observed teachers may be asked some unstructured questions by the observer to place the observed lesson in the larger context of the teacher's instructional approach and to gather any other pertinent information for the observer's notes.
Background information on VCOT-CRISS is included in Appendix B. The VCOT-CRISS protocol and scoring rubric is included in Appendix C.
Questionnaire Data. A key feature of Project CRISS is the identification and advanced training of a Local Facilitator (LF) from the school or district, in an effort to build local capacity and ongoing support for teachers in between the training and technical assistance visits provided by a nationally certified CRISS trainer. Because the CRISS-designated LF is crucial to the follow-up support after the initial training and through the second year, we have developed and pilot tested a log of CRISS-related activities for LFs to complete. This will be a straightforward activity log to document contacts with teachers such as weekly meetings, peer observation or coaching, and other activities that support teachers as they attempt to implement CRISS strategies in their content areas. In order to clarify incomplete logs or missing data, we will also contact LFs by telephone as needed. The LF log is presented in Appendix D.
A brief teacher questionnaire will be used to measure teacher experience and qualifications in both the treatment and control schools. Teacher experience and skill are very important predictors of student outcomes. While randomization should ensure balanced treatment and control groups along basic teacher quality characteristics (such as years teaching, degrees held, etc.), this information is easy to collect in order to verify a balanced sample and rule out alternative explanations. In control schools, we will include several additional questions to ask about the types of adolescent literacy professional development these teachers are receiving. This will allow us to describe the contrast conditions in the counterfactual schools in regard to reading interventions that might mimic Project CRISS. The treatment school version of the teacher questionnaire is presented in Appendix E and the control school version in Appendix F.
A brief principal questionnaire will be used in treatment and control schools. In treatment schools, questions will ask about background and tenure at the school plus several questions about specific Project CRISS activities recommended for principals, such as attending trainings and conducting classroom walk-throughs using a CRISS protocol. In control schools, principals will also be asked about their background and tenure plus several questions about any ongoing school- or district-wide professional development activities around adolescent literacy. This will allow us to describe qualifications, experience, and tenure in both treatment and control schools to ensure balanced schools in terms of basic school leadership variables. Additionally, we will be able to describe the contrast conditions in the counterfactual schools in regard to reading interventions that might mimic Project CRISS. The treatment school version of the principal questionnaire is presented in Appendix G and the control school version is presented in Appendix H. Appendix I presents detailed justification for items in the LF, teacher, and principal questionnaires.
The teacher and principal questionnaires will be administered once per year for each of the two Project CRISS intervention years to assess background conditions. The LF log is intended as a monthly data collection instrument over two years. The questionnaires and LF log will be offered to schools as Web-based surveys to improve efficiency of data collection.
Estimation Procedures
This is a cluster randomized trial (CRT) with a baseline measure, consisting of two levels of clustering, i.e., students are nested within a school (student at Level 1, school at Level 2).2 The unit of assignment is the school and the study will involve a single cohort of ninth grade students who will be tested at two time points: baseline (pre-test) and post-test. The overall design is represented as follows:
Experimental R (School) Ob X O
Control R (School) Ob O
R (School) stands for randomization by the School cluster.
O stands for the observation, with Ob meaning baseline observation.
X stands for the treatment to the student participating in the study, which occurs during the second year of teacher training.
In the treatment schools teachers will receive intensive Project CRISS training during the first year and additional training and follow-up services during the second year of the treatment. During this second treatment year—by which time teachers should understand the principles and mechanics of Project CRISS strategies and be implementing them in the classroom—ninth-grade students will be tested pre/post during the school year on reading comprehension.
We will use hierarchical linear modeling (HLM) to model the hierarchical data structure in this study. Typically, an HLM approach involves constructing and testing multiple models for a given set of data. Still, it is always preferable to use the simplest possible model as long as its fit to the data at hand is good. Therefore, we plan to test our hypothesis using a 2-level CRT with a Level 2 covariate, which is presented below. To illustrate how this simple model could be expanded if necessary, a generalized 2-level CRT is also presented.
We will include all students with post-test scores in the analyses. For students who do not complete the pre-test, baseline scores will be imputed.
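As an illustration of the imputation step, school-mean imputation of missing pre-test scores might look like the following (a hypothetical pandas sketch; the study's actual imputation method is not specified here, and the data values are invented):

```python
import pandas as pd

# Illustrative roster: pre-test missing for one student per school.
df = pd.DataFrame({
    "school": ["A", "A", "A", "B", "B", "B"],
    "pretest": [40.0, 50.0, None, 60.0, None, 70.0],
    "posttest": [45, 55, 48, 62, 66, 75],
})

# Replace each missing pre-test with that school's observed mean.
df["pretest_imputed"] = df.groupby("school")["pretest"].transform(
    lambda s: s.fillna(s.mean())
)
```

Using the school mean keeps the imputed value consistent with the school-level covariate structure of the analysis model, though other methods (e.g., regression imputation) could be substituted.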
Two-level CRT with Level 2 Covariate
Level 1 Model (i.e., Student-Level Model)

Yij = β0j + eij,  eij ~ N(0, σ²)

Level 2 Model (i.e., School-Level Model)

β0j = γ00 + γ01Wj + γ02Sj + u0j,  u0j ~ N(0, τ|S)

where:

Yij: outcome measure of student i at school j.
β0j: mean outcome measure of students at school j.
eij: residual associated with each student, assumed normally distributed with mean 0 and variance σ².
γ00: grand mean for the outcome measure.
γ01: treatment effect.
Wj: indicator variable; treatment group is coded 0.5, control group -0.5.
γ02: coefficient for the school-level covariate.
Sj: school-level covariate, the school mean of the previous year's 9th-grade test score.
u0j: residual associated with the school mean of the outcome measure, assumed normally distributed with mean 0 and variance τ|S.
In this model, the test score of a student is defined as the school-level mean plus the random error associated with each student. The school-level mean, in turn, is defined as the grand mean plus the effect of the treatment plus the random effect associated with the school. The school-level mean, however, is adjusted for the covariate which is the previous year’s school mean.
The random assignment to conditions will take place at the school level. Consequently, the treatment effect will appear at the school level, and reducing the error variance at the school level will therefore yield a gain in power.
If the data are reasonably balanced (i.e., if the variation in the number of students per school is small), the 2-level CRT with Level 2 covariate could be approximated with a traditional ANCOVA model. That would enable us to analyze the data using a 2-level nested mixed-model ANCOVA, in which the treatment and the school level covariate are entered as fixed effect variables, whereas the cluster (school) is entered as the random effect variable. However, more likely our data will be unbalanced, and we will stay with the HLM.
We are currently investigating the advantages and disadvantages of cluster-mean centering versus grand-mean centering in specifying the model. In the above, the model is specified utilizing cluster-mean centering.
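As an illustration of the Level 2 covariate model, the sketch below fits a random-intercept model to simulated data using the statsmodels package (the software choice, variable names, and parameter values are our own illustrative assumptions, not part of the study plan; the actual analysis will use HLM software):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
J, n = 60, 30                                # schools, students per school
school = np.repeat(np.arange(J), n)
# Treatment is a school-level variable, coded +/-0.5 as in the model above.
treat = np.repeat(np.where(np.arange(J) < J // 2, 0.5, -0.5), n)
cov = np.repeat(rng.normal(0, 1, J), n)      # prior-year school mean (standardized)
u = np.repeat(rng.normal(0, 0.4, J), n)      # school random effect u0j
y = 0.20 * treat + 0.50 * cov + u + rng.normal(0, 1, J * n)

df = pd.DataFrame({"y": y, "treat": treat, "cov": cov, "school": school})
# Random-intercept model: y ~ treatment + school-level covariate,
# with school as the clustering (random-effect) unit.
fit = smf.mixedlm("y ~ treat + cov", data=df, groups=df["school"]).fit()
print(fit.params["treat"])                   # estimated treatment effect
```

Because the treatment indicator is constant within schools, its standard error is driven mainly by the number of schools rather than the number of students, which is why the power analysis below focuses on the school sample size.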
Generalized Two-level CRT
The model can be expanded by identifying and entering more covariates. Such covariates can be either at the student or school level. The following shows the general two-level HLM with multiple covariates.
Level 1 Model (i.e., Student-Level Model)

Yij = β0j + β1j a1ij + β2j a2ij + … + βpj apij + eij

Level 2 Model (i.e., School-Level Model)

β0j = γ00 + γ01W1j + γ02W2j + … + γ0sWsj + u0j
β1j = γ10 + γ11W1j + γ12W2j + … + γ1sWsj + u1j
⋮
βpj = γp0 + γp1W1j + γp2W2j + … + γpsWsj + upj
Theoretically, any number of covariates could be entered. However, more covariates mean that a larger sample is necessary to estimate the coefficients. Given the limits of the sample size we can afford, and given that in an experimental study the effects of covariates are randomly distributed across conditions, our plan is to use the simplest model that reflects the data structure accurately enough. This means exercising due diligence in choosing only good covariates. Demographic information from the SDRT-4 could be used to identify possible covariates at the student level; CCD data could be used to identify possible covariates at the school level.
Statistical Power Estimates
We performed a power analysis for detecting a main effect of treatment on the student outcome, using the Optimal Design Software Version 1.55 (2005), developed by Raudenbush, Spybrook, Liu & Congdon. The goal of the power analysis was to estimate the number of schools needed to maintain power of 0.8 for a minimum detectable effect size (MDE) in the range of δ = 0.10 to 0.25. The MDE range was chosen carefully in discussions with our technical working group and methodological advisor. While there is no absolute standard for an MDE, the range we chose is regarded as an educationally significant and policy-relevant effect size that we would not want to miss detecting.
For a power analysis of a two-level cluster randomized study, an estimate of the intraclass correlation coefficient (ICC) is necessary, but such data are rarely available. Consequently, we are working with Harcourt—the publisher of the Stanford Diagnostic Reading Test—to empirically derive the ICC for the power analysis of the proposed two-level design (students nested within schools). Harcourt has agreed to assist us in deriving the ICC from the ninth-grade data of the SDRT-4 national standardization study (1994-95), which will be used for the final power analysis. In the interim, values were assumed for both the ICC and the expected explanatory power of the statistical model (R²L2), using data from another study currently being developed by NWREL. The results are presented in Exhibit 3.
Exhibit 3. Power Analysis
| Unconditional ICC | R²L2 | Number of Schools | Minimum Detectable Effect Size (at power of .8) |
| 0.15 | 0.67 | 214 | .100 |
|      |      | 96  | .150 |
|      |      | 60  | .190 |
|      |      | 56  | .200 |
|      |      | 36  | .250 |
Based on this analysis, we believe that a 30/30 study, or 60 total experimental and control schools, will result in the detection of a relatively small effect size of 0.19. It should be noted that beyond this point, there is a clear cost/benefit reduction in using larger sample sizes to obtain a smaller MDE. We believe that 60 schools in the final sample is a reasonable and cost-effective number for detecting a modest yet educationally significant effect of Project CRISS.
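The relationship between the number of schools and the MDE in Exhibit 3 can be approximated with the standard closed-form expression for a two-level cluster randomized trial with a school-level covariate. This is a rough sketch only: the multiplier of 2.8 approximates power .80 at a two-tailed α of .05, the per-school n of 110 is inferred from the sample estimates above, and results will differ somewhat from the exact noncentral-t computations in Optimal Design.

```python
import math

def mde_2level_crt(J, n, icc, r2_l2, multiplier=2.8):
    """Approximate minimum detectable effect size for a two-level
    cluster randomized trial with equal allocation to conditions.

    J: total schools (split evenly between conditions)
    n: students per school
    icc: unconditional intraclass correlation
    r2_l2: proportion of school-level variance explained by the covariate
    multiplier: ~2.8 approximates power .80 at alpha = .05 (two-tailed)
    """
    # Variance of the treatment contrast in effect-size units:
    # school-level component (reduced by the covariate) + student-level component.
    var = (icc * (1 - r2_l2)) / (J / 4) + (1 - icc) / ((J / 4) * n)
    return multiplier * math.sqrt(var)

# Assumed values from Exhibit 3: ICC = .15, R2 at Level 2 = .67.
for J in (214, 96, 60, 36):
    print(J, round(mde_2level_crt(J, 110, 0.15, 0.67), 3))
```

The formula makes visible why the cost/benefit curve flattens: beyond roughly 60 schools, each additional school buys only a small reduction in the MDE because the school-level variance term shrinks as 1/J.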
Quality Control Procedures
Prior to the planned data analyses, the data file will be examined for errors such as coding mistakes, duplicate records, and missing records. The routine procedure we employ for identifying errors in the data file includes running Access queries (e.g., find/delete duplicates) and Excel commands (e.g., sort and filter). We also use SPSS descriptive analyses such as frequency tables and histograms to identify any anomalies in the data file. Once identified, records that contain errors will be corrected to the extent possible. Where such correction is not feasible, those records will be removed from the data file as long as the instances of errors appear to occur randomly.
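Equivalent checks can also be scripted. The following hypothetical pandas sketch (the record layout and values are invented for illustration) flags duplicate student records, missing scores, and out-of-range values:

```python
import pandas as pd

# Hypothetical extract of test records (IDs and scores are made up).
records = pd.DataFrame({
    "student_id": [101, 102, 102, 103, 104],
    "school": ["A", "A", "A", "B", "B"],
    "score": [540, 555, 555, 610, None],
})

# Flag every record whose student_id appears more than once.
dupes = records[records.duplicated(subset="student_id", keep=False)]

# Flag records with a missing test score.
missing = records[records["score"].isna()]

# Flag scores outside a plausible scale range (bounds are illustrative).
out_of_range = records[(records["score"] < 0) | (records["score"] > 999)]

print(len(dupes), len(missing), len(out_of_range))
```

Each flagged subset can then be reviewed against source documents before any record is corrected or dropped.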
Unusual Problems Requiring Specialized Sampling
There are no such unusual circumstances.
Use Of Periodic Data Collection Cycles to Reduce Burden
This is a one-time research study.
Methods to Maximize Response Rates and to Deal with Issues of Non-response
The study will be implemented as a schoolwide initiative. Therefore, we do not expect the low response rates typically associated with voluntary surveys and interviews. Still, teacher and student attrition is a concern in any study. Teacher and principal questionnaires are intentionally short to maximize response rates. We will work with the Project CRISS trainer to distribute information about the on-line teacher questionnaire and encourage teachers to complete it as part of the overall Project CRISS effort. Reminders will be given to teachers and principals to complete their questionnaires. Student testing is limited to the Comprehension subtest so that the test can be administered to intact classes in a typical high school period, which should help with the response rate. Finally, reminder calls will also be made before observation visits. The data collection instruments will be discussed during the recruitment of schools and written into the formal agreements with schools as a condition of receiving Project CRISS at no cost.
The planned samples of teachers and students are large enough to withstand naturally occurring attrition during the study. Teacher attrition will be monitored through personnel records and the project database. Student attrition, as well as crossovers and students who enter study schools after baseline, will also be monitored. The primary aim here will be to detect any systematic attrition or crossovers between the conditions. Note that the design of the study makes it impossible for student crossovers to happen within a school, as the school is the unit of assignment. Crossovers could only occur if students transfer to a different school during the study year. Most of these rural districts will be single high-school districts that are geographically spread over a four-state region, which will help minimize such violations of random assignment.
The requirement of parental consent for the use of student data is another source of student loss. We anticipate that our Institutional Review Board will approve passive parental consent, given the low risk of harm to students and because we will not be asking for student identifiers when transmitting achievement test scores to NWREL. If active consent is required by specific districts, some parents will fail to return the consent form, and that could result in a considerable loss of students. Student-level baseline achievement data will be analyzed for all students at the study schools (both treatment and control) to assess whether the final samples of students are representative of the initial sample.
The parental consent (see Appendix K for a passive consent form) will be sent out after the random assignment of schools into conditions. Consequently, the rate of student non-participation will be compared between the treatment and the control conditions to examine its effect on student loss.
Pilot Testing of Instruments
We have pilot tested and refined the implementation and teacher observation instruments described above in three pilot schools, which are receiving Project CRISS training under observation by the researchers. This allows us to develop and refine sensitive instruments for measuring implementation and teacher practice. We have pilot tested the LF log and the teacher and principal questionnaires with small numbers of respondents—fewer than 10 per role group. The pilot schools have also provided classroom opportunities to train observers for the VCOT-CRISS and to establish reliability standards for observers as we work with the VCOT developer/trainer.
Contractor Name Responsible for Design, Analysis, and Data Collection for the Study
This study will be conducted by the Northwest Regional Educational Laboratory, under the Regional Educational Laboratory Contract with the Institute of Education Sciences, U.S. Department of Education. Chesapeake Research Associates (CRA) is providing consultation services around research design and analysis for the study. Several members of the NWREL Technical Working Group (TWG) are also providing advice on research design and analysis for the study. These individuals are listed below.
| Jim Kushman | Principal Investigator | NWREL | 503-275-9569 |
| Maureen Carr | Research Analyst | NWREL | 503-275-9154 |
| Jacqueline Raphael | Research Analyst | NWREL | 503-275-9616 |
| Makoto Hanita | Statistical Analyst | NWREL | 503-275-9628 |
| Michael Puma | Statistical Consultant | CRA | 410-897-4698 |
| David Connell | Statistical Consultant | CRA | 410-897-4968 |
| Hans Boss | Advisor | TWG | 510-465-7884 x217 |
| Samuel Stringfield | Advisor | TWG | 502-852-0615 |
| Dan Goldhaber | Advisor | TWG | 206-543-5955 |
LIST OF APPENDICES
NOTE: EACH APPENDIX IS SUBMITTED AS A SEPARATE ELECTRONIC DOCUMENT.
Appendix A: Schedule of Project CRISS Services
Appendix B: Vermont Classroom Observation Tool as Applied to Project CRISS
Appendix C: VCOT-CRISS Observational Protocol and Scoring Rubric
Appendix D: CRISS Local Facilitator Log of Activities
Appendix E: Teacher Questionnaire (Treatment Schools)
Appendix F: Teacher Questionnaire (Control Schools)
Appendix G: Principal Questionnaire (Treatment Schools)
Appendix H: Principal Questionnaire (Control Schools)
Appendix I: Justification for Items in the Teacher, Local Facilitator, and Principal Questionnaires
Appendix J: Federal Register Notice
Appendix K: Parent Information and Consent Form
Appendix L: NWREL Confidentiality Pledge
1 The NCES locale code designations can be found on pages A-4 and A-5 of the following Web document: http://nces.ed.gov/ccd/pdf/psu031agen.pdf.
2 Note that students are not nested within a teacher or a classroom. This is because each high school student will receive the CRISS treatment from several teachers across core subject classes.