Title II, Part D of the ESEA (P.L. 107-110) has the explicit purpose “to enhance the ongoing professional development of teachers, principals, and administrators by providing constant access to training and updated research in teaching and learning through electronic means” as well as “to support the development and utilization of electronic networks and other innovative methods, such as distance learning, of delivering specialized or rigorous academic courses and curricula for students in areas that would not otherwise have access to such courses and curricula, particularly in geographically isolated areas.” Online learning is a growing part of educational offerings at every level of the education system, and policymakers and practitioners need to understand the conditions and practices associated with effective use of online learning.
Numerous studies have established that students can learn at a distance, typically as well as students meeting with their teachers face-to-face, but much of the research examining the relative effectiveness of different online learning approaches and practices suffers from weak study designs and a lack of objectivity. Good studies do exist, but they are most often found in the literature on training or adult populations and are rarely connected to K-12 populations. Moreover, the increasingly common practice of blended learning, in which online learning activities supplement face-to-face instruction, has not been separated from fully online learning in any meta-analysis or other multi-application study of effects on learning. Despite the absence of a well-established evidence base, the use of online learning in K-12 education and teacher education is expanding rapidly. Administrators choosing to invest in online learning and the practitioners responsible for its implementation need guidance concerning the conditions and practices associated with effective use.
Most of the literature on online learning describes a single application. The few multi-application studies that do exist consist largely of surveys of school and district administrators. This study represents the first effort to conduct relatively large-scale case study work comparing different applications as designed and as implemented. The work is also distinguished by the inclusion of input from a wide range of stakeholders, including developers, administrators, instructors, and students.
The proposed case studies will build on the contractor’s literature review by investigating approaches found to be differentially effective in prior research, in order to provide more detailed descriptions of the practices that the literature review identifies as associated with effectiveness. In addition, recognizing that technology development moves much more quickly than rigorous research, the contractor will include case studies of emerging online learning approaches that have a learning theory rationale and the endorsement of experts on the Technical Working Group. These latter cases are likely to involve online gaming environments and other interactive modes of peer-supported learning; objective descriptions of such applications and their effects on learning are notably absent from the current research literature. The case studies will provide rich descriptions of the context of use and implementation practices for all of the applications.
The data collection activities to be conducted by this study will provide three types of products for the education community:
reader-friendly research syntheses and final evaluation reports,
recommendations for future research, and
tools and instruments for use by schools, districts, and states in evaluating online courses.
The research syntheses and evaluation reports developed through this study will highlight effective, research-based practices that educational practitioners and policymakers can use to guide implementation of online offerings. They will include illustrative case studies to convey the rich details associated with successful practices. The study will also produce checklists and other tools to support the training, implementation, and evaluation activities of districts and states. The protocols for virtual site visits and the associated coding rubrics will be of particular use, as many districts and states are grappling with how to ascertain the quality of online offerings. Finally, this study will help guide future research by providing a framework for organizing the K-12 and teacher professional development online learning literatures and a synthesis of recent research on emerging uses of online learning, both in blended enhancement approaches and in stand-alone replacement applications.
The contractor will use a variety of advanced information technologies to maximize the efficiency and completeness of the information gathered for this evaluation and to minimize the burden the evaluation places on respondents. For example, members of the study team will collect demographic and other descriptive data about online implementations and the schools and districts that use them by accessing Web sites and online databases. This practice will significantly reduce the amount of information that needs to be gathered through interviews.
During the data collection period, an e-mail address will be available to permit respondents to contact the contractor with questions or requests for assistance. The e-mail address will be printed on all the data collection instruments, along with the name and phone number of a member of the data collection team.
Finally, for applications that occur entirely online and outside of traditional school settings, there is no geographic location at which to conduct observations; the contractor will therefore collect observation data online, with researchers logging in and conducting site visits in the virtual space where the instruction takes place. If the instructors for a single online application are geographically dispersed, interviews will be conducted by telephone or e-mail, since traveling for a single interview would be inefficient. In all cases, the contractor will make objective observations of each online activity in order to independently evaluate the functionality of the technology and its use.
The case study work will be informed by a systematic review of the research literature, including a meta-analysis that will provide an empirical basis for many of the conditions and practices that will be examined in the case studies.
We are also working to minimize burden by excluding potential case study schools and districts that are already part of other Department educational technology evaluations. Instrumentation will be coordinated across Department studies to prevent unnecessary duplication (e.g., not repeating questions for which sufficient data are already available).
We have not yet selected the final set of implementations to be studied, and it is possible that one or more of the participating entities will be a small entity. Participation in the study will be voluntary, and developers and associated school sites will be free to decline if they determine that participation would impose too great a burden. In addition, as mentioned above, the contractor will make every effort to gather available information on the Web and through other electronic means in order to reduce the burden on all respondents, including those from small entities.
As already noted, online learning practices are prevalent and evolving quickly, and the purposes expressed in Title II, Part D of ESEA promote the use of distance education for K-12 courses and teacher professional development activities. If the information from this study is not collected, policymakers and educators will lack adequate information about the conditions and practices under which online implementations are most likely to be effective, which could result in inefficient use of resources for the design and implementation of online learning applications. This is one of very few studies to conduct systematic site visits for more than one application, so its findings should be more robust across applications and more generally applicable to the field of online learning. The evaluation report developed through this study will highlight effective, research-based practices that educational practitioners and policymakers can use to guide implementation of online offerings. This type of evidence-based guidance for instructional decisions supports the NCLB goal of providing technical assistance to state and local educational authorities.
None of the special circumstances listed apply to this data collection.
A notice about the study will be published in the Federal Register when this package is submitted to provide the opportunity for public comment. In addition, throughout the course of this study, the contractor will draw on the experience and expertise of a technical working group (TWG) that provides a diverse range of experience and perspectives, including representatives from the district and state levels, as well as researchers with expertise in relevant methodological and content areas. The members of this group and their affiliations are listed in Exhibit 3. The first meeting of the technical working group was held on January 18, 2007, the second is planned for Winter 2007, and the third is planned for Fall 2008.
Exhibit 3. Technical Working Group Membership
Member | Affiliation
Bob Bernard | Concordia University
Richard Clark | University of Southern California
Dexter Fletcher | Institute for Defense Analyses
Katherine Johnson | Minnesota Department of Education
Susan Patrick | North American Council for Online Learning
Kurt Squire | University of Wisconsin-Madison
Bill Thomas | Southern Regional Education Board
Bob Tinker | Concord Consortium
Julie Young | Florida Virtual School
No payments or gifts will be provided to respondents.
Responses to this data collection will be used only for statistical purposes. The reports prepared for this study will summarize findings across the sample and will not associate responses with a specific district or individual. The contractor will not provide information that identifies a subject or district to anyone outside the study team, except as required by law.
The contractor recognizes the following minimum rights of every subject in the study: (1) the right to privacy, (2) the right to informed consent, and (3) the right to refuse participation at any point during the study. Because much of the Policy Division’s education research involves collecting data about children or students, the contractor is very familiar with the Department’s regulation on protection of human subjects of research. In addition, the contractor maintains its own Institutional Review Board. All proposals for studies in which human subjects might be used are reviewed by the contractor’s Human Subjects Committee, appointed by the President and Chief Executive Officer. For consideration by the reviewing committee, proposals must include information on the nature of the research and its purpose; anticipated results; the subjects involved and any risks to subjects, including sources of substantial stress or discomfort; and the safeguards to be taken against any risks described.
The contractor’s project staff has extensive experience collecting information and maintaining confidentiality, security, and integrity of interview and survey data. In accordance with the contractor’s institutional policies, confidentiality and data protection procedures will be in place. These standards and procedures for case study data are summarized below.
Project team members will be educated about the confidentiality assurances given to respondents and about the sensitive nature of the materials and data to be handled. Each person assigned to the study will be cautioned not to discuss confidential data.
Respondents’ names and addresses will be disassociated from the data as they are entered into the database and will be used for data collection purposes only. As information is gathered on individuals or sites, each will be assigned a unique identification number, which will appear on printed listings of the data and in analysis files. The unique identification number will also be used for data linkage.
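A minimal sketch of how such a disassociation and ID-assignment step might look is shown below; the field names, roles, and ID format are illustrative assumptions, not the study’s actual data system.

```python
# Illustrative sketch only: field names, roles, and the ID format are
# assumptions, not the study's actual data system.
import itertools

def pseudonymize(records):
    """Replace names and addresses with sequential study IDs used for linkage."""
    counter = itertools.count(1)
    crosswalk = {}      # kept separately, for data collection purposes only
    deidentified = []
    for rec in records:
        study_id = "SITE-{:04d}".format(next(counter))
        crosswalk[study_id] = {"name": rec["name"], "address": rec["address"]}
        deidentified.append({
            "study_id": study_id,            # appears on listings and analysis files
            "role": rec["role"],
            "responses": rec["responses"],   # substantive interview data only
        })
    return deidentified, crosswalk

# Example: the analysis file receives only study IDs; the name/address
# crosswalk stays with the data collection team.
records = [{"name": "Jane Doe", "address": "123 Main St",
            "role": "instructor", "responses": "..."}]
analysis_records, crosswalk = pseudonymize(records)
```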
Participants will be informed of the purposes of the data collection and the uses that may be made of the data collected. All case study respondents will be asked to sign an informed consent form (see draft in Appendix B).
Access to the database and case study notes will be limited to authorized project members. Multilevel user codes will be used, and entry passwords will be changed frequently.
All identifiable data (e.g., interview notes) will be shredded as soon as the need for this hard copy no longer exists.
Reports to the Department or to any employee of the Department concerning case study activities will contain no individual, school, or district identifiers. Participating schools will be acknowledged in the final report for their cooperation, but they will not be identified in the text of any report unless model practices are highlighted, in which case permission will be obtained from administrators or course developers before the information is included.
All case study participants will be assured of confidentiality to the extent possible in the initial invitation to participate in the study (see drafts of notification letters in Appendix C), and this assurance will be reiterated when data collection begins (i.e., when each respondent is presented with an informed consent form). While most of the information in the final report will be reported in aggregate form, as noted above, there may be instances in which specific examples from the case study data are used to illustrate “best practices.” In these instances, additional permission will be obtained from the administrator or course developer, and the specific report text will be reviewed by instructor or development staff prior to publication. This approach is frequently used in technical assistance materials developed by the Department.
No questions of a sensitive nature will be included in the site visit protocols.
As described above, several types of data collection are planned, including document analysis, interviews, observations, and researcher interaction with the applications themselves. In this section, we focus only on those parts of the data collection that add to respondent burden. The estimates in Exhibit 4 reflect the burden for developer and site selection and for notification of study participants, as well as for the case study data collection activities. Course developers will be notified of selection and asked about their willingness to participate.1 Course developers will also be interviewed about the characteristics of their online implementation.2 Similarly, administrators of selected sites will be notified and asked to participate in the study. Once assent is secured, administrators will be interviewed about the online implementation as it occurs at their site. Instructors will also be interviewed, and student perspectives will be collected during focus groups.
Exhibit 4. Estimated Burden for Site Selection and Notification
Type | Total No. of Respondents | Hours per Respondent | Total Hours | Cost per Hour | Estimated Cost of Burden
Course Developers (notification) | 20 | 0.5 | 10 | $40 | $400
Product Screening (20 K-12 phone calls, 20 TPD phone calls) | 40 | 0.25 | 10 | $40 | $400
Course Developers (interviews, 3 per developer) | 60 | 1.0 | 60 | $40 | $2,400
Administrators (notification) | 40 | 0.5 | 20 | $40 | $800
Administrators (interviews) | 40 | 1.0 | 40 | $40 | $1,600
Online Instructors (interviews, 3 per developer) | 60 | 1.0 | 60 | $25 | $1,500
Students (focus groups, 1 per site with an average of 8 students per group) | 160 | 1.0 | 160 | $5 | $800
Total | 420 | | 360 | | $7,900
Half of the selected offerings will be courses for K-12 students, and half will be teacher professional development offerings.
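For readers checking Exhibit 4, each row’s total hours equal the number of respondents multiplied by the hours per respondent, and each row’s estimated cost equals those hours multiplied by the hourly rate. The short sketch below simply re-derives the exhibit’s totals from the figures shown there; the row labels are abbreviated for illustration.

```python
# Arithmetic behind Exhibit 4: (respondents, hours per respondent, cost per hour).
rows = {
    "Course developer notification": (20, 0.5, 40),
    "Product screening":             (40, 0.25, 40),
    "Course developer interviews":   (60, 1.0, 40),
    "Administrator notification":    (40, 0.5, 40),
    "Administrator interviews":      (40, 1.0, 40),
    "Online instructor interviews":  (60, 1.0, 25),
    "Student focus groups":          (160, 1.0, 5),
}
total_respondents = sum(n for n, _, _ in rows.values())           # 420
total_hours = sum(n * h for n, h, _ in rows.values())             # 360
total_cost = sum(n * h * rate for n, h, rate in rows.values())    # $7,900
print(total_respondents, total_hours, total_cost)
```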
There are no additional respondent costs associated with this data collection other than the burden estimate provided under B.12 above.
The annual costs to the federal government for this study, as specified under contract, are:
Fiscal Year 2007 $ 429,291
Fiscal Year 2008 $ 467,931
Fiscal Year 2009 $ 254,158
Total $1,151,380
This request is for a new information collection.
During the summer of 2008, the data collection team will complete the analysis of case study data. These data will provide a more in-depth look at the ways online implementations are being used and the conditions and practices under which they show effectiveness. The contractor will analyze instructional observations and developer, administrator, instructor, and student data. Using formal protocols developed with expert input from the TWG, the contractor will code data for conditions, including characteristics of the learners and the learning content, and for practices, including synchronicity, technology media and delivery, learning experience type, setting, and duration and intensity. Coding the observational data for conditions and practices will allow the contractor to categorize each online offering within the four dimensions of the original conceptual framework (learning experience type, synchronicity, replacement of face-to-face learning, and enhancement of face-to-face learning). Case study data will ultimately be incorporated into the project’s final evaluation report.
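As a simple illustration of how a coded case could be placed on the framework’s dimensions, the sketch below defines a hypothetical coded-case record; the field names and example values are assumptions for illustration, and the actual codes will come from the formal protocols developed with the TWG.

```python
# Hypothetical record for a coded case. Field names and example values are
# assumptions for illustration; the actual codes come from the formal
# protocols developed with the TWG.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedCase:
    case_id: str
    # Conditions
    learner_characteristics: List[str] = field(default_factory=list)
    learning_content: List[str] = field(default_factory=list)
    # Practices
    synchronicity: str = "asynchronous"       # or "synchronous", "mixed"
    media_and_delivery: List[str] = field(default_factory=list)
    learning_experience_type: str = ""
    setting: str = ""
    duration_and_intensity: str = ""
    # Conceptual framework dimensions
    replaces_face_to_face: bool = False
    enhances_face_to_face: bool = False

example = CodedCase(
    case_id="TPD-07",
    learner_characteristics=["in-service teachers"],
    learning_content=["middle school mathematics"],
    synchronicity="asynchronous",
    media_and_delivery=["web modules", "discussion board"],
    learning_experience_type="interactive",
    setting="home and school",
    duration_and_intensity="6 weeks, about 3 hours per week",
    enhances_face_to_face=True,
)
```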
Site visits to twenty online application developers will be conducted. The contractor will visit both typical and exemplary sites where each offering has been implemented in order to compare the practices across these two settings and isolate practices associated with exemplary sites. At developer site visits, we expect to meet with between three and eight individuals, including directors and senior officials, instructional developers, evaluators or quality assurance personnel, and teachers, if available. At school-based sites, we expect to meet with program administrators, information technology specialists, between one and three instructors, and a focus group of between three and eight students who are currently using the application. During virtual site visits, we will interview one or two instructors and conduct a focus group consisting of three to eight students.
Through interviews and observations, users at these implementation sites will help identify which practices are helpful or detrimental and what additional supports might be necessary. When user data are triangulated with direct observations and outcome data, the questions of “why” and “how” can be addressed. The case studies will provide researchers with important information about the design of these online applications. Site visits will also allow researchers to obtain detailed information about the implementation of each offering and about the support practices of schools, instructors, and students. This level of detail could not be collected through survey data.
Before conducting the site visits, the contractor will collect and review relevant documents (e.g., information about an implementation or site that is available online). The analytic process will begin as the contractor uses these documents in conjunction with the conceptual framework to generate initial hypotheses about the features, capabilities, and practices associated with each online learning application.
Analysis will continue during the site visits as researchers gather data and compare findings with the study’s conceptual framework. Two researchers will conduct each site visit, and throughout the visit the team will informally discuss their initial impressions about key features of the online learning implementation and the degree to which the emerging story matches or contradicts the study’s hypotheses. More formally, the site visitors will meet each day of the visit to go through the case study debriefing form and formulate preliminary responses.
Once each site visit is completed, site visitors will draft case study reports. Drafting such reports requires researchers to reduce their field notes to descriptive prose within the structure of a formal debriefing form. This translation of field notes to a case study report involves sorting all of the available data at each site (including interviews, observations, and document reviews) by topic areas that define the sections of the protocols and debriefing forms.
To facilitate the analysis of the qualitative data, ATLAS.ti qualitative data analysis software will be used to store, code, and organize all of the interview and observation data. ATLAS has advantages over other qualitative analysis software in that it was designed for online use by multiple coders and analysts. ATLAS will allow the data set to be structured by site or by offering (i.e., grouping data from the three related site visits for each offering). Using ATLAS, the contractor’s researchers will perform the initial coding of the data reporting templates for each school, using a coding scheme linked to the site visit protocols and developed before the first site visit. Additional codes corresponding to phenomena not previously accounted for will be developed and integrated into the coding scheme throughout the course of data collection. Using a variety of ATLAS-generated reports (e.g., code frequency counts, memo lists), researchers will examine the range of categories and begin answering this study’s evaluation questions. Staff have extensive experience with qualitative data coding, particularly using ATLAS.ti.
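The kind of code-frequency report mentioned above can also be reproduced outside the analysis software from an export of coded segments; the sketch below assumes hypothetical CSV column names and is not the ATLAS.ti API.

```python
# Sketch of a code-frequency tally over coded segments exported from the
# project's qualitative database. The CSV column names are assumptions for
# illustration; this is not the ATLAS.ti API.
import csv
from collections import Counter, defaultdict

def code_frequencies(path):
    """Count how often each code appears overall and per site."""
    overall = Counter()
    by_site = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            overall[row["code"]] += 1
            by_site[row["site_id"]][row["code"]] += 1
    return overall, by_site

# Hypothetical usage with an exported file of coded segments:
# overall, by_site = code_frequencies("coded_segments_export.csv")
# for code, count in overall.most_common(10):
#     print(code, count)
```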
The goal of the cross-site analysis is to compare, contrast, and synthesize findings and propositions from each case, in order to make general statements about the larger sample. The contractor will use a three-step hierarchical procedure for analyzing the data during the cross-site analysis.
Step 1: The cross-site analysis process will begin with a team debriefing meeting involving all site visitors. Debriefings of this type are an efficient means of developing themes that will govern cross-site analyses. This process will focus on general themes across the thirty cases, identifying general statements that address the evaluation questions on promising practices in online learning. These themes will be added as codes in the ATLAS coding structure.
Step 2: Once the data capture forms have been coded, major topics and themes will be divided among the researchers who will focus their attention on more in-depth analysis. Each analyst will have the entire set of case reports available in ATLAS. Analysts will conduct ATLAS queries on their assigned issues and topics to pull up relevant examples and descriptions across the thirty cases. They will be able to develop additional codes during this process and add them to the data set at this stage.
Step 3: Researchers will document the specific cases that account for interesting findings and pursue a particular theme by synthesizing important details, making explanatory judgments, and refining previous hypotheses and propositions. The researchers will draft relevant text that describes and explains the findings relevant to the cases for the final reports.
Once the cross-site analysis has been completed and researchers have interpreted and refined the findings from the case studies, the contractor will incorporate these findings into two final evaluation reports: one will focus on K-12 outcomes, and the other will focus on the findings from the teacher professional development case studies. These reader-friendly documents will provide research-based guidance to policymakers, administrators, and educators on how best to implement online learning for K-12 education and teacher professional development.
All data collection instruments will include the OMB expiration date.
No exceptions are requested.
1 The term “course” is used throughout this section for the sake of brevity. In actuality, the online learning offering under study could be an entire program or a supplemental resource rather than a single, stand-alone course.
2 As mentioned above, developer reports will be verified through independent assessment of application features and capabilities during both instructional observations and through independent interaction with the application by researchers.