Statement for Paperwork Reduction Act Submission
Part A: Justification
Program Performance Data Audits Project
Contract ED-04-CO-0049
Office of Planning, Evaluation, and Policy Development
A.1. Explanation of Circumstances That Make Collection of Data Necessary
The Need for Program Performance and Local Evaluation Audits
Overview of Program Performance Data Audits Project
Purpose for Collecting the Information
How the Information Will Be Collected
Staff Who Will Collect the Information
A.3. Use of Improved Information Technology to Reduce Burden
A.4. Efforts to Identify and Avoid Duplication
A.5. Efforts to Minimize Burden on Small Business or Other Entities
A.6. Consequences of Less-Frequent Data Collection
A.8. Federal Register Comments and Persons Consulted Outside the Agency
A.9. Payments to Respondents
A.10. Assurance of Confidentiality
A.11. Questions of a Sensitive Nature
A.12. Estimates of Respondent Burden
A.13. Estimates of the Cost Burden to Respondents
A.14. Estimates of Annualized Government Costs
A.15. Changes in Hour Burden
A.16. Time Schedule, Publication, and Analysis Plan
Time Schedule for Activities and Deliverables
Analysis Plans for Interview Data
A.17. Display of Expiration Date for OMB Approval
A.18. Exceptions to Certification Statement
Appendix A. Legislation Authorizing Program Performance Audit
Appendix B. Grantee, Contractor, and Local Evaluator Protocol—Discussion Guide
Table 1. Research Questions and Subquestions
Table 2. Total Number of Grantees for Programs Included in the Study
Table 3. Estimates of Grantee and Contractor Total Burden
Table 8. Average Data Entry Error Rate by Type of Grantee Data Quality Control Activity
This clearance request is submitted to OMB for the Office of Planning, Evaluation, and Policy Development’s (OPEPD’s) audit of grant program procedures for collecting, analyzing, and reporting performance and evaluation data. This request is necessary because OPEPD within the U.S. Department of Education (ED) has contracted with Decision Information Resources, Inc. (DIR) and Mathematica Policy Research, Inc. (Mathematica) to assess the procedures for collecting and reporting program performance and evaluation data for eleven ED grant programs. These audits and assessments will provide ED with insight into (1) whether the programs’ performance data are of high quality and the methods used to aggregate and report those data are sound; and (2) whether the local evaluations conducted by grantees (or their local evaluators) are of high quality and yield information that can be used to improve education programs.
This OMB submission requests approval for the use of interview protocols for collecting information from program grantees, their local evaluators, and program office contractors. All interview guides are designed to address the major research questions associated with this project. All other data used to address the audit’s research questions will come from sources that will not require OMB approval.
A.1. Explanation of Circumstances That Make Collection of Data Necessary

This section describes the need for program performance and local evaluation audits and gives an overview of the program performance and evaluation audits project.
The Need for Program Performance and Local Evaluation Audits

Federal legislation (see Appendix A) authorizing activities to improve the quality of elementary and secondary education programs1 recognizes the value of collecting and reporting high-quality performance measurement data and evaluation findings to help inform decisions. The U.S. Government Accountability Office (GAO) recognizes that program performance measures and program evaluation play key roles in the Program Assessment Rating Tool (PART), which the Office of Management and Budget (OMB) uses to examine federal programs. Additionally, the Government Performance and Results Act of 1993 (GPRA) recognizes and encourages a complementary role for performance measurement and program evaluation.2
However, applying performance measurement and program evaluation to the task of improving educational programs inevitably confronts real-world circumstances. Performance measurement is a complex and daunting challenge for federal agencies. As a term, “performance measurement” refers to the measurement of program inputs, outputs, intermediate outcomes, or end outcomes. Decisions about what to measure should reflect the intended use of the data for decision making and the relative priority of issues such as program efficiency, equity, and service quality.3 Beyond the challenging decision of what to measure, a successful performance measurement system depends on the accurate collection, tabulation, and analysis of the data or indicators used. Assessing the effectiveness of federal programs across all states and grantees requires federal agencies to develop clear and precise definitions to inform the collection of grantee data and to communicate these definitions effectively to the entities responsible for data collection. It also requires precise specifications for the calculations that grantees or the federal agency must perform, and those specifications must likewise be communicated effectively.
Risks to data quality increase as the layers of reporting become more numerous. OMB recognized in its 2008 Circular No. A-11 that agencies need to develop and implement techniques for validating and verifying the performance measurement data that they are using.4 Complying with that circular for the verification and validation of performance data requires agencies to design and implement thorough and effective data quality methods. Accordingly, there is a need for ED to conduct direct checks on the collection and reporting of performance data collected through their grant programs and to assess the quality and usefulness of these data.
Similarly, program evaluations can play an important role in improving educational programs. Evaluations and research conducted by grantees funded to implement educational programs have the potential to give substantial insight into the efficacy and effectiveness of programs as they are implemented in a variety of locations and conditions. However, challenges exist with the use of evaluations, and the actual value of these evaluations depends on the rigor of the research that is conducted as well as the reporting of the results.
Department grants often include requirements for local evaluations. The evaluations may be conducted by entities independent of grantees, referred to as local evaluators. These evaluations may include the collection of data required for reporting on the program’s performance measures, as well as the collection of other types of data at the discretion of grantees, including implementation, outcome, and impact data. Although requirements for the conduct of local evaluations are prevalent, little is known about how the data are collected and how the results of these evaluations are used by the program offices or by the individual grantees. Accordingly, there is a need for ED to examine the quality and use of local evaluation data in order to determine their usefulness in improving educational programs.
Overview of Program Performance Data Audits Project

The audits to be conducted through this contract will provide ED with insights into the strengths and weaknesses of the program performance and evaluation data collection, guidance, and reporting systems and provide recommendations for improving data quality and usefulness. The premise for this work is that accurately collected, reported, and aggregated data on grant program performance—that is, data that measure the appropriate performance dimensions of program goals—are necessary for ED to evaluate the quality and outcomes of its programs. The program performance data audits to be conducted for a subset of ED programs will specifically address the quality of the data collection, analysis, aggregation, and reporting systems that budget and program offices and grantees used to produce performance and evaluation data. The audit will be guided by the following research questions and their related subquestions, as defined by ED:
Research question 1. Are the data upon which programs measure performance of high quality and are the methods in use to aggregate and report on those data sound?
Subquestion 1a: What are the most common patterns, opportunities, and challenges associated with the flow, sequence, and processes for collecting and reporting high quality and useful program performance data?
Subquestion 1b: What are the key variations—across programs—in the flow, sequence, and processes for collecting and reporting high quality and useful program performance data?
Research question 2. Are the local evaluations conducted by grantees (or their local evaluators) yielding useful information?
Subquestion 2a: What are the most common patterns, opportunities, and challenges associated with the flow, sequence, and processes for collecting and reporting high quality and useful evaluation data?
Subquestion 2b: What are the key variations—across programs—in the flow, sequence, and processes for collecting and reporting high quality and useful evaluation data?
To address these research questions, the Program Performance Data Audit (PPDA) project will conduct three types of audits that focus on data quality for the programs that ED selected for review.
Data-Entry Audit. Grantees may submit their data to the program office electronically or on hardcopy. No matter how grantees submit them, the data must be transferred into the program’s or department’s aggregation system.5 The data-entry audit will assess the accuracy of the data in the aggregation system by comparing grantees’ reported data to the data in the aggregation system. In addition, we will perform “face validity” verification, the simplest type of data verification, as part of the data-entry audit. This type of verification uses the reviewer’s judgment and knowledge about the data being reviewed to assess the reasonableness of the data. The two most basic face validity tests ask (1) whether the data exceed reasonable upper or lower bounds and (2) whether any required data are missing. For example, the size of the participant population in a program or knowledge of past results could provide an upper bound when determining the face validity of data. If a grantee serves 1,000 participants, the reviewer can be certain that any subgroup of participants would be 1,000 or fewer. Any subgroup with more than 1,000 participants would fail the face validity verification test. The missing-value test is straightforward: if the grantee submits no data for a required data element, that element fails the missing-data test.
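To illustrate the logic of these two checks, the following is a minimal sketch in Python; the field names, the required-element list, and the participant ceiling are hypothetical and do not correspond to any specific program’s data elements.

```python
# Minimal sketch of the two basic face-validity tests described above.
# Field names, required elements, and the participant ceiling are hypothetical.

def face_validity_checks(record, total_participants):
    """Return a list of face-validity failures for one grantee record."""
    failures = []

    # Test 1: missing data -- every required element must be present.
    required_elements = ["participants_served", "participants_completing"]
    for field in required_elements:
        if record.get(field) is None:
            failures.append(f"{field}: missing value")

    # Test 2: reasonable bounds -- no subgroup count can exceed the total served.
    subgroup_fields = ["participants_completing"]
    for field in subgroup_fields:
        value = record.get(field)
        if value is not None and not 0 <= value <= total_participants:
            failures.append(f"{field}: {value} is outside 0-{total_participants}")

    return failures

# A grantee serving 1,000 participants reports 1,250 program completers.
print(face_validity_checks(
    {"participants_served": 1000, "participants_completing": 1250},
    total_participants=1000))
# -> ['participants_completing: 1250 is outside 0-1000']
```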
Data-Aggregation Audit. After all the grantees’ reported data are in the aggregation system, the data are aggregated to calculate program-level performance. To determine whether the aggregation system accurately aggregates the data, the data-aggregation audit will compare the results from the aggregation system to results calculated independently by the contractor conducting this audit. The contractor will use the same input data and the same methods6 as those in the aggregation system.
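As an illustration of the independent recalculation, the sketch below recomputes a program-level rate from grantee-level inputs and compares it with the figure reported by the aggregation system; the measure, input values, and field names are illustrative assumptions, not actual program data.

```python
# Hypothetical sketch of the data-aggregation check: recompute a program-level
# rate from the same grantee-level inputs and compare it with the figure
# reported by the aggregation system. All values are illustrative.

def recompute_program_rate(grantee_rows):
    """Combine grantee numerators and denominators into one program-level rate."""
    numerator = sum(row["completers"] for row in grantee_rows)
    denominator = sum(row["participants"] for row in grantee_rows)
    return numerator / denominator

grantee_rows = [
    {"completers": 80, "participants": 100},
    {"completers": 45, "participants": 60},
]
system_reported_rate = 0.78125   # value pulled from the aggregation system

independent_rate = recompute_program_rate(grantee_rows)
discrepancy = abs(independent_rate - system_reported_rate)
print(f"independent={independent_rate:.5f}  system={system_reported_rate:.5f}  "
      f"discrepancy={discrepancy:.5f}")
```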
Evaluation Audit. As part of the grants, program offices often include a requirement for grantees to conduct local evaluations. The evaluation audit will seek to determine whether grantees conducted evaluations and, if they did, whether those evaluations produced high quality reports. We will obtain completed evaluation reports when available. The presence or absence of formal evaluation reports will constitute one finding from this audit.
In addition, we have identified and adapted an evaluation quality checklist, entitled “Evaluation Report Checklist,” for reviewing grantees’ evaluation reports. This checklist will be used to assess whether grantees’ reports included key components of high quality evaluations. This checklist was developed by the Western Michigan University Evaluation Center (WMUEC) and draws upon the Program Evaluation Standards (Joint Committee on Standards for Educational Evaluation, 1994) and the What Works Clearinghouse standards.7 We have adapted this checklist to include assessments of the "appropriateness" of some of the evaluation items listed in the checklist.
We chose this checklist to adapt because it has been rigorously developed, peer reviewed (http://www.wmich.edu/evalctr/checklists/editorial-board/), and tailored specifically for the assessment of evaluation reports. Our search for checklists included a literature review and online searches, including the websites for the What Works Clearinghouse and WMUEC. We chose the "evaluation report" checklist from the 30 or more checklists that WMUEC developed (http://www.wmich.edu/evalctr/checklists/) as part of a project they started in 2001.
To conduct these audits and assess the processes that produced the performance measures included in the audits, we will use multiple data sources. These sources include in-depth interviews with ED staff, grantees, and local evaluators; review and verification of data tabulations; and reviews of evaluation reports.
Table 1 lists the data sources that will be used to address the two overarching research questions and their related subquestions.
Table 1. Research Questions and Subquestions

| Research Question and Subquestions | Document Reviews | Interviews with ED Program Office Staff, Budget Analysts, and Contractors | Interviews with Grantees and Local Evaluators | Data Entry and Aggregation Methods Review and Verification | Quality Assessments of Local Evaluation Reports |
|---|---|---|---|---|---|
| Research Question #1. Are the data upon which programs measure performance of high quality and are the methods in use to aggregate and report on those data sound? | | | | | |
| Subquestion 1a. What are the most common patterns, opportunities, and challenges associated with the flow, sequence, and processes for collecting and reporting high quality and useful program performance data? | | | | | |
| 1. What processes and procedures are used to select and define program performance measures? | X | X | X | | |
| 2. What type of guidance is provided by the program office to grantees to assist in the submission of program performance data? | X | X | X | | |
| 3. What type of support, both training and technical assistance, does the program office provide to assist grantees with the collection and submission of quality program performance data? | X | X | X | | |
| 4. What processes and procedures do grantees use to collect and submit program performance data to the program office? | X | X | X | | |
| 5. Are the data collected and submitted by grantees of sufficient quality to calculate performance measures and assess program performance? | | X | X | X | |
| 6. Do the data collected and submitted by grantees accurately capture the required and non-required program performance measures? | | X | X | X | |
| 7. What procedures are used by the program office to aggregate grantee data to produce program performance results? | X | X | | X | |
| 8. How are program performance results used by the program office and the grantees? | | X | X | X | |
| Subquestion 1b. What are the key variations—across programs—in the flow, sequence, and processes for collecting and reporting high quality and useful program performance data? | | | | | |
| 1. Are there differences in the processes for collecting, submitting, and reporting of program performance data across programs? | X | X | X | | |
| 2. Does variation in program performance processes result in different levels of program performance data quality and use? | | | | X | |
| Research Question #2. Are the local evaluations conducted by grantees (or their local evaluators) yielding useful information? | | | | | |
| Subquestion 2a. What are the most common patterns, opportunities, and challenges associated with the flow, sequence, and processes for collecting and reporting high quality and useful evaluation data? | | | | | |
| 1. What processes and procedures are used to identify and select non-GPRA program performance measures? | X | X | X | | |
| 2. What type of guidance is provided by the program office regarding how local evaluations should be designed and conducted? | X | X | X | | |
| 3. What type of support, both training and technical assistance, does the program office provide to assist grantees with the conduct of local evaluations? | X | X | X | | |
| 4. What processes and procedures do grantees use to collect and submit local evaluation data to the program office? | X | X | X | | X |
| 5. Are the data collected and submitted in local evaluation reports of sufficient quality to assess program performance? | | X | X | | X |
| 6. How are local evaluation results used by the program office and the grantees? | | X | X | | X |
| Subquestion 2b. What are the key variations—across programs—in the flow, sequence, and processes for collecting and reporting high quality and useful evaluation data? | | | | | |
| 1. Are there differences in the processes for collecting, submitting, and reporting of local evaluation data across programs? | X | X | X | | X |
| 2. Does variation in local evaluation processes result in different levels of local evaluation data quality and use? | | | | | X |
As indicated in Table 1, multiple data sources will be used to provide answers to the project’s research questions. However, only the grantee and local evaluator protocol and the program office contractor’s protocol (which is derived from selected modules of the program office protocol) will require OMB approval. In this section, we describe the purpose for collecting information from grantees and their local evaluators and program office contractors, how the information will be collected, and who will collect the information.
Purpose for Collecting the Information

The reasons for collecting information are as follows:
To inform our understanding of the methods and procedures used by the grantees to collect and report performance measures.
To increase our understanding of whether (and how) local evaluations are conducted and whether (and how, if conducted) the information from these evaluations is used.
To inform our understanding of the most common patterns, variations, and challenges associated with the data flow sequence for collecting and reporting program performance and evaluation data.
Information obtained from these interviews will be used to help ED assess the quality of the procedures used to produce program performance and evaluation data. This information will help ED target shortcomings in specific aspects of data quality, collection, calculation, aggregation, reporting, or dissemination; addressing those shortcomings can then improve the data available for decision making.
The questions included in the protocols solicit information in eight critical process areas associated with the production of performance measures:
A. Derivation, analysis, and reporting of GPRA8 and non-GPRA data
B. Provision of guidance regarding submission of performance data
C. Provision of training
D. Provision of technical assistance
E. Grantee collection and submission of data
F. Data quality checks and validation of grantee data
G. Aggregation of grantee data
H. Dissemination and use of program performance results
How the Information Will Be Collected

The contractor will schedule and conduct 60-minute telephone interviews (see Grantee, Contractor, and Local Evaluator Protocol—Discussion Guide in Appendix B) with program grantees and their local evaluators. We anticipate that three respondents will typically be interviewed: the grantee’s project director, the person chiefly responsible for completing annual performance reports, and the local evaluator. In cases where a single person plays more than one of these roles, the actual number of respondents participating will be fewer than three; however, we have estimated burden assuming three interviewees. The contractor will typically conduct these interviews in a group setting. Although we expect that different grantee staff will take responsibility for answering questions according to their roles, we will ask respondents to arrive at a consensus response to each question if initial disagreements arise. Interviewers will be trained to help respondents reach a unified response to a question.
The grantees to be interviewed will be selected from eleven programs identified by ED for inclusion in this study, using a sampling strategy developed for this task. The eleven programs, along with the total number of grantees for each program, are given in Table 2.
Table 2. Total Number of Grantees for Programs Included in the Study

| Program | Number of Grantees |
|---|---|
| Title III National Professional Development Grants | 138 |
| Literacy Through School Libraries | 57 |
| English Language Acquisition (ELA) Title III State Grants* | 52 |
| Voluntary Public School Choice | 14 |
| Equity Assistance Centers | 10 |
| ELA Native American Alaska Native Children in School Program | 9 |
| Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) | 209 |
| Perkins Title I – Basic State Grant | 53 |
| Perkins Title II – Tech Prep | 53 |
| OSERS Part B – State Grants | 59 |
| OSERS Part C – Infants and Toddlers | 59 |

* There are 56 ELA State grants, but 4 of them do not report and are excluded from the sample frame.
For nine of the eleven programs, the contractor will interview all grantees. For the two larger programs, Title III National Professional Development Grants (NPD) and Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), the contractor will interview a sample of grantees.
The contractor will also schedule and conduct 45-minute telephone interviews with contractors that the program office may have used to carry out specific program performance data tasks (e.g., data aggregation or performance measure calculation). Appendix B contains the potential modules and corresponding questions that we will administer to contractors. We anticipate one contractor for each program, and approximately two individuals at each contractor may have been involved with a given task. We will not attempt to interview these individuals separately but will conduct the interviews in a group setting similar to the grantee interviews. We therefore estimate that the total number of contractors to be interviewed will not exceed eleven and that the total number of individuals will not exceed 22.
Staff Who Will Collect the Information

Trained and experienced staff from the contractor (and its subcontractor) will conduct the interviews. All staff who will conduct the interviews are experienced qualitative interviewers who have contributed to other aspects of the project, including the development of the interview protocols.
A.3. Use of Improved Information Technology to Reduce Burden

The specific information sought through this collection can be obtained only through in-depth interviews, because the probing they involve—which requires training and active interviewer involvement—is critical to obtaining information that will inform our understanding of performance measurement data quality and of collection and reporting processes. However, before the interviews, the contractor will request that respondents make as much information as possible available electronically (for example, from websites and data files from ED program office staff) in order to reduce the time burden of the interviews. In addition, if the respondents agree, we will make digital recordings of all interviews to facilitate accurate capture of the conversations. Using digital recorders will reduce the need for the contractor to follow up with respondents after the initial interview in order to clarify what they stated when answering interview questions.
A.4. Efforts to Identify and Avoid Duplication

The information to be collected for this study does not currently exist in a systematic format. Efforts are being made to collect all available information from the program offices to avoid requesting information that respondents have already provided to ED.
A.5. Efforts to Minimize Burden on Small Business or Other Entities

No small businesses will be involved as respondents. However, some grantees may be schools or small organizations that could be classified as small entities. We will make every effort to schedule the interviews at times convenient to all respondents, including nonbusiness hours (that is, evenings and weekends), if that is most convenient to the respondents.
A.6. Consequences of Less-Frequent Data Collection

This submission includes interviews with grantees, local evaluators, and program office contractors, conducted once during the evaluation period. Without collecting these interview data, we will not be able to determine whether (1) the data upon which programs measure performance are of high quality and (2) the methods in use to aggregate and report on those data are sound. No other available data sources can confirm this information.
There are no special circumstances associated with this data collection; our request fully complies with the guidelines in Section 1320.5(d)(2).
A.8. Federal Register Comments and Persons Consulted Outside the Agency

A 60-day notice about the study was published in the Federal Register (Vol. 76, page 39394) on July 6, 2011. To date, no public comments have been received.
A.9. Payments to Respondents

No payments will be made to respondents.
A.10. Assurance of Confidentiality

We will make every effort to maintain the privacy and confidentiality of respondents in accordance with the Privacy Act (5 USC 552a), which covers the collection, maintenance, and disclosure of information from or about identifiable individuals.
All respondents included in the audit will be assured that the information they provide will be used only for the purpose of the audit and that the information obtained through this audit will be kept confidential to the extent provided by law.
To ensure data security, the contractor staff are required to and will adhere to strict standards of confidentiality as a condition of employment. All contractor staff will sign a confidentiality agreement that contains the following stipulations (see Appendix C for the agreement form):
I will not reveal the name, address, or other identifying information about any respondent to any person other than staff of Decision Information Resources, Inc. and Mathematica Policy Research, Inc. who are directly connected to the study.
I will not reveal the contents or substance of the responses of any identifiable respondent or informant to any person other than a member of the project staff, except for a purpose authorized by the project director or authorized designate.
I will not contact any respondent or informant except as authorized by a member of the project staff.
I will not release a dataset or findings from this project (including for unrestricted public use or for other unrestricted uses) except in accordance with policies and procedures established by the project director or authorized designate.
Additionally, no data that identifies the respondent will be shared with the program office or any other ED staff. Respondents will be randomly assigned a 3-digit ID for completion of the interview. Data will be shared outside of the immediate study team only through a separate restricted-use data file constructed by DIR. Only the randomly assigned ID will remain in the restricted-use data file; all other grantee identifiers, such as the ED grant number, will be removed in the construction of the file.
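A minimal sketch of this de-identification step appears below; the field names and example values are hypothetical, but the logic—unique randomly assigned 3-digit IDs and removal of direct identifiers such as the ED grant number—follows the description above.

```python
import random

# Hypothetical sketch of the de-identification step: assign each grantee record
# a unique, randomly drawn 3-digit ID and strip direct identifiers (such as the
# ED grant number) before building the restricted-use file.

def build_restricted_use_file(records, identifier_fields=("ed_grant_number", "grantee_name")):
    random_ids = random.sample(range(100, 1000), k=len(records))  # unique 3-digit IDs
    restricted = []
    for new_id, record in zip(random_ids, records):
        row = {key: value for key, value in record.items() if key not in identifier_fields}
        row["respondent_id"] = new_id
        restricted.append(row)
    return restricted

sample_records = [
    {"ed_grant_number": "T000A000000",  # illustrative placeholder, not a real grant number
     "grantee_name": "Example Grantee",
     "interview_date": "2012-01-15"},
]
print(build_restricted_use_file(sample_records))
# -> [{'interview_date': '2012-01-15', 'respondent_id': <random 3-digit ID>}]
```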
All analysis of grantee information will be aggregated to the project level. No results will be provided outside of the immediate study team that are specific to a grantee within a given program.
After the information has been analyzed and final reports developed, all data files will become the property of ED. Data will subsequently be destroyed in accordance with the rules and regulations specified by OMB.
A.11. Questions of a Sensitive Nature

The questions included on the data-collection instruments for this study do not involve sensitive topics. No personal information is requested.
A.12. Estimates of Respondent Burden

Table 3 presents our estimates of the reporting burden for the sample of grantee and contractor respondents that we will interview for each program. This is a one-time data-collection effort. The estimates given in Table 3 reflect the total hour burden for this collection. Time estimates are based on interviews that we conducted with three budget office staff using a protocol similar to the program office and grantee protocol. We will pilot test the grantee/local evaluator protocol with three grantees in order to gauge the accuracy of our burden estimates.
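Each dollar figure in Table 3 is the product of the number of interviews, the respondents per interview, the hours per respondent, and the estimated hourly wage; the short sketch below reproduces that arithmetic for three rows (dollar amounts in the table are rounded to whole dollars).

```python
# Arithmetic behind the Table 3 burden estimates:
#   total hours   = interviews x respondents per interview x hours per respondent
#   dollar burden = total hours x estimated hourly wage
rows = [
    # (program, interviews, respondents, hours each, hourly wage)
    ("Title III National Professional Development Grants", 102, 3, 1.0, 32.65),
    ("ELA State Grants", 52, 3, 1.0, 32.65),
    ("GEAR UP", 136, 3, 1.0, 32.65),
]

for program, interviews, respondents, hours, wage in rows:
    total_hours = interviews * respondents * hours
    total_dollars = total_hours * wage
    print(f"{program}: {total_hours:g} hours, ${total_dollars:,.2f}")
# Title III National Professional Development Grants: 306 hours, $9,990.90
# ELA State Grants: 156 hours, $5,093.40
# GEAR UP: 408 hours, $13,321.20
```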
Table 3. Estimates of Grantee and Contractor Total Burden

| Grant Program | Number of Grantees or Contractors to Interview (a) | Number of Respondents per Grantee or Contractor | Average Time per Respondent (Hours) | Total Respondent Burden (Hours) | Estimated Hourly Wage (Dollars) (b) | Estimated Total Burden Across All Respondents (Dollars) |
|---|---|---|---|---|---|---|
| Title III National Professional Development Grants | 102 | 3 | 1 | 306 | $32.65 | $9,991 |
| ELA State Grants | 52 | 3 | 1 | 156 | $32.65 | $5,093 |
| Equity Assistance Centers | 10 | 3 | 1 | 30 | $32.65 | $980 |
| ELA Native American Alaska Native Children in School Program | 9 | 3 | 1 | 27 | $32.65 | $882 |
| Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) | 136 | 3 | 1 | 408 | $32.65 | $13,321 |
| Perkins Title I – Basic State Grant | 53 | 3 | 1 | 159 | $32.65 | $5,191 |
| OSERS Part B – State Grants | 59 | 3 | 1 | 177 | $32.65 | $5,779 |
| OSERS Part C – Infants and Toddlers | 59 | 3 | 1 | 177 | $32.65 | $5,779 |
| Program Office Contractors | 11 | 2 | 0.75 | 16.5 | $32.65 | $538 |
| Totals | 491 | | | 1,456.5 | | |
Notes:
A.13. Estimates of the Cost Burden to Respondents

No annualized capital-startup costs or ongoing operation and maintenance costs are associated with collecting the information. Beyond their time, which is estimated in Table 3, respondents incur no other direct monetary costs.
A.14. Estimates of Annualized Government Costs

The total cost to the federal government for the Program Performance Data Audits project is $2,417,761. The average annual cost is $805,920 in each of FY 2010, FY 2011, and FY 2012. Included in the total is approximately $393,232 (in FY 2012) to be used for the data-collection activities for which clearance is currently being requested.
A.15. Changes in Hour Burden

This is considered a program change because this is a new collection.
A.16. Time Schedule, Publication, and Analysis Plan

We have developed tentative schedules for developing instruments, conducting interviews, and drafting reports.
Time Schedule for Activities and Deliverables

Table 4 displays the sequence of activities required to develop and conduct the grantee, local evaluator, and contractor interviews and includes key past and future dates for activities related to instrument design, data collection, analysis, and reporting.
| Deliverable | Due Date |
|---|---|
| Task 4—Conduct Interviews with Program Office Staff and Other Relevant Department Staff | |
| Draft program office and budget analyst interview protocols | 4/15/2010 |
| Final program office and budget analyst interview protocols | 4/22/2011 |
| Complete budget analyst interviews | 4/15/2011 |
| Complete program staff interviews | 5/27/2011 |
| Complete 8 contractor interviews | 8/30/2011 |
| Complete additional contractor interviews | 11/30/2011 |
| Program office/budget analyst interview updates | At least once every two weeks during the time period when the interviews are being conducted |
| Task 5—Develop Grantee Interview Protocol | |
| Draft grantee/local evaluator interview protocol | 3/21/2011 |
| Final grantee/local evaluator interview protocol | 4/29/2011 |
| Complete 61 (or 25% of) grantee interviews for 6 programs | 1/30/2012 |
| Complete 122 (or 50% of) grantee interviews for 6 programs | 2/28/2012 |
| Complete 195 (or 80% of) grantee interviews for 6 programs | 3/31/2012 |
| Task 11—Report Findings | |
| Program Performance Data Audit briefing materials and briefing | 5/30/2012 |
| Draft Program Performance Data Audit Final Report | 7/15/2012 |
| Local Evaluation Audit briefing materials and briefing | 5/30/2012 |
| Draft Local Evaluation Audit Final Report | 7/15/2012 |
| Task 13—Conduct Program Performance Data Audits for an Additional Cohort of Programs (OPTIONAL TASK) | |
| 1. Obtain all cohort 2 documents | 5/13/2011 |
| 2. Complete review of cohort 2 documents | 5/27/2011 |
| 3. Refine performance reporting criteria | 6/16/2011 |
| 4. Draft budget and program protocols | 4/15/2011 |
| 4. Final budget and program protocols | 4/22/2011 |
| 4. Complete budget analyst interviews | 6/29/2011 |
| 4. Complete program office interviews | 8/7/2011 |
| 4. Complete additional contractor interviews | 11/30/2011 |
| 5. Revise final sampling strategy memo | 12/7/2010 |
| 6. Conduct grantee and local evaluator interviews | 4/30/2012 |
| 7. Obtain data for 5 programs | 8/31/2011 |
| 8. Complete data entry checks for 3 of 5 programs | 9/30/2011 |
| 8. Complete data entry checks for remaining 2 of 5 programs | 10/15/2011 |
| 9. Data entry checks memo for 3 programs | 10/30/2011 |
| 9. Data entry checks memo for 2 programs | 11/15/2011 |
| 9. Final data entry checks memo for all programs | 12/15/2011 |
| 10. Complete data aggregation check | 12/30/2011 |
| 11. Aggregation check memo for 3 programs | 1/15/2012 |
| 11. Aggregation check memo for remaining 2 programs | 2/15/2012 |
| 11. Final aggregation check memo for all programs | 3/1/2012 |
| 12. Obtain evaluation reports | 2/28/2012 |
| 13. Complete evaluation quality assessments | 4/30/2012 |
| 14. Revised program performance report draft #2 | 10/15/2012 |
| 15. Revised evaluation audits report draft #2 | 10/15/2012 |
| 16. Prepare and submit drafts of Combined Findings final report | 11/30/2012 |
| 17. Submit final data file | 12/30/2012 |
Analysis Plans for Interview Data

We will analyze data from all of the data sources, including the grantee and local evaluator interviews and the program office contractor interviews covered by this OMB request, to help assess (1) whether the programs’ performance data are of high quality and the methods used to aggregate and report on those data are sound and (2) whether the local evaluations conducted by grantees (or their local evaluators) yield useful information. Our analysis will also identify the most common patterns, variations, opportunities, and challenges associated with the data flow sequence for collecting and reporting program performance and evaluation data. The results from the analysis of the interview data will augment information obtained from other study activities, such as (1) interviewing budget and program office staff, (2) verifying the entry and aggregation of program performance data with the contractor, and (3) reviewing evaluation reports.
Initially, program performance data will be analyzed to report respondents’ perceptions of the level of risk (potential sources of error or threats to data quality) in each of the following categories:
Data element specifications, calculation, and design
Grantee reporting, including tests for accuracy
Program office processing of reported data
Program office analysis and dissemination of performance data
Data derived from the grantee and local evaluator protocol will be tabulated to provide descriptive frequencies of all responses and used to understand grantee perceptions of (1) the guidance that they received from ED, (2) any technical assistance that they received, and (3) the processes and procedures for collecting, calculating, reporting, and using the program performance data that they are required to submit. By analyzing interview data, we can learn about grantees’ specific issues and their perspectives on each program’s efforts to standardize performance reporting systems and to provide instructions for them.
The identification of common patterns and notable variations will rely primarily on descriptive techniques. Using data obtained from the protocols included in this request and information obtained from the other sources indicated above (see Table 1), we will produce descriptive information that will allow us to identify program performance data activities that are conducted by most grantees as well as those that appear to be unique to a given program. Examples of the type of descriptive information that would be produced from these analyses are given in Tables 5 to 9 below.
Table 5. Number of Grantees Receiving Submission Guidance for the Collection and Submission of Performance Data by Program Performance Reporting Method

Number of Grantees Receiving Submission Guidance (N=604)

| Type of Reporting Method | GPRA Only | Non-GPRA Only | Both GPRA and Non-GPRA | No Guidance Received |
|---|---|---|---|---|
| 524B Form | | | | |
| State Electronic System | | | | |
| Other Method | | | | |
Table 6. Number of Grantees Receiving Training for the Collection and Submission of Performance Data by Program Performance Reporting Method

Number of Grantees Receiving Training (N=604)

| Type of Reporting Method | GPRA Only | Non-GPRA Only | Both GPRA and Non-GPRA | No Training Received |
|---|---|---|---|---|
| 524B Form | | | | |
| State Electronic System | | | | |
| Other Method | | | | |
Table 7. Number of Grantees Encountering Problems with Reporting Accurate Program Performance Data by Program Performance Reporting Method

Number of Grantees Encountering Problems (N=604)

| Type of Reporting Method | GPRA Only | Non-GPRA Only | Both GPRA and Non-GPRA | No Problems Encountered |
|---|---|---|---|---|
| 524B Form | | | | |
| State Electronic System | | | | |
| Other Method | | | | |
Table 8. Average Data Entry Error Rate by Type of Grantee Data Quality Control Activity

| Type of Grantee Data Quality Activity | Average Error Rate (N=604) |
|---|---|
| Data Quality Checks Only | |
| Data Validation Only | |
| Both Data Quality Checks and Data Validation | |
| Neither Data Quality Checks nor Data Validation | |
Table 9. Number of Grantees Who Used Program Performance Results by Type of Use and Type of Reporting Method

Type of Reporting Method (N=604)

| Use of Program Performance Results | 524B Form | State Electronic System | Other Method |
|---|---|---|---|
| Administrative Use Only | | | |
| Program Assessment Only | | | |
| Both Administrative and Program Assessment | | | |
| Neither Administrative nor Program Assessment | | | |
In addition to examining the data for all grantees (N=604) and identifying common patterns across all eleven ED programs, we will produce similar tables for each of the eleven programs. Information such as this will help inform our understanding of how programs using similar reporting methods (such as those that use the 524B reporting system) may differ with regard to the data flow sequence. Tables 10 and 11 provide examples.
Table 10. Percent of Grantees Receiving Submission Guidance for the Collection and Submission of Performance Data Using the 524B Submission Method by Program and Type of Guidance

Percent Receiving Guidance (N=280)

| Type of Guidance | Program 1 | Program 2 | Program 3 | Program 4 | Program 5 | Program 6 |
|---|---|---|---|---|---|---|
| GPRA Only | | | | | | |
| Non-GPRA Only | | | | | | |
| Both GPRA and Non-GPRA | | | | | | |
| No Guidance Received | | | | | | |
Table 11. Average Data Entry Error Rate Using the State Electronic Reporting System by Program and Type of Grantee Data Quality Activity

Average Error Rate (N=276)

| Type of Grantee Data Quality Activity | Program 1 | Program 2 | Program 3 | Program 4 | Program 5 |
|---|---|---|---|---|---|
| Data Quality Checks Only | | | | | |
| Data Validation Only | | | | | |
| Both Data Quality Checks and Data Validation | | | | | |
| Neither Data Quality Checks nor Data Validation | | | | | |
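The shell tables above would be filled from tabulations of the coded interview responses. As a minimal illustration of that step, the sketch below cross-tabulates hypothetical coded responses by reporting method and type of guidance, in the shape of Table 5; the response records are invented for illustration only.

```python
import pandas as pd

# Minimal sketch of filling a shell table such as Table 5: cross-tabulate coded
# interview responses by reporting method and type of guidance received.
# The response records below are hypothetical.

responses = pd.DataFrame([
    {"reporting_method": "524B Form", "guidance": "GPRA Only"},
    {"reporting_method": "524B Form", "guidance": "Both GPRA and Non-GPRA"},
    {"reporting_method": "State Electronic System", "guidance": "No Guidance Received"},
    {"reporting_method": "Other Method", "guidance": "Non-GPRA Only"},
])

table_5_counts = pd.crosstab(responses["reporting_method"], responses["guidance"])
print(table_5_counts)
```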
A.17. Display of Expiration Date for OMB Approval

The Office of Planning, Evaluation and Policy Development is not requesting a waiver for the display of the OMB approval number and expiration date on the data-collection instruments. All data-collection instruments will display the expiration date for OMB approval.
A.18. Exceptions to Certification Statement

This submission does not require an exception to the certification statement for Paperwork Reduction Act submissions (5 CFR 1320.9).
Appendix A. Legislation Authorizing Program Performance Audit

The Elementary and Secondary Education Act, as reauthorized by the No Child Left Behind Act (NCLBA) of 2001, Title V, Part D, section 5411(a) authorizes support for activities to improve the quality of elementary and secondary education programs. The Program Performance Data Audits project is an allowable activity according to that section and is being conducted to ensure that high-quality data are available to inform decisions about programs designed to improve the quality of elementary and secondary education. Work conducted under this contract will be monitored by the Office of Planning, Evaluation and Policy Development.
In addition to the NCLBA, the Government Performance and Results Act (GPRA) of 1993 requires that all agencies develop performance plans for every program in an agency’s budget (section 1115). These plans must express performance goals in an objective and measurable form. The GPRA also requires performance reports from every program in each agency budget (section 1116). These reports must cover three years of performance results by the year 2002 and report on whether a program has met the performance goals in its performance plan. In addition, each goal that is not met by a program must have an explanation and a plan for improvement as part of the program performance report.
Other legislation, in addition to NCLBA, Title V, Part D, section 5411(a), that authorizes the programs we are auditing includes the following:
Civil Rights Act, Title IV (Equity Assistance Centers)
Elementary and Secondary Education Act, Title III, section 3001 (National Professional Development Grants and Native American Alaska Native Children in School programs)
Elementary and Secondary Education Act, Title V, Part B, Subpart 3; 20 U.S.C. 7225-7225g (Voluntary Public School Choice)
Elementary and Secondary Education Act, Title I, Part B, Subpart 4, Sec. 1251; 20 U.S.C. 6383 (Improving Literacy Through School Libraries).
Elementary and Secondary Education Act, Title III, Secs. 3111–3141; 20 U.S.C. 6821–6871 (English Language Acquisition State Grants).
Appendix B. Grantee, Contractor, and Local Evaluator Protocol—Discussion Guide

This appendix contains a description of how the protocol will be administered and the areas of assessment that will be covered.
Part 1 will identify your knowledge about and/or the role(s) you have played in eight tasks (listed below) associated with the data flow sequence of the 2008–2009 GPRA and non-GPRA performance measures, including evaluation data.
A. Derivation, analysis, and reporting of GPRA and non-GPRA data
B. Provision of guidance regarding submission of performance data
C. Provision of training
D. Provision of technical assistance
E. Grantee collection and submission of data
F. Data quality checks and validation of grantee data
G. Aggregation of grantee data
H. Dissemination and use of program performance results
First, we will define each of the eight tasks and ask if you or a contractor have had any knowledge about or role in these eight tasks. If you indicate that you did have knowledge about or a role in a particular task(s), we will ask you questions in a specific module(s) for that task(s) in Part 2. If you indicate that you did not have knowledge about or a role in a particular task(s), we will ask you: 1) who we should talk to in order to gain insight into how this task(s) was performed and 2) how we can contact this person.
In addition, we will ask if you had knowledge about or were involved with any other tasks that were not included in tasks A through H. If you indicate “yes”, we will ask you to describe this other task in detail so that we can ensure that it is not already covered by questions in one of the eight modules in Part 2. If it is determined that the other described task is legitimately different from tasks currently listed, then we will ask you to share your knowledge and role as it relates to this unlisted task.
For the derivation, analysis, and reporting of GPRA and non-GPRA data (Task A), we will discuss:

A1. Description of each GPRA and non-GPRA measure your organization collected
A2. Process for determining what GPRA and non-GPRA performance data your organization collected
A3. Evaluation plan for GPRA and non-GPRA data
A4. Analysis of GPRA and non-GPRA data
A5. Use of GPRA and non-GPRA results
A6. Development of written report and who received it
A7. Changes in GPRA and non-GPRA data that your organization collected since the 2008–2009 reporting cycle
The provision of guidance regarding submission of GPRA and non-GPRA performance measures, including evaluation measures, is defined as the process by which instructions, policy information, and definitions were provided to grantees on how to submit performance data for the 2008–2009 reporting cycle. We will discuss:
B1. Description of guidance
B2. Methods by which guidance was provided
B3. People within your organization who received the guidance
B4. When guidance was provided during the 2008–2009 program cycle
B5. Problems interpreting the 2008–2009 performance measure guidance provided by the program office
B6. Usefulness of guidance provided by program office
B7. Changes in the types of guidance provided by the program office or the methods by which they were provided since the 2008–2009 reporting cycle
Provision of training is defined as training events, classes, or materials that were designed to help grantees with their reporting of GPRA and non-GPRA program performance data, including evaluation data, for the 2008–2009 reporting cycle. We will discuss:
C1. Description of trainings provided
C2. Primary mode used to provide the training (conferences, webinars, etc.)
C3. When trainings occurred during the 2008–2009 data reporting cycle
C4. Who within the grantee organizations received the training
C5. Usefulness of training
C6. Changes in the type of training(s) provided or the procedures by which training(s) were implemented or provided since the 2008–2009 reporting cycle
Provision of technical assistance (TA) is defined as any materials or activities that were used to assist grantees in their efforts to report GPRA and non-GPRA program performance data, including evaluation data, during the 2008–2009 reporting cycle. TA differs from training in that it occurs when grantees seek assistance with specific program reporting issues they may be experiencing and in that it is usually provided to one grantee at a time. We will discuss:
D1. Description of TA provided by the program office
D2. Primary mode used to provide the TA
D3. When the TA occurred during the 2008–2009 data reporting cycle
D4. Who within the grantee organization received the TA
D5. Usefulness of TA provided by the program office
D6. Changes in the type of TA provided or the procedures by which TA was implemented or provided since the 2008–2009 data reporting cycle
Grantee collection and submission of data is defined as the processes that grantees used to collect and report GPRA and non-GPRA program performance data, including evaluation data, to the program office or its contractor during the 2008–2009 data reporting cycle. We will discuss:
E1. Processes grantees used to collect data to be used for the calculation and submission of 2008–2009 performance data
E2. Processes grantees used to submit performance data to the program office
E3. Availability—prior to report deadline—of data your organization needed to report on 2008–2009 performance
E4. Extent of time it took to review 2008–2009 performance results and to correct any problems before submitting them
E5. Whether your organization submitted performance results on time
E6. Problems encountered in collecting data used for the performance reports for the 2008–2009 reporting cycle
E7. Problems encountered in calculating data used for the performance reports for the 2008–2009 reporting cycle
E8. Problems encountered in reporting accurate performance results for the 2008–2009 reporting cycle
E9. Feedback provided by the program office with regard to 2008–2009 collection and submission process
E10. Changes in the procedures to collect and submit data since the 2008–2009 reporting cycle
Data quality checks and validation of grantee data are defined as any procedures—during the 2008–2009 reporting cycle—that the program office or its contractor had in place to check that the data they received: 1) was of sufficient quality to calculate all the performance measures and 2) accurately captured the program performance measures required for grantees. We will discuss:
F1. Who assessed the quality and consistency of your 2008–2009 performance data
F2. Challenges with regard to data edits, data cleaning, or any other automated processes used to assess the accuracy of your 2008–2009 performance data
F3. Challenges with regard to the validation of participant level data that was used to calculate and report 2008–2009 performance data
F4. Changes in the procedures or activities used to assess the reliability or validity of the performance data since the 2008–2009 reporting cycle
Aggregation of grantee data is defined as any procedures the program office or its contractor used during the 2008–2009 reporting cycle to combine performance results, including evaluation results, from all grantees into a single result for the entire program. We will discuss the following topic areas:
G1. Description of procedures used to aggregate grantee data to the program level
G2. The extent to which the aggregated performance results provided the information that the program office needed to assess 2008–2009 program performance
G3. Description of how aggregation procedures were verified or tested for the 2008–2009 results
G4. When aggregation processes occurred
G5. Problems the program office experienced with regard to the aggregation of grantees’ 2008–2009 data
G6. Changes to the aggregation procedures (or timing and testing of them) since the 2008–2009 reporting cycle
Dissemination and use of program performance results are defined as the process by which the program office or its contractor distributed the performance results of the program for the 2008–2009 reporting cycle. This includes the policy context and background information that were provided along with program results. We will discuss:
H1. Program office contacts regarding your reported 2008–2009 performance results
H2. Receipt of aggregated 2008–2009 performance reports for your program
H3. Use of the analysis of the 2008–2009 performance data
H4. Changes in the use of the program office’s or your organization’s performance results since 2008–2009 reporting cycle
H5. Changes in the procedures or activities associated with generating performance results since the 2008–2009 reporting cycle
Other tasks are defined as tasks—associated with the 2008–2009 performance measures—that involved activities and processes not described in the eight data tasks we previously defined. We will discuss:
I1. Person(s) responsible for conducting task
I2. Description of activities to conduct this task
I3. When this task occurred during the 2008–2009 data reporting cycle
I4. Problems experienced with regard to conducting this task for the 2008–2009 performance reporting cycle
I5. Changes associated with conducting this task since the 2008–2009 performance reporting cycle
1. Do you have any questions that you would like to ask us?
2. Do you have any other suggestions or ideas to add that may inform us or lead to improvements in the flow sequence of performance measures and evaluation data?
Appendix C. Confidentiality Agreement

I, ________________________________, understand and agree to the following terms of this Agreement in consideration of my being granted access to certain U.S. Department of Education (ED) information and information systems—which contain certain sensitive but unclassified information—in order to carry out my duties as an employee of Decision Information Resources under Contract No. ED-04-CO-0049 PPDA.
I will not reveal the name, address, or other identifying information about any respondent to any person other than staff of Decision Information Resources, Inc. and Mathematica Policy Research, Inc. who are directly connected to the study.
I will not reveal the contents or substance of the responses of any identifiable respondent or informant to any person other than a member of the project staff, except for a purpose authorized by the project director or authorized designate.
I will not contact any respondent or informant except as authorized by the project director or authorized designate.
I will not release a dataset or findings from this project (including for unrestricted public use or for other unrestricted uses), except in accordance with policies and procedures established by the project director or authorized designate.
Signature __________________________________________________
Date ______________________________________________________
1 Elementary and Secondary Education Act, as reauthorized by the No Child Left Behind Act of 2001, Title V, Part D, section 5411(a).
2 Performance Measurement and Evaluation: Definitions and Relationships. GAO-05-739SP, U.S. Government Accountability Office. May 2005.
3 Newcomer, Kathryn (1997). “Using Performance Measurement to Improve Programs” in Using Performance Measurement to Improve Public and Nonprofit Programs, ed. Kathryn E. Newcomer. New Directions for Evaluation. 75, p.7.
4 “Preparation, Submission and Execution of the Budget,” Circular No. A-11, Office of Management and Budget, 2008, Retrieved from http://www.whitehouse.gov/omb/assets/a11_current_year/a_11_2008.pdf.
5 A data aggregation system can come in many forms, ranging from one or more spreadsheets to a fully automated reporting system that integrates reporting, aggregation, and data analysis for management.
6 These methods will come from data-element and report specification documents or from interviews with the program office staff and, where applicable, their contractors.
7 http://ies.ed.gov/ncee/wwc/pdf/wwc_version1_standards.pdf.
8 Government Performance and Results Act data, which is defined as data that grantees are required to submit to the program office.