Appendix B PREIS_Impact Report Guidance


Formative Data Collections for ACF Program Support


OMB: 0970-0531


PREIS Impact Report Guidance

A key goal of the Family and Youth Services Bureau (FYSB) Personal Responsibility Education Innovative Strategies (PREIS) Program is to ensure that rigorous evidence on program effectiveness contributes to the knowledge base on adolescent pregnancy prevention.

This document guides you on structuring a final report to be comprehensive and accessible. The goal of the final report is to document your evaluation and share the findings with a public audience. To save time when completing this template, many of the report sections draw directly from the material prepared for the evaluation abstract and the Impact and Program Implementation Evaluation Analysis Plans. You can use text from those sources to simplify report writing. Here, we provide an annotated outline that contains the following guidance for each section: (1) “Purpose” describes the purpose of the section and what you should discuss, (2) “Instructions and reminders” discusses things for you to keep in mind when writing the section, (3) “Potential sources” lists existing documents that might be potential sources for you to write the section, and (4) “Non-text elements” guides whether non-text elements (tables or figures) could be useful.

Write your report using the accompanying template, which we set up to make the production and 508-compliance process easier and facilitate review by FYSB and the RETA team.1 An attached Word file (PREIS_Impact Report Template.docx) provides an outline of the report with places for you to fill in each section. When filling out the template, please use complete sentences. A separate attached table shell file (PREIS_Impact Report Tables.docx) provides some required and optional table shells for you to use and paste into the outline file as you write the report. Using these shells will enable FYSB to more quickly make the reports 508 compliant. Both files are also saved on SharePoint in the Templates folder.

Here are some additional suggestions for your impact report:

  • Organize the final report so that it is about 30 to 40 double-spaced pages (15 to 20 single-spaced pages), excluding tables, figures, references, and appendices.

  • Write the report for all audiences, not just other researchers. Write as if the audience has not been involved in the grant and knows nothing about the program or the evaluation. The report should provide enough detail for readers to understand the program and its evaluation, and it should be free of project- or program-specific jargon and abbreviations.

  • Reach out to your RETA with questions about the report guidance or your approach as you begin to work on your report. Getting questions resolved early in the process will simplify the review process at the end of the grant period.

  • Please submit a report that you believe is ready for external review, not a rough draft. Ideally, multiple people will have edited and read the report before you submit it, to minimize the editorial comments your Federal Project Officer (FPO) and RETA will have to provide. Their goal is to focus on content and technical details rather than formatting and editorial changes.

Please email your final report as Word files (not PDF) to your FPO and copy your RETA liaison by [due date]. For consistency, please use this common naming convention when submitting your report: [Grantee Name] Impact Report_ [report draft date]. Your FPO and RETA liaison will review the final report, provide comments and suggested edits, and return it to you for revisions. Your FPO must approve your final report by the end of your grant period.

Cover Page

The cover page for your report should include the title of the report and all authors.

Disclose any conflicts of interest—financial or otherwise—on the cover page. For an example of how to identify a conflict of interest, please see the website of the International Committee of Medical Journal Editors. For example, if someone on the evaluation team received a grant from an organization included in the evaluation, you could include language such as “[insert name] reports receipt of a grant from [organization name] during the evaluation period.” Please note that an evaluation team that is not completely independent of the program team (that is, the two are not separate organizations with completely separate leadership and oversight) constitutes a conflict of interest that you must document.

Finally, the cover page should include the attribution to FYSB:

This publication was prepared under Grant Number [Insert Grant Number] from the Family and Youth Services Bureau within the Administration for Children and Families (ACF), U.S. Department of Health and Human Services (HHS). The views expressed in this report are those of the authors and do not necessarily represent the policies of HHS, ACF, or the Family and Youth Services Bureau.



Evaluation abstract

Purpose

Provide a one- to two-page executive summary of the final report.

Instructions and reminders

Please complete the abstract in the outline template. You can copy and paste information for most of the fields from your most recently approved abstract, although you should remember to update the abstract if anything has changed since its approval. There are some additional fields to include, specifically the Methods and Findings sections. You can find your most recently approved evaluation abstract on SharePoint in your Abstract folder.

Potential sources

  • Evaluation abstract (first submitted in fall 2018, potentially resubmitted with the Impact and Program Implementation Evaluation Analysis Plan)

Non-text elements

None

I. Introduction

A. Introduction and study overview

Purpose

Orient the reader to the study.

Instructions and reminders

In this section, explain (1) the need for teen pregnancy prevention for the particular population (defined by locality, age, gender, race/ethnicity, etc.) studied; (2) the rationale for selecting your program and how the evaluation fits within the FYSB PREIS grant program; (3) previous research describing the effects of the program, including, if applicable, how prior findings were assessed by the Evidence Review; and (4) that this report describes the implementation and impact of the PREIS-funded program. The reader should understand why the program was targeted to certain youth and the motivation for selecting the chosen program.

Potential sources

  • PREIS Funding Opportunity Announcement

  • Grant application

  • FYSB website

  • Youth.gov

Non-text elements

None

B. Primary research questions

Purpose

Articulate the key research questions about the impacts of the program on sexual behavior outcomes of youth.

Instructions and reminders

This section should present the primary research questions. Remember that the primary research questions should focus on the impact of the intervention (that is, the specific aspects of your program that you test in the evaluation) on at least one behavioral outcome measure. All primary research questions should focus on impacts at a specific time point. The behavioral outcomes and time points should clearly connect to the program’s theory of change. Your research questions should be drawn from previously approved documents (including your evaluation design plan or impact and program implementation evaluation analysis plan), although you may need to make some changes to those if anything has changed.

Potential sources

  • Evaluation design plan

  • Impact and program implementation evaluation analysis plan

Non-text elements

None

C. Secondary research questions

Purpose

Outline additional, important research questions about other potential outcomes of your program.

Instructions and reminders

This section should present the secondary research questions. Secondary research questions would include explorations of other outcomes that the intervention might influence or other justifiable explorations of program effectiveness (for example, whether the program works better for certain populations). These research questions might focus on one of several possible types of outcomes:

  1. Impacts on outcomes considered precursors to the evaluation’s primary behavioral outcomes, such as self-efficacy for using condoms or knowledge about sexually transmitted infections. These might be precursors to behavioral outcomes such as condom use during recent sexual activity, in that youth with higher self-efficacy and more knowledge might be more likely to use condoms in practice.

  2. Impacts on other behavioral outcomes not considered to be the primary, intended outcomes of the program, such as substance use, impulsive behavior, or school attendance

  3. Impacts for specific subgroups of people, such as for female participants or youth who had not had sex before enrolling in the study

  4. Impacts on primary research question behavioral outcomes at different time points, such as immediately after the end of the program or six months later

  5. Impacts on outcomes related to your Adulthood Preparation Subjects (APS) program content, not covered by the above, such as financial literacy

  6. Non-experimental exploration of how the program’s core components influence adolescents’ outcomes, such as dosage analyses or analysis of outcomes related to session-specific content.


As with the primary research questions, all the secondary research questions should focus on outcomes at a specific time point and should align with the program’s theory of change. For instance, if your theory of change posits that knowledge should change immediately following the end of the program, you should examine the impact on knowledge outcomes at your immediate post-program survey. You may expect to see impacts on some outcomes (like substance use or impulsive behavior) over a longer period of time, so you would want to examine those outcomes at the short- or long-term follow-ups.

NOTE: If your analysis plan included additional analyses beyond impact analyses, you can include them in a section I.D.: Additional analyses.

Grantees might also have research questions for the implementation study (for example, questions about adherence to the program model or youth attendance). These research questions should go in the implementation evaluation section (section IV below) and not this section. You can add a general statement about your implementation questions (for example, that you also assessed program dosage and quality to understand program impacts) and then refer readers to section IV for details.

Potential sources

  • Evaluation design plan

  • Impact and program implementation evaluation analysis plan

Non-text elements

None

II. Programming for treatment and comparison groups

Provide a one- or two-paragraph overview of this section highlighting that it will summarize the intended program and comparison conditions and that section V.A will describe what each group actually received.

A. Description of program as intended

Purpose

Summarize the program being tested.

Instructions and reminders

This section should describe the intervention condition as intended (or what the intervention group was supposed to receive). The implementation results section (Section V.A) should discuss what the intervention group actually received in this study.

Discuss (1) program activities; (2) program content; (3) expectations for implementation, including the implementation location or setting (for example, schools or clinics), duration and dosage, and staffing; and (4) theory of change or logic model for the program, including the expected outcomes. Include a discussion of the APS your program covered, how you intended to cover them, and how you selected them to be a focus of your program. A graphical representation of your logic model or theory of change is required as Appendix A in the report.

Potential sources

  • Evaluation abstract

  • Evaluation design plan

  • Grant application (for logic model, conceptual model, or theory of change models)

Non-text elements

Readers often find it useful when you present information about the program description in a table (for example, with rows for target population, program activities, program content, planned mode of delivery, and so on). Your logic model must be included in Appendix A. Be sure the logic model includes the APS content and outcomes.

B. Description of comparison condition as intended

Purpose

Summarize the counterfactual being examined.

Instructions and reminders

This section should describe the comparison condition as intended (or what the comparison group was supposed to receive). The implementation results section (Section V.A) should discuss the actual comparison group experience for the study.

If the comparison group was intended to receive services as usual, briefly describe what services or programs were available to youth in your service area that you expected youth in the comparison group might access.

If the comparison group received a defined program, describe what the program offered and how it was intended to be implemented. The description of the condition should include the elements listed previously for the program description but does not need to discuss theory of change.

If the intervention and comparison conditions share common elements, please highlight the similarities and differences in services received in the text, and consider adding a table to help the reader compare the content.

Potential sources

  • Evaluation abstract

  • Evaluation design plan

Non-text elements

Table differentiating the intervention and comparison conditions (if applicable)

III. Impact evaluation design

Provide a one- or two-paragraph overview of this section highlighting that it will summarize the evaluation design, sample, data collection, and analytic methods for the impact evaluation.

A. Identification and recruitment of the study participants

Purpose

Describe the study setting, context, and eligibility criteria.

Instructions and reminders

This section should describe the study’s setting and context, the study’s target population, and how you identified and recruited the sample for the study (both youth and clusters, as appropriate), including any eligibility criteria. Include information on the success of the recruitment process—that is, the number of primary units (for example, schools) contacted and number recruited. This discussion should focus on the eligible and enrolled sample (for example, the 3,000 youth enrolled in 9th grade health classes in two school districts). Section III.E will focus on the final analytic samples.

Potential sources

  • Impact and program implementation evaluation analysis plan

  • Evaluation abstract

  • Final CONSORT diagram

Non-text elements

None

B. Research design

Purpose

Summarize the research design used to assess the effectiveness of the intervention.

Instructions and reminders

This section should clearly identify the design you used to test program effectiveness. Is it a randomized controlled trial (RCT)? If so, did it involve individual or cluster random assignment? Is it a quasi-experimental design (QED)? How did you form the study groups?

Please indicate any limitations of the design or how it was implemented. For example, were there concerns about knowledge of study assignment during the consent process (if consent was gathered after random assignment in a clustered RCT) or concerns that different types of schools volunteered for intervention versus comparison in a QED? Consider addressing relevant issues that you have discussed with your FPO and RETA over the course of the study.


Note on using matching: Following best practice in the field, if your study design is an RCT with high attrition at the unit of assignment, and there is a lack of baseline equivalence in the analytic sample, you should use a matching analysis to create a more rigorous comparison group. In this scenario, you should discuss the fact that the study was originally an RCT, but, because of high attrition and lack of baseline equivalence (which you will document in an appendix), the study had to construct matched comparison groups. If you need assistance with attrition calculations or determining whether you should use matching, please reach out to your RETA.

Similarly, if the study is a QED with a lack of baseline equivalence in the analytic sample, the study should use a matching analysis to better account for the baseline differences in the groups and create a more rigorous comparison group.

In these scenarios, present the matching analysis as the main or benchmark analysis in the report, with additional specifications as sensitivity analyses. Include any additional details regarding the matching analysis or intent-to-treat specification in an appendix.
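
For illustration only, the following Python sketch shows one possible way to construct a matched comparison group using 1:1 nearest-neighbor propensity score matching. The data, variable names, and matching choices (matching with replacement, no caliper) are hypothetical and do not represent a required or endorsed specification; consult your RETA about the matching approach appropriate for your study.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data for illustration only; all variable names are hypothetical.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "age": rng.normal(15.0, 1.0, n),
    "female": rng.integers(0, 2, n),
    "ever_sex_baseline": rng.integers(0, 2, n),
})

# Step 1: Estimate propensity scores with a logistic regression of treatment
# status on baseline characteristics.
covariates = ["age", "female", "ever_sex_baseline"]
X = sm.add_constant(df[covariates])
df["pscore"] = sm.Logit(df["treatment"], X).fit(disp=False).predict(X)

# Step 2: For each intervention youth, select the comparison youth with the
# closest propensity score (1:1 matching with replacement).
treated = df[df["treatment"] == 1]
comparison = df[df["treatment"] == 0]
matches = [(comparison["pscore"] - p).abs().idxmin() for p in treated["pscore"]]
matched_sample = pd.concat([treated, comparison.loc[matches]])

# Step 3: Re-assess baseline equivalence on the matched sample before
# estimating impacts.
print(matched_sample.groupby("treatment")[covariates].mean())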

Potential sources

  • Impact and program implementation evaluation analysis plan

  • Evaluation abstract

  • Matching brief

Non-text elements

None

C. Data collection

Purpose

Indicate how data on outcomes of interest, as well as key explanatory variables (including baseline assessments and demographics), were obtained from sample members.

Instructions and reminders

Describe the data collections conducted, including timing, mode of administration, and overall process, and include information on incentives.

Provide details on differences between intervention and comparison conditions in timing, mode, and processes used for data collection, as appropriate.

Potential sources

  • Evaluation design plan

  • Evaluation abstract

Non-text elements

It may be useful to present a table to summarize the information.

D. Measures

Purpose

Describe how the outcomes of interest in the primary and secondary research questions were operationalized using survey data or other data elements.

Instructions and reminders

Define the outcomes that the primary and secondary research questions examine. Briefly explain how you operationalized and constructed each outcome measure. If you constructed a measure from multiple items, please document the source items and explain how you combined them to create an outcome for analysis. If a detailed description of measure construction is necessary, please include that information in an appendix.

In this section, only present information on the outcomes that are the focus of the primary or secondary research questions.

Potential sources

  • Impact and program implementation evaluation analysis plan

  • Evaluation abstract

Non-text elements

Please present this information in a table. Please refer to the table shells document, which includes 508-compliant (landscape) table shells for Tables III.1 (primary research questions) and III.2 (secondary research questions). Do not include Table III.2 if there are no secondary research questions (Note: in the template, Table III.1 includes examples for you).

Instructions for completing Tables III.1 and III.2

  • The purpose of these tables is to enable the reader to understand the outcome measures analyzed for the primary or secondary research questions.

  • Each row should document each outcome to be analyzed at a specific survey time point. If you are assessing an outcome at multiple points in time, you should include one row for the outcome for each time point. For instance, if you are assessing ever had sexual intercourse at the short-term follow-up (6 months after the program ends) and at the long-term follow-up (12 months after the program ends) as your primary research questions, you should enter two rows for this outcome in Table III.1 (one row for the short term follow-up and one row for the long-term follow-up). If you are looking at outcomes at one time point in your primary research questions and a different time point in your secondary research questions, you would include the measures in both Tables III.1 and III.2.

  • In the “Outcome measure name” column, please include the name of the outcome that will be used throughout the report in both text and tables.

  • In the “Source item(s)” column, please list the source items (for example, the questions from youth surveys used to construct the measure). If applicable, note that the survey items were pulled from the FYSB performance measures, including the version used.

  • In the “Constructed measure” column, please describe how you constructed each outcome variable (for example, “The variable is constructed as a dummy variable in which respondents who report that they have ever had sex are coded as 1, and all others are coded as 0.”). If the outcome is a published measure or scale, please provide the name of the measure. If the outcome is a scale, please provide a Cronbach’s alpha (a brief sketch of this kind of construction and calculation follows these instructions).

  • In the “Timing of measure” column, please indicate the amount of time that has passed since the end of the program.
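
As an illustration of the “Constructed measure” column, the short Python sketch below shows one way to construct a dummy-coded outcome from a survey item and to compute Cronbach’s alpha for a multi-item scale. The item names and data are hypothetical; this is a sketch of the arithmetic only, not a required procedure.

import numpy as np
import pandas as pd

# Toy survey data for illustration only; item names are hypothetical.
rng = np.random.default_rng(0)
survey = pd.DataFrame({
    "q10_ever_sex": rng.choice(["yes", "no"], size=100),
    "attitude_1": rng.integers(1, 5, 100),  # items on a 1-4 scale
    "attitude_2": rng.integers(1, 5, 100),
    "attitude_3": rng.integers(1, 5, 100),
})

# Dummy-coded outcome: respondents who report ever having had sex are coded
# as 1, and all others are coded as 0.
survey["ever_had_sex"] = (survey["q10_ever_sex"] == "yes").astype(int)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Reliability of the three-item scale, reported in the "Constructed measure" column.
print(round(cronbach_alpha(survey[["attitude_1", "attitude_2", "attitude_3"]]), 2))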

E. Study sample

Purpose

Describe the flow of participants into the sample.

Instructions and reminders

Use this section to describe how you created the analytic samples—that is, the flow of sample members from the point of random assignment (or consent for QEDs) through the follow-up assessments used in the primary and secondary research questions, factoring in non-consent, attrition, and item nonresponse.

For studies with cluster-level assignment, this section should include (1) the total number of clusters randomized (or consented for QEDs) and the number of youth in those clusters; (2) the number of students for whom parental consent was obtained after random assignment (if applicable); (3) the number of clusters that contributed data at baseline and follow-up (if entire clusters were lost, please document sample loss in this section); (4) the number of youth for whom baseline and follow-up data were obtained for each key measure (follow-up is the time period indicated in the research questions for assessing program effectiveness); and (5) the characteristics of the study sample (the youth and clusters), such as time period, total sample size, sample size by study group, and response rates, for the total sample and by study group.

For studies with individual-level assignment, this section should include (1) the total number of youth randomized (or consented for QEDs); (2) the number of students for whom parental consent was obtained after random assignment (if applicable); (3) the number of youth for whom baseline and follow-up data were obtained for each key measure (follow-up is the time period indicated in the research questions for assessing program effectiveness); and (4) the characteristics of the study sample, such as time period, total sample size, sample size by study group, and response rates, for the total sample and by study group.

For all studies, this section should indicate sample sizes for your analytic samples (that is, the sample(s) on which you estimate impacts). For example, suppose that the primary research question focuses on a 12-month follow-up, the secondary research question focuses on a 24-month follow-up, and recent sexual activity is the key outcome on which you are assessing program impacts. In this case, you might have two analytic samples: (1) the sample responding to the 12-month follow-up with non-missing data on recent sexual activity (primary analytic sample) and (2) the sample responding to the 24-month follow-up with non-missing data on recent sexual activity (secondary analytic sample).

If you are creating an analytic sample for a particular time point when there are multiple behavioral outcomes to be examined (with some item non-response across the outcomes), the RETA team recommends identifying a single, common analytic sample that does not have missing data across all of the outcomes of interest. Using a single, common analytic sample will produce an easy-to-follow presentation of the analyses across multiple outcome measures. If, however, there is substantial item non-response across two or more outcomes, then the RETA team recommends considering each outcome as requiring its own unique analytic sample (baseline equivalence will have to be demonstrated separately for each analytic sample).

Potential sources

  • Impact and program implementation evaluation analysis plan

  • Evaluation abstract

  • Final CONSORT diagram

  • Attrition brief

Non-text elements

As support for the discussion above, include one of the sample flow tables that follow. Use either the cluster-level (Table III.3a) or individual-level (Table III.3b) design table, whichever is appropriate for the study design. The next two pages include more detailed instructions for completing these tables.



Detailed instructions for TABLEs III.3a and III.3b

Please refer to the table shells document for two versions of 508-compliant table shells for reporting sample flow for either cluster-level (Table III.3a) or individual-level (Table III.3b) assignment designs. Complete only one table for this section. Please use the table appropriate for the study design.

Instructions for completing Table III.3a (For studies with cluster-level assignment)

  • The purpose of this table is to clearly present the sample sizes and response rates for both clusters and youth for cluster-level assignment studies.

  • Italicized text highlights how to calculate total sample sizes and response rates given other information in the table.

  • The table is split into two sections: the top section focuses on cluster sample sizes and response rates and the bottom section focuses on youth sample sizes and response rates.

  • In the column “Time period,” describe when you administered each survey relative to the end of programming. (Example text is shown in this column in the template. You should include a row for each outcome at each survey time period that you include in your primary or secondary analyses.)

  • In the columns “Total sample size,” “Intervention sample size,” and “Comparison sample size,” do the following:

  • In the “clusters” section, enter the number of clusters that were assigned to condition in the “Clusters: at beginning of study” row. In the next four rows, enter the number of clusters in which at least one youth completed the relevant survey.

  • In the “youth” section, start with the number of youth from non-attriting clusters. For all rows in this section, exclude youth in clusters that dropped (attrited) from the study. For example, if you randomly assigned 10 clusters (5 to each condition) and 1 intervention group cluster dropped from the study, you would only include youth from the 9 clusters that did not drop from the study.

  • For each row, the value entered in the “Total sample size” column should be the sum of the “Intervention sample size” and “Comparison sample size” (this calculation is shown in italics).

  • List how many youth were present in the clusters at the time of assignment in the “Youth: At time that clusters were assigned to condition” row.

  • In the row “Youth: who consented,” enter the number of youth who consented to participate. If consent occurred before assignment, delete this row, and add a note at the bottom of the table indicating that consent occurred before random assignment.

  • In the rows “Youth: Completed a baseline survey” and “Youth: Completed a follow-up survey,” list how many youth completed the relevant surveys.

  • It is likely that missing data on the outcomes of interest or covariates due to item-level non-response will result in slightly different sample sizes than the number of youth who completed the relevant follow-up survey. In the row “Youth: Included in the impact analysis sample at follow-up (accounts for item non-response),” enter the number of youth who are ultimately included in your impact analyses. If your analytic sample sizes vary for different outcomes (because of different rates of missing data), then add a row for each outcome in each time period as needed. Indicate in a table note the outcomes to which the sample sizes apply. For example, if you have two primary outcomes (pregnancy and unsafe sex) and there were different response rates on the items needed to construct these outcomes, you should include two rows for “Youth: Included in the impact analysis sample at follow-up (accounts for item non-response)”—one for the analysis sample for the pregnancy outcome and one for the analysis sample for the unsafe sex outcome.



  • In the columns “Total response rate,” “Intervention response rate,” and “Comparison response rate,” please conduct the calculations indicated by the italicized formula.

  • Note that for the “clusters” section, the denominator for the response rate calculations will be the numbers entered in sample size columns in the “Clusters: At beginning of study” row. For the “youth” section, the denominator for the response rate calculations will be the numbers entered in the sample size columns in the “Youth: At the time that clusters were assigned to condition” row.

  • To present findings from more than two follow-up periods, please add rows as needed.

  • If the study includes more than two conditions, please add columns as needed.

Instructions for completing Table III.3b (for studies with individual-level assignment)

  • The purpose of this table is to clearly present the sample sizes and response rates for youth in individual-level assignment studies

  • Italicized text highlights how to calculate total sample sizes and response rates given other information in the table.

  • In the column “Time period,” describe when you administered the survey relative to the end of programming (example text is shown in this column in the template).

  • In the columns “Total sample size,” “Intervention sample size,” and “Comparison sample size,” enter the number of youth who were assigned to condition in the “Assigned to condition” row. In the “Completed a baseline survey” and “Completed a follow-up survey” rows that follow, enter the number of youth who completed the relevant survey.

  • For each row, the value entered in the “Total sample size” column should be the sum of the “Intervention sample size” and “Comparison sample size” (this calculation is shown in italics).

  • In the columns “Total response rate,” “Intervention response rate,” and “Comparison response rate,” please conduct the calculations indicated by the italicized formula (a worked example of this arithmetic follows these instructions).

  • Note: the denominator for the response rate calculations will be the numbers entered in sample size columns in the “Assigned to condition” row.

  • It is likely that missing data on the outcomes of interest or covariates due to item-level non-response will result in slightly different sample sizes than the number of youth who completed the relevant follow-up survey. In the row “Youth: Included in the impact analysis sample at follow-up (accounts for item non-response),” enter the number of youth who are ultimately included in your impact analyses. If your analytic sample sizes vary for different outcomes (because of different rates of missing data), then add a row for each outcome in each time period as needed. Indicate in a table note the outcomes to which the sample sizes apply. For example, if you have two primary outcomes (pregnancy and unsafe sex) and there were different response rates on the items needed to construct these outcomes, you should include two rows for “Youth: Included in the impact analysis sample at follow-up (accounts for item non-response)”—one for the analysis sample for the pregnancy outcome and one for the analysis sample for the unsafe sex outcome. To present findings from more than two follow-up periods, please add rows as needed.

  • If the study includes more than two conditions, please add columns as needed.
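
For illustration only, the short Python sketch below works through the response rate arithmetic called for by the italicized formulas in Tables III.3a and III.3b, using hypothetical counts. The denominator is the number assigned to condition (or, for cluster designs, the number of youth at the time clusters were assigned to condition).

# Hypothetical counts for illustration only.
assigned = {"intervention": 500, "comparison": 480}
completed_followup = {"intervention": 430, "comparison": 401}

for group in ("intervention", "comparison"):
    rate = completed_followup[group] / assigned[group]
    print(f"{group} response rate: {rate:.1%}")

total_rate = sum(completed_followup.values()) / sum(assigned.values())
print(f"total response rate: {total_rate:.1%}")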

F. Baseline equivalence and sample characteristics

Purpose

Provide information on how you assessed baseline equivalence for the analytic samples and present the results of the assessment. Describe the sample characteristics for the reader.

Instructions and reminders

Briefly describe the methods you used to assess the equivalence of the analytic samples that served to answer the primary and secondary research questions. The analytic method used to show baseline equivalence should account for the study design (for example, clustering or stratification). Include equations for estimating equivalence of analytic samples in the appendix if necessary.

Present an equivalence table for each analytic sample (that is, the sample on which effects are estimated) used to answer the primary and secondary research questions. For example, suppose that the primary research question focuses on a 12-month follow-up assessment, the secondary research question focuses on a 24-month follow-up assessment, and recent sexual activity is the key outcome on which you are assessing program impacts. In this case, provide tables for (1) the sample responding to the 12-month follow-up with non-missing data on recent sexual activity (primary analytic sample) and (2) the sample responding to the 24-month follow-up with non-missing data on recent sexual activity (secondary analytic sample). See the note in section III.E on establishing analytic samples when item nonresponse varies substantially across outcomes of interest.

The baseline equivalence tables must include demographic characteristics (age or grade, gender, and race and ethnicity) and a measure of the outcome of interest assessed at the baseline. For each group, the table should document (1) sample sizes for each characteristic reported; (2) mean and standard deviation for continuous variables or the proportion, as a decimal, for categorical variables; (3) the difference in means (or proportions) between the two groups; and (4) the p-value of the difference between the two groups. This presentation is similar to the Excel baseline equivalence table you have completed in the past.


Include a narrative description of the sample characteristics for the reader (for example, the age and gender of the sample).

Potential sources

Non-text elements

Please refer to the table shells document, which includes a 508-compliant (landscape) table shell for Table III.4 to be used to demonstrate baseline equivalence.

Instructions for completing Table III.4

  • The purpose of this table is to demonstrate equivalence between groups on key baseline characteristics.

  • Copy and paste this table shell and complete one table for each analytic sample in the report.

  • Replace the “[Survey follow-up period]” text in the header with the time point of the survey. For example, “Table III.4. Summary statistics of key baseline measures for youth completing the 6-month follow-up survey.”

  • Replace the “(Behavioral/Non-behavioral) outcome measure 1 or 2” text with the name of the outcome measures for which you assessed baseline equivalence, such as “Ever had sex” or “Recent sexual activity without a condom.”

  • Please add rows for additional outcome measures as needed. If the sample members are young and did not complete the baseline measure of the behavioral outcome, please report equivalence on the variables collected at baseline that might be correlated with outcomes (if available), like knowledge, intentions, or self-efficacy.

  • In columns 2 and 3—“Intervention proportion or mean (standard deviation)” and “Comparison proportion or mean (standard deviation)”—if the characteristic is a continuous variable, enter the mean value with the standard deviation in parentheses. If the characteristic is binary (or dichotomous), enter the percentage (that is, enter 50 percent if the proportion of the sample was .50 female). Update the headers for each column to reflect only the statistic you present (mean and standard deviation or percentage).

  • In columns 4 and 5—“Intervention versus comparison difference” and “Intervention versus comparison p-value of difference”—enter the difference and p-value for the difference. Note: The RETA team recommends using a regression model to assess equivalence; doing so ensures the presentation is consistent with your impact estimates. Additionally, you may need to use a regression model to control for design factors (for example, stratification) and clustering, as applicable (a brief sketch of this approach follows these instructions).

  • In the final row, enter the sample size in columns 2 and 3. These numbers should represent the number of youth who contribute to the impact analysis.
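
As one illustration of the regression-based approach noted above, the Python sketch below regresses a baseline characteristic on the treatment indicator with cluster-robust standard errors; the coefficient on the treatment indicator is the difference for column 4, and its p-value is the entry for column 5. The data and variable names are hypothetical, and the exact specification (for example, strata indicators) should follow your approved analysis plan.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy clustered data for illustration only; variable names are hypothetical.
rng = np.random.default_rng(0)
schools = pd.DataFrame({"school_id": np.arange(20), "treatment": np.repeat([1, 0], 10)})
youth = pd.DataFrame({"school_id": rng.integers(0, 20, 400), "age": rng.normal(15.0, 1.0, 400)})
df = youth.merge(schools, on="school_id")

# Regress the baseline characteristic on the treatment indicator; cluster-robust
# standard errors account for assignment at the school (cluster) level.
model = smf.ols("age ~ treatment", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print("Difference:", round(model.params["treatment"], 3))
print("p-value:", round(model.pvalues["treatment"], 3))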

G. Methods

Purpose

Describe a credible and well-specified analytic approach to estimating program effectiveness. This should include a description of the methods you used to account for potential underlying differences between the intervention and comparison groups.

Instructions and reminders

Describe the analytic methods you used to answer the primary and secondary research questions. Briefly discuss the analytic method for the primary research questions. For the secondary research questions, note whether the analytic method differs from the analytic method for the primary research questions. These analytic methods should match the approved approach from your analysis plan; if you are making changes from your analysis plan, consult your RETA.

For the main impact analysis, briefly summarize (1) the analytic model, including the covariates; (2) how you handled missing data; and (3), if applicable, information on sample weights, multiple comparisons, and other items related to study design (for example, clustering correction or covariates for strata).

Include equations for estimating impacts in the appendix for transparency, along with any technical details not included in the body of the text.

If the study is a QED or an RCT with high attrition at the unit of assignment, we recommend deriving your matched sample from a complete-case sample (that is, with no imputation of missing data).

If you use different methods for your secondary research questions or additional research questions, add a subheading and then describe those methods.

Describe any details about data cleaning in the appendix. In addition, if you employed alternate approaches to handling missing data, or if you tested alternate model specifications, include that information in an appendix and reference the appendix in this section.

Potential sources

Non-text elements

None



IV. Implementation evaluation design

Purpose

Describe the research questions you investigated for the implementation study and how the data collected served to answer those questions.

Instructions and reminders

Describe the research questions guiding the implementation evaluation for each aspect of implementation examined (fidelity to the curriculum or program model, dosage of the program, quality of implementation, engagement of participants, and experiences of the comparison group and other context).

Then, for each implementation aspect, describe the data used to answer each question, including the sources, and how and when you collected the data. Include research questions related to your core curriculum, as well as the APS topics your program covered.

Finally, describe what analyses you conducted. What measures did you construct for the analyses from the data collected? How did you quantify implementation elements? What methods did you use to analyze the data?

If detailed descriptions of particular implementation analyses are required, please include this information in the appendix.

Potential sources

Non-text elements

Include a table to describe data collected in the appendix, and mention the table in the main body of the report. See Appendix B and Table B.1.





V. Evaluation findings

A. Implementation evaluation findings

Purpose

Provide information on the actual experiences of youth in the program and comparison groups.

Instructions and reminders

This section should provide information on the program as youth received it (rather than the intended implementation, which you discussed in an earlier section) and the context in which it was delivered. This section should also provide information on the comparison group experience.

Write the findings concisely and ground them in numeric findings. (For example, “The program was implemented with fidelity and the program achieved its goals for attendance in this out-of-school program. In all, 95 percent of all program sessions were delivered, and 82 percent of the sample attended at least 75 percent of program sessions.”) Avoid jargon or overly technical terms as much as possible so that a reader without a research background can understand.
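
For illustration only, the short Python sketch below shows how summary statistics like those in the example above might be computed from session delivery and attendance records; the data, identifiers, and dosage threshold are hypothetical.

import pandas as pd

# Hypothetical delivery and attendance records for illustration only.
sessions = pd.DataFrame({"session_id": range(1, 11), "delivered": [True] * 9 + [False]})
attendance = pd.DataFrame({
    "youth_id":   [1] * 8 + [2] * 7 + [3] * 3,
    "session_id": list(range(1, 9)) + list(range(1, 8)) + [1, 2, 3],
})
enrolled = [1, 2, 3, 4]  # full intervention sample, including one non-attender

# Share of planned sessions that were actually delivered.
pct_delivered = sessions["delivered"].mean()

# Share of the enrolled sample attending at least 75 percent of delivered sessions.
n_delivered = int(sessions["delivered"].sum())
sessions_attended = (
    attendance.groupby("youth_id")["session_id"].nunique().reindex(enrolled, fill_value=0)
)
pct_meeting_dosage = (sessions_attended / n_delivered >= 0.75).mean()

print(f"{pct_delivered:.0%} of program sessions were delivered")
print(f"{pct_meeting_dosage:.0%} of the sample attended at least 75 percent of delivered sessions")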

Use this section to tell the story of implementation, providing context for the impacts and the key lessons learned from implementation. Again, be sure to discuss the implementation findings related to APS content. We encourage the use of subheadings in the text of this section to discuss the findings related to fidelity and dosage, quality of implementation and engagement, and experiences of the comparison group and context.

Important: If any unplanned adaptations to implementation occurred during the program, you should describe these adaptations here as part of the findings of the implementation evaluation.

Potential sources

  • Impact and program implementation evaluation analysis plan

Non-text elements

Please refer to the table shells document, which includes a 508-compliant table shell for Table V.1.

Instructions for completing Table V.1

  • The purpose of this table is to present the targets for measures of high-quality implementation of the program and the results of the implementation evaluation of those measures.

  • The table shell contains example text in italics. Please delete that text before completing the table.

  • Each row should represent one research question your implementation evaluation answers, organized by implementation element. Add rows as needed to incorporate all of your research questions.

  • In columns 1 and 2, list the research question and the associated implementation element (fidelity, dosage, quality, engagement, context).

  • In column 3, list each of the measures that you will evaluate to answer the research question.


  • In column 4, for each measure listed in column 3, provide the targets you prespecified and used, if applicable, to assess how well the program was implemented relative to program or developer standards.

  • In column 5, for each measure, provide a brief statement of the results of the evaluation relative to the target provided in column 4. The text of the section should expand on these results.

B. Impact evaluation findings

Purpose

Present the impact results for the primary and secondary research questions.

Instructions and reminders

Present impacts of the program in tables and then discuss the findings in the text. We recommend that one subsection (and table) shows impacts for primary research questions and a separate subsection (and table) shows impacts for secondary research questions. Make sure each finding answers a given research question.

Please present and discuss the findings so that a lay reader (without a research background) can understand them. Avoid jargon or overly technical terms as much as possible. Please present the findings in a metric (for example, a difference in proportions) that is easy for readers to interpret. For example, if the outcome is youth reporting sexual activity in the past 3 months, and the estimation method is logistic regression, the estimated coefficients (log odds ratios) are not easily interpretable. Most statistical software packages will also display the odds ratio, but that is not much more interpretable: the odds ratio for the treatment variable is the ratio of the odds of a treatment youth reporting sexual activity to the odds of a control youth reporting sexual activity. To produce a more easily interpretable impact estimate, request the mean marginal effects from your software package (using the “margins” command in STATA or the “%margins” macro in SAS) and report them as the difference in prevalence rates of the outcome across conditions. However, we recommend using ordinary least squares regression because it is the easiest way to get an interpretable result and, as noted earlier, is a valid way to model impacts for binary outcomes. See this resource for more information.
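
To make this concrete, here is a minimal Python sketch (using statsmodels, analogous to the Stata and SAS commands mentioned above) of reporting a logistic regression impact as an average marginal effect, alongside the ordinary least squares version. The data and variable names are hypothetical, and the specification is illustrative only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy analytic sample for illustration only; variable names are hypothetical.
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "age": rng.normal(15.0, 1.0, n),
})
df["recent_sex"] = rng.binomial(1, 0.30 - 0.05 * df["treatment"])

# Logistic regression: the coefficient on `treatment` is a log odds ratio, which
# is hard to interpret. The average marginal effect re-expresses the impact as a
# difference in prevalence rates across conditions.
logit = smf.logit("recent_sex ~ treatment + age", data=df).fit(disp=False)
print(logit.get_margeff(at="overall").summary())

# Ordinary least squares (linear probability model): the coefficient on
# `treatment` is already the difference in prevalence rates.
ols = smf.ols("recent_sex ~ treatment + age", data=df).fit()
print("Impact estimate:", round(ols.params["treatment"], 3),
      "p-value:", round(ols.pvalues["treatment"], 3))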

Briefly elaborate on the findings and patterns of findings in this section, but save the broader discussion for the conclusion. (You should, for example, delay tying together implementation and impact findings until the conclusion.)



Include a summary of the similarities and differences in the impact findings across the sensitivity analyses. Include these sensitivity results in the sensitivity analysis appendix (Tables S.1 and S.2).

NOTE: If your analysis plan included additional analyses beyond impact analyses, you can include them in a section V.C.: Additional analyses.

Potential sources

None

Non-text elements

Please refer to the table shells document, which includes 508-compliant table shells for Tables V.2 and V.3.

Instructions for completing Tables V.2 and V.3

  • The purpose of these tables is to present the estimated effects.

  • Please replace the “(Behavioral) outcome X” text with the name of the outcomes for which you will report estimated effects.

  • Add rows as needed to represent all outcomes for which you will report estimated effects.

  • In columns 2 and 3, enter the model-based prevalence rate or (adjusted) mean. The model-based mean should adjust for baseline covariates.

    • If the measure is continuous, report the group mean with the standard deviation in parentheses below. If the measure is binary, report the proportion as a decimal (that is, 0.50 instead of 50 percent).

  • In column 4, enter the difference across groups and the p-value of this difference. The RETA team recommends conducting a regression model to assess the impact of the intervention in order to adjust for baseline differences and improve the precision of the impact estimate.

  • If Table V.2 presents multiple outcomes, the RETA team strongly suggests you make a multiple comparison correction. In the table notes, indicate the method you used to make the adjustment and flag any reported p-values smaller than 0.05 that are no longer statistically significant after the correction (a sketch of one common correction method follows this list).
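
For illustration only, the sketch below applies one common correction, the Benjamini-Hochberg false discovery rate procedure, to a set of hypothetical p-values using statsmodels. This guidance does not prescribe a particular correction method; use the method specified in your approved analysis plan.

from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from the primary impact contrasts in Table V.2.
p_values = [0.012, 0.048, 0.200]

# Benjamini-Hochberg correction; method="bonferroni" is another common option.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for raw, adj, still_significant in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.3f}, adjusted p = {adj:.3f}, significant after correction: {still_significant}")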

VI. Conclusion

A. Summary

Purpose

Summarize the impact and implementation findings.


Instructions and reminders

Restate the main impact and implementation findings and weave them together to create a coherent story about how program implementation might have influenced, or might provide additional context for, any observed impacts (or lack thereof). For example, explain how program adherence, youth attendance, or implementation quality might help explain the observed impacts.

Potential sources

  • Earlier sections of this report

Non-text elements

None

B. Limitations

Purpose

Describe any limitations of the study.

Instructions and reminders

Describe the limitations of the study (for example, issues with randomization, study power, or implementation). Discuss how the limitations may influence the interpretation of your findings. For instance, if you had very low attendance in one cohort of youth, that likely meant there was a limited contrast between the actual experiences of the intervention and comparison groups in that cohort (as both groups had similar experiences of receiving none, or very little, of the intervention). If you found no statistically significant findings, this limitation around the contrast could help explain the findings.

Potential sources

  • Earlier sections of this report

Non-text elements

None

C. Discussion

Purpose

Synthesize the information presented and describe lessons learned.


Instructions and reminders

Present the implications of your evaluation and findings for the broader field. Discuss important lessons learned that explain the impacts or that could help others replicate the program or serve the same target population. For example, if you provided an online intervention, discuss how technology contributed to your evaluation and can be used in the future to address adolescent health education. Also include any areas for future research that you have identified based on this evaluation.

Potential sources

  • Earlier sections of this report

Non-text elements

None





VII. References

Purpose

Provide the full reference for any work cited in the report.

Instructions and reminders

Please use the American Medical Association’s style guide for citing works in the report. This section should include the full reference for any work cited.

Potential sources

None

Non-text elements

None

VIII. Appendices

Based on our guidance for the report sections, your report might include the following appendices (though it might not be necessary to include appendices for all of these items, in which case you should relabel your appendices to be sequential):

  • Appendix A: Logic model (or theory of change) for program

  • Appendix B: Implementation data and measures (see Table B.1 in the templates document, which is a 508-compliant table shell for the data used to address implementation research questions. Italicized text in the table provides examples for expository purposes. The purpose of this table is to enable the reader to understand the data collected for the implementation analysis.)

  • Appendix C: Model specifications (equations) used in the assessment of baseline equivalence and program impacts.

  • Optional Appendix D: Methods used to clean and prepare data (including descriptions of how you handled missing and inconsistent data).

  • Optional Appendix E: Detailed descriptions of methods used to analyze the implementation data, including construction of measures from qualitative data.

  • Optional Appendix F: As noted above in Section III.B (Research design), for grantees that have high levels of sample attrition, and statistically significant differences in key variables measured at baseline, the RETA team recommends using a matching analysis as the benchmark analysis in the main body of the report. In this situation, we recommend including the intent-to-treat analysis of the offer of the program as an appendix in the report. We also recommend you provide more details on your matching process in the appendix.

  • Optional Appendix S: Details of sensitivity analyses (such as alternate approaches to missing data, inconsistent data, alternative model specifications, and so on). To document that the observed results presented in the body of the report are not attributable to researchers’ decisions about how to clean and analyze data, we recommend presenting sensitivity analyses in an appendix. The appendix should include a brief paragraph stating that this appendix evaluates the sensitivity of estimates to the various methodological decisions and summarizes (1) the benchmark approach for estimating program impacts and (2) reasons for sensitivity analyses—alternative approaches for estimating program impacts or analytic decisions.

Please describe the alternate approaches used to clean and analyze the data and present baseline equivalence and impact tables for these sensitivity analyses. In addition, if you are looking at changes over time in a particular measure, and the sample size changes substantially across follow-up periods, you should present a sensitivity analysis that looks at the results for each follow-up using a single sample (in other words, youth who responded to all the surveys). Looking at a single sample with data at each time point will allow you to demonstrate that compositional differences between the samples are not contributing to the observed results.

If the results from the sensitivity analyses differ substantially from the main results presented in the report, please provide detailed discussion on which set of results is more appropriate (expanding upon the summary provided in section V.B). Please use tables to highlight alternate sensitivity results (see Table S.1 in the templates document for a table shell to describe sensitivity analyses for primary research questions and Table S.2 for a table shell for sensitivity analyses for secondary research questions).

For each sensitivity analysis, provide a paragraph describing how the result differs from (or replicates the result of) the benchmark approach. Conclude with a brief statement of the differences in magnitude, significance, or both.

Instructions for completing Tables S.1 and S.2

  • The purpose of these tables is to summarize the sensitivity of estimated impacts to methodological decisions.

  • Similar to Tables V.2 and V.3, replace the “(Non-)Behavioral outcome X” text with the name of the outcome measures for which you will report estimated effects.

  • Add rows as needed for additional outcomes.

  • For each column, enter a name for the particular approach presented in that column. The second and third columns should be titled “Benchmark analysis” and include the estimates presented in the body of the report (Tables V.2 and V.3). Other column names should match the section headings in the appendix text that describe each approach.

  • Add columns as needed for the sensitivity analyses presented.

  • Enter the estimated effect and associated p-value from each analytic approach.



1 For information about Section 508 compliance, please see https://www.hhs.gov/web/section-508/index.html.



