SUPPORTING STATEMENT – PART B
B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
The collection of information will employ statistical methods, and thus the following information is provided in this Supporting Statement:
1. Description of the Activity
The respondent universe includes students at the United States Naval Academy (USNA) and the United States Military Academy (USMA) who are members of their respective academy classes of 2023 and 2025. (As of the summer of 2021, the classes of 2023 are juniors at the respective service academies, and the classes of 2025 are first-year students/plebes/freshmen.) This is a new collection. Eligible participants include the entire cohort of each of these classes, and each cohort will be recruited to participate in a baseline and a follow-up survey. While attrition at follow-up is a common phenomenon in survey data collection, USNA and USMA students are not usually offered tokens of appreciation for survey participation at their respective academies; because the current data collection will offer such tokens (described in Section 3), it is expected to achieve a strong response rate.
| | Baseline survey, summer 2021 | Follow-up survey, spring 2023 |
| USNA class of 2025 | N~1,000 | N~1,000 |
| USNA class of 2023 | N~1,000 | N~1,000 |
| USMA class of 2025 | N~1,200 | N~1,200 |
| USMA class of 2023 | N~1,200 | N~1,200 |
2. Procedures for the Collection of Information
a. Statistical methodologies for stratification and sample selection;
Sexual assault prevention and response (SAPR) programming at USNA is a four-year program. Thus, respondents will be recruited from the class of 2025 and the class of 2023, and surveyed in the summer of 2021 and again in the spring of 2023, in order to provide coverage across the entire four-year program experience at USNA. The sample selection at USMA has been designed to match the sample selection at USNA. There is no stratification in the sample design; all eligible respondents are included in the sample.
b. Estimation procedures;
No sample estimation procedures are included in this program evaluation design.
c. Degree of accuracy needed for the Purpose discussed in the justification;
To ensure the credibility of evaluation findings, NORC has conducted statistical power calculations to determine the likelihood of detecting a significant program effect at specific sample sizes. Statistical power provides an estimate of the probability of identifying a relationship through a significant statistical test when, in fact, such an impact exists. To calculate our power estimates, we used formulas for computing the expected test statistic found in many power analysis texts, in conjunction with Microsoft Excel’s routines for evaluating the standard normal curve.
Since the primary analyses will compare the treatment group (USNA) against the comparison group (USMA), power estimates were computed across a range of effect sizes for N=2,000 for the plebes/freshman class quasi-experiment (1,000 USNA midshipmen receiving the treatment and at least 1,000 USMA cadets in the comparison condition). We will also have a second quasi-experiment for 1,000 midshipmen and 1,000 USMA cadets in their third year at their respective academies. For the first quasi-experiment, we accounted for the fact that midshipmen are clustered in companies (with a similar structure for the comparison group). With 30 companies of about 30-35 students each, a total projected sample of 1,000 treatment cases and 1,000 comparison cases, and an intraclass correlation coefficient (ICC) of 0.1, the statistical power of our evaluation will be .93 to identify a standardized mean difference/effect size of .24. This calculation is based upon an alpha level of .05, a two-tailed statistical test, and covariates that explain 25% of outcome variation (i.e., a pre-test). An effect of .24 is considered a small effect size based on Cohen’s formulation (1988).1
For this main scenario, our power level is over .80 for any effect size of .20 or above (still in the small effect size range), and higher for larger effect sizes. This scenario of 1,000 treatment cases and 1,000 comparison cases will also provide ample power to explore subgroup differences (e.g., differences by gender). The second quasi-experiment would have the same results as the first if analyzed separately, but if the two experiments were combined (n = 4,000), power would be even greater (.98 to detect an effect size of .20).
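The clustered-design power calculation described above can be approximated with a short script. This is an illustrative sketch only: it uses the standard design-effect formula, DEFF = 1 + (m − 1)·ICC, and a normal-approximation power formula, applying the covariate adjustment as a simple (1 − R²) variance multiplier. NORC's Excel-based computation may apply the clustering and covariate adjustments differently (e.g., separately at the student and company levels), so this deliberately simple version is somewhat more conservative and need not reproduce the reported .93.

```python
# Approximate power for a two-arm clustered quasi-experiment, using a
# normal-approximation formula. Illustrative sketch only; NORC's exact
# computation may apply clustering/covariate adjustments differently.
from statistics import NormalDist

def clustered_power(effect_size, n_per_arm=1000, cluster_size=33,
                    icc=0.10, r_squared=0.25, alpha=0.05):
    """Power to detect a standardized mean difference between two arms,
    inflating the variance by the design effect 1 + (m - 1) * ICC and
    deflating it by covariates explaining `r_squared` of the outcome."""
    z = NormalDist()
    deff = 1 + (cluster_size - 1) * icc            # design effect
    var_diff = (1 - r_squared) * 2 * deff / n_per_arm
    se = var_diff ** 0.5                            # SE of mean difference
    z_crit = z.inv_cdf(1 - alpha / 2)               # two-tailed critical value
    return z.cdf(abs(effect_size) / se - z_crit)

for es in (0.20, 0.24, 0.30):
    print(f"effect size {es:.2f}: power ~ {clustered_power(es):.2f}")
```

The function also makes the qualitative claims in the text easy to verify: power rises with effect size, and removing the clustering penalty (ICC = 0) raises power substantially.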
d. Unusual problems requiring specialized sampling procedures; and
We do not anticipate a need for specialized sampling procedures given the study design.
e. Use of periodic or cyclical data collections to reduce respondent burden.
Data collection will occur twice for each respondent: once in summer 2021 and once in spring 2023. Two data points are necessary to identify change over time. Surveys are administered prior to the intervention and again in the spring of the following academic year, after respondents have experienced half of the intervention. Without a follow-up data collection at both academies, we will be unable to assess the outcomes associated with the SH/SA prevention programming. In other words, given the need to identify change over time, two surveys (pre/post) are the minimum possible.
Respondents will be recruited from the class of 2025 and the class of 2023 in order to provide coverage across the entire 4-year program experience at the respective academies. We have minimized the survey burden on the academy student population by limiting study recruitment to these two classes (instead of all four classes), and by extending the follow-up period over two academic years (instead of at the beginning and the end of each academic year).
3. Maximization of Response Rates, Non-response, and Reliability
Because the sample design includes the entire student cohorts for the class years of 2023 and 2025 at USNA and USMA, there is no sampling design to adjust for in the final program evaluation analyses. The results of this program evaluation research will be generalizable to the respective student bodies at USNA and USMA.
There are several facets of the research design that will contribute to a strong response rate for this data collection, and thus to the overall reliability and validity of the program evaluation effort. The research team has been engaged with the service academy SAPR and sexual harassment and assault response and prevention (SHARP) leadership over the past eight months to fully understand the context of the sexual harassment (SH) and sexual assault (SA) prevention programming, as well as approaches to survey implementation with academy students, to ensure a strong design. Academy leadership is also fully informed of and approves the program evaluation design. The recruitment protocols and survey language have been carefully reviewed with SAPR and SHARP staff; with a small panel of consultants that includes experts in the fields of SH and SA prevention, both within and outside the military context; and with a small sample of volunteer USNA midshipmen, to ensure that the recruitment and survey language is understandable and acceptable to the target population of academy students in the classes of 2023 and 2025. NORC has developed the data collection protocols to be consistent with best practices in the field on survey design and implementation. NORC has tested the online anonymous survey link on different web browser platforms, using different NORC laptops and personal computing devices, to ensure that recruited participants will encounter a user-friendly design without technical glitches, enabling easy survey participation.
Finally, respondents will be offered incentives for survey participation based upon the guidance of the respective military academy leadership and following established practices in the field of survey research for reliable and valid data collection. As noted above, while attrition at follow-up is a common phenomenon in survey data collection, USNA and USMA students are not usually offered tokens of appreciation for survey participation at their respective academies and thus the current data collection is expected to achieve a strong response rate. Tokens of appreciation have been tailored to local context and geared to specific classes based on their perceived interest in the survey. Specifically:
Participants from the USNA class of 2025 will not receive an incentive during the baseline administration, consistent with past years of internal surveys conducted with USNA incoming first year students. For the baseline survey, participants from the USNA class of 2023 will receive a $10 gift code to USNA campus stores and coffee shops. For the follow-up survey in the spring of 2023, participants from both USNA classes (2023, 2025) will receive a $15 gift code to USNA campus stores and coffee shops.
Participants in the baseline survey from the USMA class of 2025 and the USMA class of 2023 will receive one “PMI” (which stands for PM Inspection), a relief of morning duty time. For the follow-up survey in the spring of 2023, participants from both USMA classes (2023, 2025) will receive two PMIs. (Notably, NORC IRB and USMA HRPP have approved two alternative tokens of appreciation plans for the USMA participants: (1) that USMA cadets’ tokens of appreciation for participation would match the USNA plan; (2) that there would be no tokens of appreciation for USMA cadets’ participation. However, at this time USMA leadership plans for NORC to implement the “PMI” token of appreciation plan.) DoD’s Office of General Counsel (OGC) noted no legal objections to the use of tokens of appreciation, and the leadership at USNA and USMA have reviewed and approved these token of appreciation structures.
4. Tests of Procedures
The survey recruitment and survey language were tested through cognitive interviews with a small sample of midshipmen from outside the target study cohorts. The email recruitment connectivity and anonymous online survey connectivity (i.e., the technological aspects of the data collection) will be tested with the respective USMA and USNA staff involved in planning the data collection.
5. Statistical Consultation and Information Analysis
a. Provide names and telephone number of individual(s) consulted on statistical aspects of the design.
Elizabeth A. Mumford, PhD mumford-elizabeth@norc.org
Office telephone (301) 634-9435
4350 East West Highway, Bethesda, MD 20814
Bruce Taylor, PhD taylor-bruce@norc.org
Rachel Feldman, PhD feldman-rachel@norc.org
Octavia Powell powell-octavia@norc.org
Joshua Lerner, PhD lerner-joshua@norc.org
b. Provide name and organization of person(s) who will actually collect and analyze the collected information.
Cynthia Simko, MA simko-cynthia@norc.org
Office telephone (312) 759-4066
55 E. Monroe Street, Suite 3000, Chicago, IL 60601
Christopher Wong wong-christopher@norc.org
Mireya Dominguez dominguez-mireya@norc.org
1 Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale, New Jersey: Lawrence Erlbaum Associates; 1988.