


OMB No. 0970-0503 Expiration XX/XX/20XX


Responding to Intimate Violence in Relationship Programs (RIViR)

Supporting Statement B


New Collection






October 2017







Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

Mary Switzer Building

330 C Street, SW

Washington, DC 20201




B. STATISTICAL METHODS

B.1 Respondent Universe and Sampling Methods

This study, funded by the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS), is designed to assess the psychometric properties of available intimate partner violence (IPV) and teen dating violence (TDV) screening tools when used with healthy relationship (HR) program populations, and to compare how well each tool achieves the outcomes it was intended to produce. Those outcomes include differentiating HR program participants who are experiencing IPV or TDV from those who are not, so that HR program staff can offer referrals to their local domestic violence program partners for full assessment and possible services, and helping HR program participants feel informed and empowered in assessing and pursuing their options for IPV/TDV-related help. The respondent universe for the study will consist of all program participants who enter HR programs during the study enrollment period at four community-based organizations administering federally funded HR programs.

B.1.1 Site Selection

With support from the Office of Planning, Research, and Evaluation (OPRE) and the Office of Family Assistance (OFA), RTI will select four HR grantee organizations that meet selection criteria indicating they have the capacity to successfully participate in the study. These criteria include:

  • plans to serve enough participants in their programs to enroll approximately 300 study participants during the study period,

  • opportunities to administer multiple screeners,

  • appropriate protocols for reporting and addressing IPV and TDV in collaboration with a local domestic violence program, and

  • the ability to obtain local IRB oversight for the study.

B.1.2 Selection of Respondents

The respondent universe for this data collection is defined as all HR program participants who meet the eligibility criteria outlined below within the four designated HR grantee organizations funded by ACF’s OFA. No sampling will be conducted within sites; all participants who meet study eligibility criteria and who enter the HR program at a study site during the study enrollment period will be invited to participate. Employees of the HR programs (“grantee project staff”) will recruit participants who meet the eligibility criteria for their programs to participate in the study. Study sites that serve adults will recruit individual adults and adult couples aged 18 or older to participate in the testing of the three IPV screeners designed for use with adults. Study sites that serve youth will recruit high-school-aged youth (primarily younger than 18, though some may be 18 or older) to participate in the testing of the three TDV screeners designed for use with youth. Study participants must be able to speak and read English. Parents of minors must be able to read English or Spanish. Recruitment will begin after IRB and OMB approval (anticipated February 2017) and will continue until target sample sizes are reached. The collected data will be specific to this set of federally funded HR program grantees; we do not aim to generalize our findings to other service providers or to other individuals in dating or intimate partnerships, though we anticipate our findings will be salient for them.

B.1.3 Methods to Maximize Coverage

All grantee project staff will be trained in person by the RTI project team to ensure that every eligible HR program participant is invited to participate in this study. Study recruitment and consent procedures will be timed to coincide with program intake activities with each participant, such that they are integrated into existing staff workflows at each study site.

B.1.4 Power Analysis

Little guidance exists for calculating sample size requirements when using latent class analysis (LCA) to assess screener sensitivity and specificity; power analyses for screener testing tend to assume a traditional “gold standard” analysis (e.g., Hajian-Tilaki, 2014), and power analyses for LCA generally focus on testing the number of latent classes (Dziak et al., 2014). We plan to analyze the screener data in three different ways, only one of which is subject to traditional power calculations. The table below shows expected power and confidence intervals corresponding to the various analytic approaches for two sample sizes: the targeted sample size of 600 and a smaller sample size of 400, allowing for some degree of nonresponse. The first approach will be to directly compare screeners to each other and test for statistically significant differences between them. Small differences between screeners are unlikely to be practically meaningful, and a sample size of 600 will provide more than adequate power to detect medium differences (effect sizes) between screeners. The second approach will be to estimate confidence intervals around the sensitivity and specificity of each screener, and the third will be to estimate confidence intervals around the differences between screeners. These latter two approaches specify not power but the precision of the estimates: with a sample size of 600, the confidence intervals for these approaches will be relatively small.


Expected power and precision for sample sizes of 600 and 400 (sample size refers to the adult or youth sample; the total sample is twice this number):

  • Power to detect a medium (.20) difference between screeners with alpha = .05: 0.97 (n = 600); 0.92 (n = 400)

  • Confidence interval / margin of error around the sensitivity/specificity of each screener: +/- 0.048 (n = 600); +/- 0.059 (n = 400)

  • Confidence interval / margin of error around the difference between the sensitivities/specificities of 2 screeners: +/- 0.068 (n = 600); +/- 0.084 (n = 400)

In addition, we expect to compare responses to the supplemental module by individual screening tool, and also to compare responses to the two closed-ended tools against responses to the single open-ended tool within each age group. Assuming 200 cases per group (200 completions per screening tool, as above), we will have power to detect an effect size of approximately .3, which represents a small to medium difference between individual screeners, or a 0.2-0.3 difference between group means (assuming standard deviations of 1 or less). Within each age group (that is, youth participants who completed TDV tools and adult participants who completed IPV tools), we will also combine responses associated with the two closed-ended tools (n = 400) and compare them to responses associated with the open-ended tool (n = 200). This approach will give us power to detect a smaller effect size (approximately .25).
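To make these figures concrete, the following minimal sketch (in Python, using the statsmodels package) computes normal-approximation margins of error and minimum detectable effect sizes. The specific assumptions here (a worst-case proportion of 0.5, independent groups, 80% power for the minimum detectable effect) are ours for illustration; the table above may rest on different assumptions, so these outputs approximate rather than reproduce it.

```python
# Illustrative power and precision calculations; assumptions are noted
# inline and are not taken from the study documentation.
import math

from statsmodels.stats.power import TTestIndPower

Z = 1.96  # two-sided 95% critical value


def wald_margin(n: int, p: float = 0.5) -> float:
    """95% margin of error for one proportion (worst case p = 0.5)."""
    return Z * math.sqrt(p * (1 - p) / n)


def wald_margin_diff(n1: int, n2: int, p: float = 0.5) -> float:
    """95% margin of error for the difference of two independent proportions."""
    return Z * math.sqrt(p * (1 - p) / n1 + p * (1 - p) / n2)


for n in (600, 400):
    print(f"n={n}: one-proportion MOE ~ +/-{wald_margin(n):.3f}, "
          f"difference MOE ~ +/-{wald_margin_diff(n, n):.3f}")

# Minimum detectable effect size (Cohen's d) at 80% power, alpha = .05,
# for the supplemental-module comparisons described above.
power = TTestIndPower()
d_200 = power.solve_power(effect_size=None, nobs1=200, ratio=1.0,
                          alpha=0.05, power=0.80)  # ~0.28 (200 vs 200)
d_400 = power.solve_power(effect_size=None, nobs1=400, ratio=0.5,
                          alpha=0.05, power=0.80)  # ~0.24 (400 vs 200)
print(f"MDE 200 vs 200: d ~ {d_200:.2f}; MDE 400 vs 200: d ~ {d_400:.2f}")
```

The minimum-detectable-effect values are consistent with the approximate .3 and .25 effect sizes cited above.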

B.2 Procedures for Collection of Information

B.2.1 Procedures for Training Grantee Staff on Data Collection

All grantee project staff administering data collection instruments will be trained in person by the RTI project team about the data collection protocol, including privacy guidelines, participant distress, procedures for transmitting study forms to RTI, and procedures for storing (or destroying) study data in electronic and hard copy forms. RTI will provide ongoing technical assistance to sites participating in the study.

B.2.2 Mode and Timing of Data Collection

Grantee project staff will administer a total of three screening tools to each participant: the instruments for adults are Instruments 1.1, 1.2, and 1.3, and the instruments for youth are Instruments 2.1, 2.2, and 2.3. An additional set of questions on the screening interaction (Attachment C.1) will be added at the end of two of the screeners. The first two questions, on gender identity and sexual orientation, will be asked at the end of whichever screener a respondent completes first; the remaining questions, which measure post-screening knowledge, opinions, and perceptions, will be asked at the end of whichever screener is completed third. Two of the tools for each participant are standardized, closed-ended questionnaires and one tool is an open-ended script. Project staff will administer all of the screening tools verbally to adults. For youth, staff will administer the open-ended script verbally, but youth will self-administer the closed-ended questionnaires, in keeping with self-administration procedures that youth-serving HR programs already use to securely and efficiently collect program intake data in a group-based, classroom setting. Grantees routinely collect intake data electronically via a standardized system required by the program funder. For the screeners administered verbally, participants will respond individually in a private space such as the project office or (for youth) an office at their school; youth will complete the closed-ended screeners in groups at school, in accordance with current intake procedures used by the grantee projects. All of the instruments will be programmed using Voxco web-based survey software. For instruments administered by staff (the three adult instruments and the open-ended youth instrument), data will be entered by program staff on their computers, laptops, or tablets. For self-administered instruments (the two closed-ended youth instruments and the post-screening questions on knowledge, opinions, and perceptions), data will be entered by youth participants on program-assigned tablets.

Grantees are already required to collect extensive standardized demographic information from program participants at intake; to reduce respondent burden, we will work with the contractor responsible for managing these data to obtain selected demographic variables for participants in our study, including variables needed for modeling nonresponse and attrition bias. This is noted in the consent forms (Attachments B.1 and B.7) and the assent form (Attachment B.5). We have included in Attachment C.1 a set of demographic items that are not already collected, which will be administered after the first screening instrument, and items to measure post-screening knowledge, opinions, and perceptions, which will be administered after the third screening instrument.

Screener administration will take place in program settings (at intake and then during two subsequent program activities, based on site-specific program workflow). The first screener will be administered immediately after participant consent or assent is obtained. The second screener will be administered at least two days but no more than one month after the first, and the third will be similarly spaced after the second. The administration of additional screeners will coincide with program activities or local evaluation data collection where possible. The survey system will be programmed to administer the instruments in a random order for each participant.
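As an illustration of this per-participant randomization, the sketch below shuffles the three adult instruments. In the study, ordering will be handled inside the Voxco survey system; this standalone Python version, including the choice to seed on a hypothetical participant ID for reproducibility, is an assumption for illustration only.

```python
# Illustrative per-participant randomization of screener order. The study's
# actual randomization is implemented in the Voxco survey system; seeding on
# a participant ID is our assumption, made so the order is reproducible.
import random

ADULT_SCREENERS = ["Instrument 1.1", "Instrument 1.2", "Instrument 1.3"]


def screener_order(participant_id: str) -> list[str]:
    """Return a reproducible random ordering of the screeners for one participant."""
    rng = random.Random(participant_id)  # deterministic for a given ID
    order = list(ADULT_SCREENERS)
    rng.shuffle(order)
    return order


print(screener_order("P-0001"))
```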

B.2.3 Respondent Consenting and Assenting Procedures

Grantee project staff will recruit study participants in person, either at the time of intake for their program or during a program-related contact after intake. Parents of minors will receive a lead letter from the grantee project describing the study (Attachment A.1). Adults and youth aged 18 or older will be recruited individually by grantee project staff using the scripts in Attachments A.2 and A.3.

Grantee project staff will seek written consent from adults or youth aged 18 and older to participate in the study prior to administering the screeners. Before administration of the first screener, the project staff will distribute a consent form (Attachment B.7) and go over it using a script (Attachment B.8). Individuals who agree to participate in the study will be asked to sign the last page of the consent form and will be given the remainder of the form to keep.

For minors (aged 17 or younger), grantee project staff will seek parent permission (either in writing or by telephone with mailed documentation) and written assent to participate in the study prior to administering the screeners. Parents will receive a parent permission form (Attachment B.3) sent home with the youth. Parents will be instructed to keep the study information and return the last page of the parent permission form with parent signature and youth’s full name to grantee project staff. Alternatively, project staff may read the parent permission form to the parent over the phone (using the script in Attachment B.4) and document on the permission form whether they received verbal permission; they will mail this documentation to the parent for their records. The parent permission form will be translated into Spanish and submitted to the IRB for review in an amendment.

Immediately before administration of the first screener, grantee project staff will distribute an assent form to youth and read an assent script (Attachments B.5 and B.6), or will distribute a consent form and use a consent script for youth aged 18 and older (Attachments B.7 and B.8). This will be done in school and may be done individually with youth (i.e., if they are pulled out of class to complete the interviewer-administered screener first) or with groups of youth (i.e., for those who will complete a self-administered screener first). Youth who agree to participate in the study will be asked to sign the last page of the assent form and will be given the remainder of the form to keep. Grantee project staff will seek written consent from emancipated minors. For the purposes of this study, emancipated minors are youth who have been emancipated either by a court or by statute in their state of residence (e.g., as a result of marriage or parenthood) and who may therefore consent to research participation on their own behalf.

Signed consent, assent, and parent permission forms (or documentation of verbal parent permission) will be scanned electronically and uploaded to a secure website for sharing with RTI. Grantee project staff will be required to provide the electronic version of the completed forms to RTI prior to or on the same day that they begin data collection with each participant.

B.2.4 Estimation Procedures

Traditional psychometric analyses require identification of a “gold standard” measure for the purpose of assessing sensitivity and specificity of a focal screener. However, the existence of a “gold standard” measure of IPV is widely contested, and no “gold standard” measure exists for TDV. Our alternative analytic approach, LCA, circumvents this problem by testing a model for validity that is based upon the comparison of screening approaches to each other, rather than requiring a gold standard against which another tool is compared. LCA also reduces respondent burden by eliminating the need for respondents to complete a (typically long and detailed) “gold standard” measure whose properties are not the subject of investigation. Using LCA, we will be able to (1) examine differences between screeners in their sensitivity and specificity, (2) assess the precision of estimates of sensitivity and specificity for each screener, and (3) assess the extent to which screener specificity/sensitivity and/or rank order differ by grouping variables we add to the model (e.g., sex, race/ethnicity, education).

To use LCA most effectively, we need each participant to complete three screeners; however, we will include all participants in analyses regardless of missing data (i.e., if they are not able to complete all screeners), and model missingness to identify possible correlates. LCA assumes that the classification errors inherent in each screener are independent of one another (referred to as local independence), but allows for modeling of dependence between underlying screener constructs. We will separate administrations of each screener in time to reduce local dependence as much as possible.

LCA will be used to model classification error in the screening tools and will produce estimates of response probabilities, reliability, and bias (such as social desirability or interviewer bias), as described by Biemer (2011). LCA estimates the “true” classification for each individual and treats it as a latent variable, using maximum likelihood estimation to assess the relationship of the indicator variables produced by each screening tool to the latent variable. To allow the true value associated with a person to vary across the three screening time points, we will estimate Markov latent class models (MLCMs). Grouping variables (e.g., participant demographic characteristics) mentioned previously will be added to the models to improve model fit and identifiability, and to test variation in screener performance by these characteristics. Model fit may also be improved by analyzing physical violence and coercive control as two correlated latent variables within the same model.
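For readers unfamiliar with the technique, the sketch below estimates a minimal two-class latent class model for three dichotomous screener results using the EM algorithm on simulated data. It illustrates the general idea only; the study's actual models (Markov latent class models with grouping variables and missing-data handling) are substantially richer, and all parameter values here are invented.

```python
# Minimal two-class LCA for three binary screeners, fit by EM on simulated
# data. Illustrative only: parameter values are invented, and the study's
# actual Markov latent class models are more complex.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 600 respondents: true (unobserved) IPV status plus three
# imperfect binary screener results.
n, prevalence = 600, 0.30
true_sens = np.array([0.85, 0.75, 0.70])  # P(screen positive | IPV)
true_fpr = np.array([0.10, 0.05, 0.15])   # P(screen positive | no IPV)
status = rng.random(n) < prevalence
X = (rng.random((n, 3)) < np.where(status[:, None], true_sens, true_fpr)).astype(float)

# EM estimation. Initializing sensitivity above the false-positive rate
# anchors the class labels (avoids label switching).
pi, sens, fpr = 0.5, np.full(3, 0.7), np.full(3, 0.3)
for _ in range(500):
    # E-step: posterior P(IPV | responses), assuming local independence.
    lik1 = pi * np.prod(sens ** X * (1 - sens) ** (1 - X), axis=1)
    lik0 = (1 - pi) * np.prod(fpr ** X * (1 - fpr) ** (1 - X), axis=1)
    post = lik1 / (lik1 + lik0)
    # M-step: update prevalence, sensitivities, and false-positive rates.
    pi = post.mean()
    sens = (post[:, None] * X).sum(axis=0) / post.sum()
    fpr = ((1 - post)[:, None] * X).sum(axis=0) / (1 - post).sum()

print(f"estimated prevalence: {pi:.2f}")
print("estimated sensitivities:", np.round(sens, 2))
print("estimated specificities:", np.round(1 - fpr, 2))
```

Because the model compares the three screeners to one another, each screener's sensitivity and specificity are recovered without designating any one of them as a gold standard.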

To assess how open- and closed-ended screeners compare from participants’ perspectives, we will then analyze data from the post-screener items on knowledge, opinions, and perceptions (see Attachment C.1). First, we will group items conceptually, according to which of the following constructs they correspond to: perceptions of screening questions, trauma-informed practice, survivor-defined practice, self-efficacy for harm reduction, safety-related empowerment, and decisional conflict. We will then conduct a basic factor analysis to identify empirical correlations among these items that indicate whether and how they might be grouped into summed composites, which will serve as the dependent variables in our analysis. We will construct multiple regression models to assess the influence of the reference screening tool on each of the dependent variables, while controlling for site and/or site-related demographic variables that are independently correlated with the dependent variables.
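The sketch below illustrates this composite-and-regression step in Python with simulated data. All column names (empower_1 through empower_3, screener, site) are placeholders we invented; they are not the study's actual variable or construct codings.

```python
# Sketch: check item groupings with a factor analysis, sum a composite, and
# regress it on the reference screening tool while controlling for site.
# Data and variable names are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "screener": rng.choice(["closed_A", "closed_B", "open"], size=n),
    "site": rng.choice(["site1", "site2", "site3", "site4"], size=n),
    "empower_1": rng.integers(1, 6, size=n).astype(float),
    "empower_2": rng.integers(1, 6, size=n).astype(float),
    "empower_3": rng.integers(1, 6, size=n).astype(float),
})
items = ["empower_1", "empower_2", "empower_3"]

# Factor analysis: do the conceptually grouped items hang together
# empirically before we sum them into a composite?
loadings = FactorAnalysis(n_components=1).fit(df[items]).components_
print("loadings:", np.round(loadings, 2))

# Summed composite serves as the dependent variable.
df["empowerment"] = df[items].sum(axis=1)

# Multiple regression: effect of the reference screening tool on the
# composite, controlling for site.
model = smf.ols(
    "empowerment ~ C(screener, Treatment(reference='open')) + C(site)",
    data=df,
).fit()
print(model.summary())
```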


B.3 Methods to Maximize Response Rates and Deal with Nonresponse

We will employ several methods to maximize HR program participant response rates for the screeners, while ensuring that HR program participants understand that their decision to participate or not will have no effect on any services they receive from the program.

B.3.1 Tokens of Appreciation

Offering a token of appreciation for study participation will help gain cooperation from a larger proportion of the sample. Promised tokens of appreciation have been found to be an effective means of increasing response rates and reducing nonresponse bias by gaining cooperation from those less interested in the topic (Cantor, Wang, and Abi-Habib, 2003; Groves, Couper, Presser, Singer, Tourangeau, Acosta, and Nelson, 2006; Groves, Singer, and Corning, 2000). In addition, studies have demonstrated that tokens of appreciation are effective in retaining participants in longitudinal studies (Booker et al., 2011), and specifically in retaining youth in longitudinal studies involving sensitive topics (Henderson et al., 2008), thereby reducing nonresponse bias associated with attrition. To encourage response to each of the screeners, participants will receive a $5 token of appreciation each time they complete a screener (rather than a blanket token of appreciation for study participation), with a $5 bonus for completing all three screeners and a $5 bonus for completing the parent permission form (for youth). (Justification for this token of appreciation amount can be found in Supporting Statement A.)

B.3.2 Integration into Regular Program Activities

At each site, screening tools will be administered as part of regular program participant interactions with program staff. This strategy will maximize ease for respondents and eliminate the need for respondents to remember or travel to special appointments associated with the study.

B.3.3 Addressing Non-Response Bias

Composite scores for analysis will be computed for the closed-ended screeners when at least 75% of the items are completed. Randomly ordering administration of the three screeners for each respondent will help ensure comparable rates of missingness on each due to nonresponse or attrition from the study. Participants with responses to at least one screener will be included in analyses. Missing data will be modeled as part of the LCA using full information maximum likelihood methods (Biemer, 2011), and variables that may be related to missingness will be examined as correlates.
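A minimal sketch of the 75% completion rule follows, assuming a prorated (mean-based) composite; the text above specifies only the threshold, so the prorating choice is our assumption.

```python
# Composite computed only when at least 75% of a screener's items were
# answered; otherwise missing. Prorating from the answered items is an
# assumption; the study text specifies only the 75% threshold.
import numpy as np
import pandas as pd


def composite_score(items: pd.DataFrame, min_complete: float = 0.75) -> pd.Series:
    """Row-wise prorated sum, missing below the completion threshold."""
    n_items = items.shape[1]
    answered = items.notna().sum(axis=1)
    prorated = items.mean(axis=1) * n_items  # mean of answered items, rescaled
    return prorated.where(answered / n_items >= min_complete)


# Example: the third respondent answered only 2 of 4 items (50%), so the
# composite for that row is set to missing.
items = pd.DataFrame({
    "q1": [1, 0, 1], "q2": [0, 1, np.nan],
    "q3": [1, 1, np.nan], "q4": [0, np.nan, 1],
})
print(composite_score(items))
```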


B.3.4 Response Rate Calculation

A single response rate will be calculated to assess the degree to which survey participants represent the population of interest, which is participants in four federally funded HR programs. The response rate will be computed based on the American Association of Public Opinion Research (AAPOR) response rate formula (AAPOR, 2008). The AAPOR calculation is a standard developed by researchers and established as a requirement by a leading journal for survey methodology (Public Opinion Quarterly). This particular formula is the most commonly implemented formula that 1) accounts for ineligibility among cases with unknown eligibility; and 2) treats partial completions (by respondents who have answered all pre-identified essential questions) as participating cases.
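The description above (partial completions counted as respondents, with an estimated eligibility rate applied to cases of unknown eligibility) corresponds to AAPOR's Response Rate 4 (RR4); the sketch below assumes that formula, with purely illustrative counts.

```python
# AAPOR RR4 = (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO)).
# Counts below are invented for illustration.


def aapor_rr4(I: int, P: int, R: int, NC: int, O: int,
              UH: int, UO: int, e: float) -> float:
    """I: complete interviews; P: qualifying partials; R: refusals;
    NC: non-contacts; O: other non-response; UH/UO: unknown-eligibility
    cases; e: estimated share of unknown cases that are eligible."""
    return (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO))


print(f"RR4 = {aapor_rr4(I=500, P=60, R=20, NC=10, O=5, UH=8, UO=2, e=0.9):.3f}")
```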

B.4 Tests of Procedures or Methods to be Undertaken

The four closed-ended screeners included in this field study have been the subject of extensive testing in a variety of research and clinical settings. The versions to be included in the RIViR study are the most recent available, reflecting iterative improvements by their developers (McLanahan, 2003; Centers for Disease Control and Prevention, 2017; Fernandez-Gonzalez, Wekerle & Goldstein, 2012; Heron, Thompson, Jackson, & Kaslow, 2003; Jory, 2004; Smith, Earp, & DeVellis, 1995). The two open-ended screeners were adapted by RTI from open-ended tools recommended by our panel of academic and practitioner experts: Futures Without Violence: Addressing Intimate Partner Violence Reproductive and Sexual Coercion (Futures Without Violence, 2013) and Is Your Relationship Affecting Your Health? (Futures Without Violence, 2012). The two open-ended approaches include safety cards and other resources for integrating and sustaining a trauma-informed, coordinated response to IPV and reproductive and sexual coercion in service delivery settings. No evidence yet exists on the effectiveness of these tools; establishing their effectiveness at distinguishing participants who are experiencing IPV or TDV from those who are not is one of the focal aims of this study.

Each of the closed-ended screeners will be programmed in Voxco, web-based survey software suitable for administering the tools online. RTI will develop and implement a rigorous internal testing protocol and a set of mock scenarios designed to exercise all skip patterns in the tools. Once multiple test cases have been generated, RTI will run frequencies and cross-tabulations to identify any issues in the resulting test data. All testing results will be shared with OPRE, and any issues identified during this internal testing phase will be addressed. Data collection in all sites will be closely monitored to ensure that all skip patterns and data collection procedures are operating correctly. Any necessary adjustments to the web-based Voxco screeners or screener administration procedures will be made during the initial weeks of data collection. An OMB Change Request will be submitted if there is an increase in burden.
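As a sketch of the frequencies-and-crosstabs check described above: the skip rule tested here (a follow-up item answered only when a hypothetical gateway item is "yes") is invented for illustration and is not an actual RIViR skip pattern.

```python
# Checking one (invented) skip pattern in mock test data with frequencies,
# a crosstab, and an explicit violation flag.
import pandas as pd

mock = pd.DataFrame({
    "gateway": ["yes", "yes", "no", "no", "yes"],
    "followup": ["a", "b", None, None, "c"],  # missing iff gateway == "no"
})

print(mock["gateway"].value_counts())                         # frequencies
print(pd.crosstab(mock["gateway"], mock["followup"].isna()))  # crosstab

# Any follow-up answered despite gateway == "no", or missing despite "yes"?
violations = mock[((mock["gateway"] == "no") & mock["followup"].notna())
                  | ((mock["gateway"] == "yes") & mock["followup"].isna())]
print(f"{len(violations)} skip-pattern violation(s)")
```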

B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and Analyzing Data


B.5.1 Individuals who have participated in designing the RIViR effort:

Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services Staff

Email Address


Samantha Illangasekare, Project Officer

Samantha.Illangasekare@acf.hhs.gov

RTI International Staff

Email Address

Anupa Bir, Principal Investigator

abir@rti.org

Tasseli McKay, Project Director

tmckay@rti.org

Monique Clinton-Sherrod, Associate Project Director

mclinton@rti.org

Marni Kan

mkan@rti.org

Kate Krieger

kkrieger@rti.org

Stacey Cutbush

scutbush@rti.org

Julia Brinton

jbrinton@rti.org

Expert Consultants

Email Address

Michael Johnson, Professor Emeritus, Penn State University

mpj@psu.edu

Anne Menard, Director, National Resource Center on Domestic Violence

amenard@nrcdv.org

Oliver Williams, Director, Institute on Domestic Violence in the African American Community

owillia@idvaac.org

Sandra Martin, Professor, University of North Carolina at Chapel Hill

smartin@unc.edu


Joe Jones, Founder and CEO, Center for Urban Families

jjones@cfuf.org

Elizabeth Miller, Professor, University of Pittsburgh Medical Center

Elizabeth.miller@chp.edu

B.5.2 Individuals who will participate in the collection of RIViR data:

RTI International Staff

Email Address

Anupa Bir, Principal Investigator

abir@rti.org

Tasseli McKay, Project Director

tmckay@rti.org

Monique Clinton-Sherrod, Associate Project Director

mclinton@rti.org

Marni Kan

mkan@rti.org

Kate Krieger

kkrieger@rti.org

Stacey Cutbush

scutbush@rti.org

Julia Brinton

jbrinton@rti.org



B.5.3 Individuals who will participate in RIViR data analysis:

RTI International Staff

Email Address

Anupa Bir, Principal Investigator

abir@rti.org

Tasseli McKay, Project Director

tmckay@rti.org

Monique Clinton-Sherrod, Associate Project Director

mclinton@rti.org

Marni Kan

mkan@rti.org

Kate Krieger

kkrieger@rti.org

Stacey Cutbush

scutbush@rti.org

Julia Brinton

jbrinton@rti.org


