OMB: 1121-0343

Supporting Statements A for Methodological Research to Support the NCVS: Self-Report Data on Rape and Sexual Assault: Feasibility and Pilot Test



February 2014















Supporting Statement

A. Justification

A1. Necessity of Information

The Bureau of Justice Statistics (BJS), of the U.S. Department of Justice, requests clearance for activities related to the National Crime Victimization Survey Redesign Research (NCVS-RR) program. BJS, in consultation with Westat under a cooperative agreement (Award 2011-NV-CX-K074, Methodological Research to Support the National Crime Victimization Survey: Self-Report Data on Rape and Sexual Assault - Pilot Test), has planned methodological research to develop and test two different survey designs for collecting self-report data on rape and sexual assault. This activity falls under the authorities of the Omnibus Crime Control and Safe Streets Act of 1968, under which BJS is charged to “conduct or support research relating to methods of gathering or analyzing justice statistics” (Section 302(c)(12)). In December 2012, a clearance request for cognitive interviewing to test instruments was approved under the NCVS-RR OMB generic clearance agreement (OMB Number 1121-0325). In accordance with that request, BJS now seeks a separate clearance to conduct feasibility and pilot testing.


Over the past two decades, there have been a number of competing national estimates of the level of rape and sexual assault in the U.S. The official estimates of these crimes released by BJS and based on the NCVS have typically been lower than estimates obtained from surveys sponsored by other federal agencies and by private groups (Black et al., 2011; Koss & Gidycz, 1985; Tjaden & Thoennes, 2000; Kilpatrick, 2007; Fisher, 2004). For example, estimates of rape from the National Violence Against Women Survey are approximately 4 times higher than comparable NCVS estimates (Tjaden & Thoennes, 2000). Estimates of rape from the National Intimate Partner and Sexual Violence Survey (NISVS; Black et al., 2011) are approximately 10 times higher than those from the NCVS (Truman, 2011).1 The differences among these studies have generated debate over the best method for collecting self-report data on rape and sexual assault (Fisher & Cullen, 2000) and some confusion about the level of rape and sexual assault in the nation (e.g., Gilbert, 1997; Lynch, 1996; Rand & Rennison, 2005; Bialik, 2013). There is no consensus in the field on the optimal set of procedures for collecting self-reports of rape and sexual assault, and, to date, no survey has employed all of the apparently beneficial design features available.


Some of the differences in these estimates result from more or less inclusive definitions of rape and sexual assault. The NCVS, for example, emphasizes felony forcible rape, while other surveys employ a much more inclusive definition. Even when surveys use comparable definitions, however, the methodology used to elicit reports of these events can differ dramatically and produce very different estimates of the incidence of these crimes. A number of discussions have taken place regarding the desirability of various survey design features, including sample design, screening strategy, reference period, bounding, cueing strategy, context, and respondent selection. In addition, differing interviewing modes have been discussed, including telephone interviews, in-person interviews, and more recently, Audio Computer Assisted Self-Interviewing (ACASI).


For example, the NCVS begins with an in-person visit during which the respondent is administered a two-stage instrument consisting of a crime screener and a detailed incident form. Details of the event collected in the incident form are used for crime classification. The reference period for the NCVS is six months, and interviews have historically been bounded by a prior interview or adjusted to account for the telescoping of events into the reference period. Because the survey serves as an indicator of crime, one emphasis has been on counting the number of incidents that occur. This emphasis leads to a criminal justice-oriented survey that asks about “criminal victimization.” The screening instrument covers a wide range of victimizations, including serious violence (rape, sexual assault, robbery, aggravated assault, simple assault) and property crimes (household burglary, motor vehicle theft, property theft). The coverage and response rates of the survey are relatively high compared to other surveys.


An alternative to the criminal justice approach of the NCVS is the “public health” approach, which focuses on a broader range of interpersonal violence over extended reference periods, generally lifetime and the last 12 months. These surveys use behaviorally specific questions and cue respondents with explicit reference to the actions that make up the definition of rape and/or sexual assault. These concepts are generally introduced to respondents as part of a survey covering health, injuries, or safety. The surveys classify events according to the initial questions that reference the behaviors, with no follow-up to count the number of occurrences or to assess the nature of the event. Most of the studies completed to date have used random digit dial (RDD) sample designs within a centrally monitored computer-assisted telephone interviewing (CATI) facility.2 The response and coverage rates are generally lower than those of the NCVS, but the interviewers tend to be more closely monitored in a central data collection facility.


The purpose of this project is to identify, develop, and test alternative methods for collecting self-report data on rape and sexual assault. The proposed work will enhance our understanding of the discrepancies that arise from differing self-report methodologies used in measuring rape and sexual assault and will assist in determining the optimal design components for measuring these crimes. The results of this project will be used to redesign the methods used on the NCVS to collect data on rape and sexual assault.


The project will collect data on rape and sexual assault among adult females. BJS intends ultimately to collect data on rape and sexual assault for both genders (as is currently the case). However, the prevalence of rape and sexual assault among men in the general population is at most one-tenth of that among women. Because of this low prevalence, it is impractical for this study to include males. Neither the NCVS nor the NISVS, for example, can produce a reliable 12-month estimate for males, and the proposed study has significantly smaller sample sizes than either of these studies. The NISVS, for example, interviewed fewer than 30 male victims out of the 10,000 respondents who completed the survey. Prior studies that have included men have not reported any special measurement issues associated with the gender of the respondent, and they have all used questions and procedures that were very similar for both genders. The goal of this project is to develop a methodology using female respondents; this methodology will be adapted for males when the survey is finalized for production.


The goals of this project include developing a methodology for measuring rape and sexual assault within the NCVS program, comparing the methodology to existing methods, and evaluating the quality, utility, and cost of the methodology.


From these goals arise three objectives:


  1. Develop and pilot test an optimal design using Audio Computer Assisted Self Interviewing (ACASI) to collect self-report data on rape and sexual assault.

  2. Develop and pilot test a comparison design using Random Digit Dialing (RDD) to collect self-report data on rape and sexual assault.

  3. Conduct detailed analytical comparisons of the two designs against each other and the existing NCVS program.

The primary research questions for the project are:


  1. What are the differences in data quality of the two approaches?

  2. How do the two approaches compare with the current NCVS measures?

  3. What are the comparative costs of the two approaches? Given these costs, how would a survey on rape and sexual assault fit within the ongoing NCVS program?

In conjunction with this research, BJS has commissioned an ad hoc panel from the National Research Council's Committee on National Statistics (CNSTAT) to assess the quality and relevance of statistics on rape and sexual assault. The panel is examining the legal definitions of these crimes in use by the states, the best methods for representing those definitions in survey instruments so that their meaning is clear to respondents, and the best methods for obtaining reporting of these crimes in surveys that is as complete as possible, including methods whereby respondents may report anonymously.


The BJS/Westat project team has met with the CNSTAT panel over the course of the development of the study design described in this package. These meetings have provided the project team with information on critical issues related to the design of a survey collecting data on rape and sexual assault. The project team has also presented preliminary designs to the panel during its public workshops. BJS plans to use the results of both the panel's review and this project when making final decisions on the methods used to collect data on rape and sexual assault.



A2. Needs and Uses

The benefits of this research include:


  • Evaluation of the accuracy, utility, and costs of improved collection procedures relative to those currently in use.

  • A better understanding of the types of events reported under methodological approaches that differ in coverage, response rates, and mode of interview.

  • Determination of whether the optimal design can be accommodated within the current NCVS program or whether an alternative collection is necessary.

  • Development of improved measurement of rape and sexual assault.

  • Improved national estimates of rape and sexual assault.

  • Improved data collection methodology and measurement within the NCVS program.



External Data Users and Stakeholders

Under the Omnibus Crime Control and Safe Streets Act of 1968, BJS is charged, in part, to “collect and analyze information concerning criminal victimization….” (Section 302(c)(2)) This study falls under the purview of these mandates by improving the methods used to collect data on rape and sexual assault, an offense that is particularly difficult to document in other ways.


The reports and data generated by this research will be of interest to a wide range of audiences, including government agencies, the criminal justice community, and the public. As noted in section A.1, there are several competing estimates of rape and sexual assault that are used by governments, service providers and researchers. By addressing the above goals, BJS will be able to design methods that address the shortcomings of these different approaches into a single methodology.



Uses by Federal, State and Local Governments

Because the NCVS is the only ongoing vehicle for producing data on a broad spectrum of subjects related to crime and crime victimization, legislators and policymakers at all levels of government rely on NCVS data. Specific examples of government agencies that will make use of the results of this study include:

Department of Justice, Bureau of Justice Statistics - Under the Omnibus Crime Control and Safe Streets Act of 1968, BJS is charged, in part, to


“collect and analyze information concerning criminal victimization….” (Section 302(c)(2))


and to


“collect and analyze data that will serve as a continuous and comparable national social indication of the prevalence, incidence, rates, extent, distribution, and attributes of crime, juvenile delinquency, civil disputes, and other statistical factors related to crime, civil disputes, and juvenile delinquency, in support of national, State, tribal, and local justice policy and decisionmaking….” (Section 302(c)(3))


This study will provide a way to meet these mandates by improving the methods used to collect data on rape and sexual assault, an offense that is particularly difficult to document in other ways.


U.S. Congress - The NCVS is currently the only national collection with the size and statistical precision to produce annual estimates for these types of crimes. Improved estimates of rape and sexual assault will allow Congress to evaluate current laws related to these crimes and assess whether changes are needed for different populations.


State and local criminal justice agencies - Data from this project will provide a common set of concepts, standard definitions, and counting rules that administrators will be able to use as a baseline for comparison. For example, counts of rapes and sexual assaults are often used when allocating resources to the detection of incidents and the provision of victim services. However, because the NCVS differs so markedly from other estimates, policymakers are not sure what the true level is. This project will illuminate why there may be differences and which estimate is most appropriate to use when planning.



Educational Institutions

Many researchers use NCVS data to prepare reports and scholarly publications. NCVS public-use data files housed at the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan were downloaded nearly 15,000 times from 2007 to 2012. The downloaded data are used in research projects across a number of academic disciplines, including sociology, criminology, psychology, and political science. This project will provide researchers of rape and sexual assault with data on the measurement properties of the alternative approaches. A number of studies have compared and contrasted the two different sets of estimates (e.g., Fisher & Cullen, 2000; Fisher, 2004; Lynch, 1996; Rand & Rennison, 2005; Tjaden & Thoennes, 2000). This study will directly compare estimates from the two different methodologies across several data quality dimensions.



Others

Other groups also use the NCVS for victim assistance, policy analysis, policy recommendations, testimony before Congress, and documentation for use in courts. Examples include the following:


National Crime Prevention Council - uses the NCVS data to develop programs on crime prevention and to train and educate individuals, communities, and organizations throughout the United States on effective crime prevention practices. An improved measure of rape and sexual assault will assist the NCPC in deciding on the scope and reach of programs that are concerned with prevention and treatment of rape and sexual assault victims.


Victim Advocacy Groups - use the data to identify vulnerable populations, crime victims that do not receive necessary criminal justice system resources, and to draw attention to the emotional, physical, and economic consequences of victimization. These groups heavily rely on national figures, such as those produced by the NCVS, to determine funding needs and outreach of programs. The “dueling” estimates produced by the NCVS and the public health surveys create some confusion on the nature and extent of the problems that these groups are trying to treat. This project will clarify the differences in these estimates and provide an improved measure of rape and sexual assault.


Print and broadcast media - have become increasingly familiar with the NCVS data, and the public regularly views news articles and press releases containing NCVS data. Findings from the NCVS appear regularly in a wide variety of contexts on television, radio, in print, and online when reporting on a host of crime-related topics.



A3. Use of Technology

Field: Field interviewers will conduct interviews using laptop computers. The initial contacts will be conducted using computer-assisted personal interviewing (CAPI). In conjunction with the CAPI instrument, the interviewer will use an electronic call record, which provides a detailed history of the attempts made to contact the household.


Because of the sensitivity of the sexual assault questions, women will enter their answers themselves using ACASI technology. Research with ACASI suggests respondents report sensitive behaviors at higher rates when questions are administered via ACASI rather than via traditional interviewer-administered methods (Tourangeau and Yan, 2007). Increased reporting is often taken to represent better data, since responses to sensitive questions like those on rape and sexual assault are generally thought to be underreported; however, few studies have confirmed this with external validation (e.g., Kreuter et al., 2008). The present study will use several methods to assess the quality of the reports, including administration of a detailed incident form, debriefings, and a re-interview of a subsample of respondents. Comparison with the interviewer-administered version of the survey (see below) will provide some assessment of both the quantity and the quality of ACASI reports.


This methodology also allows respondents to hear each question through headphones as it appears on the screen. The audio component allows respondents with low literacy levels to participate because it provides clear instructions on how to enter responses.


ACASI technology improves the flow of the interview through built-in skip patterns and filled-in reference periods that tailor specific questions to individuals. The technology also produces more accurate data through built-in edit checks.
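
As a schematic illustration of what such skip patterns and edit checks look like in instrument logic, consider the minimal sketch below. The question wording, response codes, and valid ranges are hypothetical, not drawn from the actual NSHS instrument:

```python
def ask(question, valid):
    """Present a question and enforce a built-in edit check:
    re-prompt until the answer falls within the valid set."""
    while True:
        try:
            answer = int(input(question + " "))
        except ValueError:
            continue  # non-numeric entry fails the edit check
        if answer in valid:
            return answer

def administer_screener():
    """Schematic ACASI-style flow with one skip pattern (hypothetical items)."""
    answers = {}
    answers["any_contact"] = ask(
        "Has anything like this happened in the last 12 months? (1=Yes, 2=No)",
        {1, 2})
    # Skip pattern: the follow-up item is asked only after a 'Yes'.
    if answers["any_contact"] == 1:
        answers["num_incidents"] = ask(
            "How many times did this happen? (0-96)", set(range(97)))
    return answers
```

Because the routing and range checks execute automatically, the respondent never sees an inapplicable question and out-of-range entries are resolved at the moment of data entry rather than in later editing.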


To support monitoring and quality analyses, we will record the interviewing portion of the field visits for which respondents agree to be recorded.


Phone: The telephone interviewing effort will be conducted by the distributed staff of the Telephone Research Center and will make use of several types of technology. The telephone survey will use computer-assisted telephone interviewing (CATI) technology. CATI allows for complex skip patterns in the questionnaire, which reduces administration time and respondent burden and allows questions to be tailored to specific subpopulations as needed. Interviewer accuracy is improved over paper-and-pencil administration because edits can be programmed, and out-of-range or inconsistent responses can be quickly resolved with the respondent. Data are written directly into a database, eliminating the need for time-consuming coding and data entry.


In addition, the telephone calling effort will be supported by a telephone system which coordinates the calling efforts for staff based in physical telephone centers as well as those working from their homes. The telephony system is capable of three-way conferencing, which will be used if respondents experience distress and need to be connected to a counselor.


A call scheduler will distribute cases in need of calls to the most appropriate type of staff in any location, with all interviewers coded into specific work classes representing different interviewer qualifications or capabilities. For example, cases requiring calls by Spanish-English bilingual interviewers will only be delivered to those in the Spanish work class, and cases in need of refusal conversion will only be delivered to interviewers in the refusal work class.


Specifically for the landline sample, predictive dialing will be used at the initial household screening stage in an effort to reduce costs and increase productivity and efficiency for the interviewers dialing these cases. Once contact has been made with someone at the sampled landline telephone number, the case will switch out of predictive dialing and into a traditional dialing mode in which the interviewer places the call and waits for an outcome to code (e.g., ring no answer, answering machine, or some form of contact with the household). This technology is being implemented because of the large volume of non-contact outcomes expected from the initial screening effort on the landline sample. Due to federal restrictions, predictive dialing will not be employed for the cell phone sample.


To facilitate internal communication among distributed staff, telephone data collection staff at all levels (interviewers, supervisors, and managers) are connected at all times over an instant messaging system, allowing for quick reporting of any problem situations and triaging of those problems to the staff best suited to resolve them. In addition, a project-specific SharePoint site will be created so that regular updates can be posted regarding study progress, specific data collection issues, or any other type of study announcement, keeping all interviewers, supervisors, and managers up to date.


Finally, to support monitoring and quality analyses, we will record all telephone interviews for which respondents agree to be recorded.



A4. Efforts to Identify Duplication

The Centers for Disease Control and Prevention (CDC) currently collects data on sexual violence through the National Intimate Partner and Sexual Violence Survey (NISVS). This survey uses the public health methodology discussed in section A.1. These data concentrate on sexual violence, not necessarily violence that would count as a crime. The survey also uses a methodology, random digit dial (RDD) telephone interviewing, that yields coverage and response rates well below those of the NCVS. The goal of the present study is to assess how this type of methodology compares with one that is more consistent with the goals of the NCVS, which are to count and classify events into specific crime categories.


The public health surveys have used answers to behaviorally specific screening questions to classify the event (e.g., rape, sexual violence), without any detailed follow-up. Using a two-stage methodology similar to the NCVS, the current study will use behaviorally specific screening items similar to those employed by the public health approach. It will also administer a detailed incident form to classify and count the number of events. This step matters because it is relatively common for incidents reported on the NCVS screener to shift in classification, or to be determined not to be a crime, when followed up with more detailed questions. The present study will use the follow-up questions to categorize events into the appropriate crime categories and to count those events for purposes of estimating rates of rape and other sexual assault.


It is also expected that the two surveys (RDD and ACASI) will achieve different response rates. A comparison of the profiles of the respondents to the two surveys will provide another measure of how the two methodologies differ. For example, it might be the case that the ACASI survey will reach lower-income and younger individuals, who are at greater risk of sexual assault. The extent to which this is true can be assessed by comparing the characteristics of respondents from each method.


For more details on the analysis plans, see Section B.2e.



A5. Efforts to Minimize Burden

This project will minimize burden on respondents in two ways. First, all materials provided to the respondent have been designed to be easy to read and use. The written materials (e.g., advance letters) are as short and direct as possible. For the main survey interview, a series of cognitive interviews was completed to test whether the questions were easily understood by respondents and could be answered with a minimal amount of effort.


Second, the interview uses a two-stage methodology to reduce burden. The first stage asks whether the respondent has been victimized, and the second stage follows up with those respondents who report a victimization and asks about the details of each specific incident. To minimize burden, the survey will administer the detailed follow-up questions only for incidents reported as occurring within the previous 12 months; those reporting incidents that occurred more than 12 months in the past will not be asked the additional questions. In a related decision to reduce burden, the number of incidents receiving follow-up questions is limited to three: those who report four or more incidents within the last 12 months will be asked to report the details of the three most recent incidents, as illustrated in the sketch below.
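
A minimal sketch of this selection rule follows; the incident representation (ID paired with months since occurrence) is hypothetical and stands in for the instrument's actual data structure:

```python
def incidents_for_followup(incidents, max_difs=3):
    """Return incident IDs eligible for a detailed incident form (DIF).

    `incidents` is a list of (incident_id, months_ago) pairs; the field
    layout is hypothetical, not the instrument's actual specification.
    """
    # Only incidents reported as occurring within the previous 12 months
    # receive the detailed follow-up questions.
    recent = [(iid, months) for iid, months in incidents if months <= 12]
    # At most three incidents are followed up: the three most recent.
    recent.sort(key=lambda pair: pair[1])
    return [iid for iid, _ in recent[:max_difs]]

# Example: five reported incidents, two outside the reference period.
print(incidents_for_followup([("A", 2), ("B", 14), ("C", 5), ("D", 9), ("E", 30)]))
# -> ['A', 'C', 'D']
```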



A6. Consequences of Less Frequent Collection

This is a one-time collection. The results will be used to make recommendations for whether the proposed methodology should be used as part of the ongoing NCVS and the frequency with which it should be used.



A7. Special Circumstances Influencing Collection

These data will be collected in a manner consistent with the guidelines in 5 CFR 1320.6.



A8. Federal Register Publication and Outside Consultation

The research under this clearance is consistent with the guidelines in 5 CFR 1320.6. The 60- and 30-day notices soliciting public comment will be published in the Federal Register.


As noted in section A.1, BJS charged an expert panel from the National Research Council's Committee on National Statistics (CNSTAT) with examining conceptual and methodological issues surrounding survey statistics on rape and sexual assault. As part of this review, BJS and Westat have provided the panel with updates on the evolution of the study design, as well as the instruments as they were being developed. BJS and Westat have attended three separate meetings with the CNSTAT panel, presenting the design and discussing issues (December 8, 2011; June 5 & 6, 2012; December 10, 2012), and individuals on the panel provided feedback at each of the three presentations. In addition, BJS and Westat have responded to questions from the panel.


The members of the panel are listed below:


Dr. William D. Kalsbeek - (Co-Chair)
The University of North Carolina at Chapel Hill

(919) 962-3249
bill_kalsbeek@unc.edu


Dr. Candace Kruttschnitt - (Co-Chair)
University of Toronto

(416) 978-8487

c.kruttschnitt@utoronto.ca


Dr. Paul P. Biemer
RTI International

(919) 541-6056

ppb@rti.org


Dr. John Boyle
Abt SRBI, Inc.

(301) 608-3883

j.boyle@srbi.com


Dr. Bonnie Fisher
University of Cincinnati

(513) 556-5826

Bonnie.Fisher@uc.edu


Dr. Karen Heimer
The University of Iowa

(319) 335-2498

karen-heimer@uiowa.edu


Dr. Linda Ledray
Sexual Assault Resource Service

(612) 347-0910

Linda@sane-sart.com


Dr. Colin Loftin
State University of New York at Albany

(518) 442-5216

cloftin@albany.edu


Dr. Ruth D. Peterson
The Ohio State University

(614) 688-4930

peterson.5@sociology.osu.edu


Dr. Nora Cate Schaeffer
University of Wisconsin-Madison

(608) 262-3868

schaeffer@ssc.wisc.edu


Dr. Tom Smith
The University of Chicago

(773) 256-6288

smitht@norc.uchicago.edu


Dr. Bruce D. Spencer
Northwestern University

(847) 491-5810

bspencer@northwestern.edu



Other invited attendees of the meetings where the design was presented:


Dr. Janet Lauritsen

University of Missouri, St. Louis

(314) 516-5427

janet_lauritsen@umsl.edu


Dr. Kenneth Rasinski

Department of Medicine

University of Chicago

(773) 834-6837

kennethr@uchicago.edu


Ms. Carol E. Tracy

Women’s Law Project

(215) 928-9801


Ms. Terry L. Fromson

Women’s Law Project

(215) 928-9801


Ms. Jennifer Gentile Long

AEquitas

(202) 558-0029

JLong@AEquitasResource.org



Ms. Charlene Whitman

AEquitas

(202) 499-0314

cwhitman@aequitasresource.org


Dr. Ronet Bachman

University of Delaware

(302) 831-3267

Ronet@udel.edu


Dr. Lynn Addington

American University

(202) 885-2902

adding@american.edu


Mr. Scott Berkowitz

President and Founder, RAINN (Rape, Abuse, and Incest National Network)

(202) 544-3064


A9. Payment or Gift to Respondents

As has been documented elsewhere (e.g., Brick and Williams, 2013; Curtin et al., 2005), it is increasingly difficult to achieve high response rates in surveys. In some instances, incentives have been found to be cost neutral, as the price of the incentive is offset by the reduction in field time and contact attempts necessary to garner participation (Research Triangle Institute, 2002). The proposed survey covers topics that are particularly sensitive, which adds significant burden to the interview. Incentives are a reliable way to recognize this burden and to increase the overall quality of the survey by maximizing the response rate and increasing the efficiency of survey operations.


Maximizing statistical power and coverage will be critical for the project, since less than 5 percent of respondents are expected to report a rape or sexual assault in the past 12 months. Young people, who consistently exhibit high nonresponse rates in household surveys, are at higher risk of rape and sexual assault, which makes the risk of nonresponse bias relatively high for the critical estimates of this research. Several studies have found that incentives are particularly effective for minority and low-income groups (Singer, 2002). These groups are also subject to higher risk of rape and sexual assault.


An important goal of the feasibility and pilot studies is to inform design decisions that will be implemented in the National Crime Victimization Survey (NCVS). Comparisons of the two modes to be tested in the pilot study will depend on achieving high response rates in both the telephone and in-person designs. Maximizing response rates will also be necessary for comparing the results of these studies to the current NCVS, which has response rates in the high 80-percent range. Maximizing response rates for the proposed study will reduce the extent to which observed differences from the NCVS are due to nonresponse bias.


The incentives vary by survey mode and sample type. Table 1 below shows the amount and number of recipients for each incentive. Following the table are descriptions of each incentive and the rationale for their use.



Table 1. Incentive Structure for Feasibility and Pilot Studies (Feasibility and Pilot columns show the number of recipients)

| Sample and Incentive | Field Amt | Field Feasibility | Field Pilot | Phone Amt | Phone Feasibility | Phone Pilot |
| Probability Sample: Mail Household Roster | $2 | 200 | 33,072 | - | - | - |
| Probability Sample: Main Interview | $20 | 40 | 7,500 | $20 | 40 | 8,000 |
| Probability Sample: Re-Interview | $20 | 5 | 350 | $20 | 5 | 350 |
| High Risk Sample: Main Interview | $30 | 40 | 1,000 | $30 | 40 | 1,000 |
| High Risk Sample: Re-Interview | $30 | 5 | 150 | $30 | 5 | 150 |
| Service Provider Sample: Main Interview | $30 | 20 | 300 | $30 | 20 | 300 |
| Service Provider Sample: Travel Offset | $10 | 20 | 300 | - | - | - |


$2 Pre-Paid Incentive for the Mail Survey Asking for a Household Roster

Households selected as part of the address-based sample (ABS) will be sent a mail survey asking them to complete a household roster (Attachments B-C). If returned, the roster will be used to determine whether an eligible female lives at the sampled address. If there is no eligible female, the household will be coded as ineligible. If there is an eligible female, a field representative will attempt to complete an interview using ACASI.


We are proposing a pre-paid incentive of $2 for the return of the mail roster. An incentive is particularly cost-effective at this stage because it potentially saves the cost of having an interviewer visit the household. Our calculations indicate that if a $2 incentive increased response rates by five percentage points, the study would experience lower net costs because of the reduction in field labor.
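
As a rough illustration of that calculation, using the 33,072 pilot-test roster mailings from Table 1 (the break-even figure below is our own derived arithmetic under the five-percentage-point assumption, not a project cost estimate):

\[
\text{incentive cost} = 33{,}072 \times \$2 = \$66{,}144, \qquad
\text{visits avoided} \approx 0.05 \times 33{,}072 \approx 1{,}654,
\]
\[
\text{break-even cost per avoided visit} \approx \frac{\$66{,}144}{1{,}654} = \frac{\$2}{0.05} = \$40.
\]

On this arithmetic, the incentive pays for itself whenever an avoided in-person screening visit costs more than roughly $40 in field labor.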


The empirical evidence for mail surveys is that a small pre-paid incentive significantly increases the response rate (Church, 1993; Trussell and Lavrakas, 2004). We are proposing a $2 incentive because this amount has been found to have a significantly greater effect than $1 (Trussell and Lavrakas, 2004).


If the household is determined to be eligible for the study, a letter will be sent stating that an interviewer will visit the household soon to conduct the study (Attachment D).


$20 Promised Incentive for RDD and ABS Respondents

All telephone and in-person respondents who are part of the probability sample and who complete the main interview will be provided $20 in appreciation for their completion of the survey. An incentive is necessary for two reasons. First, promised incentives have been found to improve response rates when they are large enough (Cantor, O’Hare, & O’Connor, 2008). The proposed incentive of $20 falls within the range of incentives that have been found to be effective.


For telephone interviews, several studies have found that promising less than $20 has not consistently increased response rates. For example, Cantor, Schiffrin, Park, and Hesse (2006) experimented with promised incentives of $0, $5, and $15 in a telephone survey and found no difference in response rates between $5 promised and no incentive. Those offered $15 had a response rate 6 percentage points higher than the two lower incentive groups prior to refusal conversion; however, the effects were not significant after refusal conversion. Similarly, Strouse and Hall (1997) experimented with promised incentives in an RDD health study and found only “marginally increased screener cooperation rates” when promising $10 relative to no incentive, and no improvement when promising $5 relative to no incentive. However, in another experiment testing higher promised amounts ($15, $25, $35), Strouse and Hall (1997) found a significant positive effect of promised incentives relative to no incentive: promising $25 yielded a response rate about 9 percentage points higher than promising no incentive at all (48.1% vs. 39.2%).


The National Intimate Partner and Sexual Violence Survey (NISVS), which asks questions similar to those on the victimization screener of the National Survey on Health and Safety (NSHS, the name under which the proposed survey will be fielded), promises $10 to initial cooperators but $40 to a sample of refusers. The NISVS design acknowledges that more than $10 is necessary to obtain an acceptable response rate. Our recommended design uses a uniform amount rather than basing the amount on the difficulty associated with completing an interview. Given the sensitive nature of the survey, the NSHS is not conducting any refusal conversion; consequently, even if differential incentives were desirable, they could not be implemented.

Table 2. Incentive and Burden on Selected Federally Sponsored Surveys

| Survey | Task | Average Length | Effort | Sensitivity | Panel | Incentive |
| National Survey on Health and Safety | Autobiographical questions on sexual assault | 19 minutes | Average | High: private information; explicit language; potential of emotional trauma and retaliation | No | $20 |
| Program for International Assessment of Adult Competencies | Educational assessments | 2 hours | Average | Average | No | $50 |
| National Epidemiologic Survey on Alcohol and Related Conditions | Autobiographical questions on alcohol use; provide biological samples | 2 hours | Average | Above average: high-risk behaviors | Yes | $90 for interview |
| National Health and Nutrition Examination Survey | Autobiographical questions on health and physical examination | 60 minutes for household interview plus time for exam | High: travel for exam; physical intrusion | Above average: HIV; questions on drug use | No | $90-$125 for interview and exam; travel reimbursement; $30-$50 per phone interview, activity monitor, urine sample |
| National Longitudinal Survey of Youth | Autobiographical questions on labor market activities and other life events | 65 minutes | Average | Average | Yes | $40 |
| National Children's Study | Autobiographical questions on child development | 45 minutes | Average | Above average: personal questions | Yes | $25 |
| National Health and Aging Trends Study | Autobiographical questions on health and aging | 105 minutes | Average | Average | Yes | $40 |
| Population Assessment of Tobacco and Health Study | Autobiographical questions on tobacco use and health; provide saliva sample | 45 minutes | Average | Above average: risk behaviors | Yes | $35 for interview; $10-$25 per parent interview and bio collection |
| Medical Expenditure Panel Survey (MEPS) | Autobiographical questions on health expenditures | 60 minutes | Above average: records | Above average: expenditures and income | Yes | $50 |
| National Survey on Drug Use and Health | Autobiographical questions on drug use | 60 minutes | Average | Above average: illegal behavior | No | $30 |
| Add Health | Autobiographical questions on health and health-related behaviors | 90 minutes | Average | Above average: illegal behavior | Yes | $40 for latest wave |

Note: Effort is rated as average unless the survey requires travel, physical procedures, or keeping records. Sensitivity is rated as average unless the survey asks about sensitive behaviors (e.g., illegal or high-risk) and/or potentially traumatic experiences, or uses explicit language.

The second reason to propose an incentive is related to the sensitive nature of the survey. To provide a more explicit comparison with other in-person surveys, Table 2 provides information on several federally sponsored in-person surveys along dimensions that define survey burden (Bradburn, 1978; Singer et al., 1999). In the table, burden is defined by the average time to complete the interview, the effort needed to complete the required tasks, the sensitivity of the questions, and whether the survey is longitudinal. The note at the bottom of the table explains how the surveys were rated on the ‘effort' and ‘sensitivity' dimensions. We have rated the NSHS as the most sensitive among these surveys for several reasons. One is the extremely private nature of the topic. This sensitivity leads to a design that does not reveal the specific topic of the survey until the respondent has been selected and is in a private setting, unlike any of the other surveys on sensitive topics such as drug use, alcohol use, or income. Following the practice of other surveys on intimate partner violence, this procedure keeps the topic of the survey confidential within the household, which promotes candid reporting and prevents possible retaliation from other household members. However, revealing the specific topic at this point of the interview introduces additional burden related to the sensitivity of the survey.


Second, the questions have the potential to bring up negative emotions or feelings. Research on interviews of this type has generally shown that victims of sexual violence find these interviews to be a positive experience (Labott et al., 2013; Walker et al., 1997). Nonetheless, they can bring up negative emotions. This aspect of the survey is shared with other surveys on intimate partner violence, but it is unique among the surveys listed in Table 2.


A third reason the NSHS is rated highest on sensitivity is the use of a detailed incident form (DIF). While a relatively small number of respondents will fill out a DIF, this portion of the survey adds burden beyond what similar surveys have imposed. With one exception (Fisher, 2004), surveys on intimate partner violence have avoided asking for details because doing so can be very sensitive. An important goal of the NSHS is to assess the utility of the DIF for classifying and counting the number of incidents, something other surveys have not been able to do cleanly (see response to analysis question). The DIF includes questions on such topics as the type of force that might have been used, the extent to which alcohol or drugs were involved, and how the victim reacted to the situation. This adds significant burden to the task.


A promised incentive plays an important role in motivating respondents to complete the survey. Recent research testing an Interactive Voice Response version of the NCVS found that promising $10 increased both the number of respondents who filled out a victimization screener and the number completing all of the expected DIFs. Without an incentive, 30% of respondents did not complete all DIFs; this dropped to 20% among those who received an incentive (Cantor and Williams, 2013). The effect was directly related to the difficulty of the respondent's task. As noted above, we will not be conducting any refusal conversion once the respondent has been informed of the topic of the survey. The incentive levels for both the ABS and RDD samples seek to maximize the extent to which respondents consider participating and completing the survey.


Our proposal of $20 for the in-person survey reflects the fact that, although the sensitivity of the NSHS makes its overall burden comparable to that of the longer surveys listed in Table 2, its average interview length is shorter than those surveys. We believe an incentive of $20, equivalent to the proposed telephone version and to the NISVS (see the rationale for the telephone interview above), is warranted.


$30 Promised Incentive for the High Risk Sample

The high-risk sample is composed of women aged 18 to 39 who will be asked to volunteer to take part in the study; women between ages 18 and 29 will be oversampled. The purpose of this sample is to supplement the general population sample by interviewing respondents who are at elevated risk of rape and sexual assault (see Table 4 in Part B). Once recruited, each individual will be randomly assigned to either the ACASI or the telephone mode of administration. The methods used to recruit these individuals will be similar to those used for cognitive interviews and focus groups: we will recruit volunteers by distributing flyers through colleges and universities and through online sources such as Craigslist. We aim to recruit enough volunteers to yield approximately 2,000 interviews.


When asking for volunteers from hard-to-reach groups, it is particularly important to offer enough money to attract a wide array of potential respondents. The proposed incentive of $30 is based on our experience recruiting participants for cognitive interviews and focus groups, which rely on similar recruitment methods. For example, the RSA project received OMB approval to offer $40 for the cognitive interviews, and those individuals were recruited in ways identical to those proposed for the RSA Pilot and Feasibility Surveys. It is important to set the incentive high enough to get potential respondents' attention and lead them to consider volunteering. Setting the incentive lower than $30 would make it more difficult to reach the ambitious goal of completing 2,000 interviews with this group.


The purpose of including this sample group in the RSA study is to interview women who are at the highest risk of rape and sexual assault, which will significantly enhance the analysis comparing the instruments. To maximize the effectiveness of these interviews, we will over-recruit women ages 18-29, the group at greatest risk of rape and sexual assault. However, women in these young age groups are also particularly difficult to recruit. Offering $40 during the RSA cognitive interviews proved very effective in recruiting women in this age range. For this reason, we are proposing $30 to catch the attention of potential volunteers, while recognizing that the NSHS interview is shorter than the cognitive interviews completed earlier and is conducted in the convenience of the respondent's home or by telephone, requiring no travel time or cost.


We are proposing the same incentive for those assigned to the ACASI and telephone conditions in order to maintain the integrity of the random assignment. If more money were offered to one of these groups, the equivalence of the groups would be compromised.


$30 Promised Incentive for the Service Provider Sample and $10 Travel Reimbursement

The Service Provider sample will be composed of women who have experienced rape or sexual assault within the past 12 months. They will be recruited through local rape and sexual assault victim support agencies. In all cases, the recruitment process will consist of someone within the agency distributing flyers that ask for volunteers to participate in the study.


We are proposing a $30 incentive for this group for the same reason as for the High Risk group: this amount is necessary to motivate them to read the flyer and consider volunteering. Members of this group will also be asked whether they want to conduct the interview at the service provider's location or at another place where they can be guaranteed the ability to speak confidentially and safely. We are making these special arrangements because of the serious nature of their experiences. If the respondent does travel to do the interview, we propose providing $10 to offset some of the travel costs she may incur.


Same Amounts for Re-Interviews

Approximately 1,000 individuals will be sampled for a re-interview from the general population and High Risk groups. Those reporting a victimization at the initial interview will be oversampled. The re-interviews will be completed in the same mode as the original interview. Half will come from the in-person group and half will be from the telephone group.


Given the similarities in procedures between the original and re-interview, we are proposing to offer an incentive for the re-interview that is equivalent to the initial interview.


A10. Assurance of Confidentiality

All respondents who participate in the survey in person using ACASI will be given assurance that the identity of all participants, victims, and perpetrators will be protected as required under Title 42, United States Code, Section 3732 (Attachments S and T). All respondents who participate in the survey using CATI will be presented with this information verbally (Attachments S and T). BJS and Westat hold in confidence any information that could identify an individual, as required by Title 42, United States Code, Sections 3735 and 3789g. Rates of sexual violence will be published, as required under the Act.


The advance materials are written so as not to reveal the specific purpose of the interview until a respondent is selected. This protects the confidentiality of the topic of the interview from a perpetrator who may live in the household.


All interviews will be conducted in a private area. Names and other personal identifiers will not be linked to the questionnaire data, so that if someone were somehow to obtain the survey data, they could not associate any data with a particular individual. ACASI provides a private setting in which only the respondent can see the answers on the screen. In contrast, respondents interviewed using CATI must speak their answers aloud. To the extent possible, questions administered using CATI require answers of “yes” or “no” to prevent others within hearing distance from understanding the content of the questions. As required under Title 42 USC, Section 3789g, BJS and its data collection agents will take all necessary steps to mask the identity of survey respondents, including suppression of demographic characteristics and other potentially identifying information, especially in situations in which cell sizes are small.


The procedures proposed for this study have been approved by Westat’s IRB (Attachment N).



A11. Justification for Sensitive Questions

Collection of data on rape and sexual assault requires asking sensitive questions. The research cited above has found that surveys asking behaviorally specific questions elicit a higher number of reports of rape and sexual assault. One explanation for this result is that these cues are particularly effective at defining the types of behaviors that are of interest. Behaviorally specific questions follow the generally accepted survey practice of being as specific as possible. This reduces possible confusion over respondents' interpretation of words such as ‘sex' or ‘rape' and thereby promotes the respondent's ability to search memory for events that fall within the scope of the survey. For example, Fisher (2004) found a significantly higher proportion of rape victims in the National College Women Sexual Victimization Study, which used behaviorally specific cues, than in the National Violence Against College Women Study, which used a format similar to the NCVS.


The instruments first ask a series of screening questions to identify women who have had any type of unwanted sexual contact (e.g., manual, oral, penetrative, and “other” contact). As noted above, these are behaviorally specific, drawn from prior surveys (e.g., Black et al., 2011; Tjaden and Thoennes, 2000). These questions ask about contact by both males and females, with and without the use of force. For each affirmative response to a screening question, the respondent is asked follow-up questions in a detailed incident form. The detailed incident form provides a description of the nature of the event and will be used to classify the incident as a crime and into a particular category.


Asking multiple questions to identify unwanted sexual contact serves two main purposes. First, it reduces the problems associated with asking a single global binary (yes/no) question, which would leave the instrument with limited ability to define the nature and circumstances of the event. One finding from the cognitive interviews was that, regardless of how specific the screening questions are, some respondents may still interpret a question in ways not consistent with the original intent. The purpose of the detailed incident form is to collect more specifics about the event so the survey can classify it into the appropriate category.


Second, the approach recognizes the complexity of the definition of rape and sexual assault. The literature in this area notes that sexual assault occurs on a continuum from unwanted contact without the use of physical force to serious physical violence. Nonviolent kinds of victimization may easily be overlooked as consensual unless the questions move from the general (i.e., sexual contact) to the specific (i.e., unwanted, coerced, pressured, or forced sexual activity). For example, survey items that define sexual victimization solely as penetration and/or forced physical contact exclude many victims who experience lesser forms of unwanted sexual contact. These include, but are not limited to, incidents of uninvited genital exposure, undue pressure to engage in sexual activity while unable to fully consent (e.g., while intoxicated), coercive techniques such as making threats (e.g., against a loved one), and sexual contact by a stranger (e.g., having genitals or breasts grabbed unexpectedly). The use of multiple questions is intended to capture all unwanted sexual activity regardless of the use of physical force.


BJS has implemented several safeguards to mitigate situations where a respondent might become upset by the content of the survey. All procedures have been reviewed and approved by Westat’s IRB (Attachment N) under federally recognized human subject protections (45 CFR 46). Part B of this package provides the procedures that are in place, including the language in the informed consent and procedures to minimize risk and protect the confidentiality of the interview.


All interviewers will be trained to monitor for women who might become upset or agitated during the interview. The training is based on our experience conducting the cognitive interviews for this project, which included input from members of the Rape, Abuse, and Incest National Network (RAINN). As part of training, interviewers will learn how to identify and respond to signs of physical distress (e.g., crying or shaking) and will receive crisis management training specific to sexually victimized populations. At the start of each survey, interviewers will tell the respondent that she may skip any question she does not want to answer and may stop the interview at any time she feels uncomfortable or wishes to stop. For respondents who become visibly upset (in person) or audibly upset (telephone), the interviewer will be instructed to check in with the respondent by stopping the interview and asking whether she is able to continue. If the respondent chooses to stop the interview, the interviewer will provide her with a list of resources, including the national RAINN hotline, a local RAINN affiliate specific to the CBSA, and other crisis lines such as suicide and domestic violence resources (Attachment U). In extreme cases of respondent distress, where the respondent becomes non-responsive or exhibits other concerning behaviors (e.g., suicidal ideation), interviewers will be instructed to connect the respondent with the appropriate resource (e.g., a crisis counselor at RAINN or a suicide hotline). Based on prior studies that have used similar questions, we anticipate that these events will be extremely rare. Previous research shows that many rape survivors are motivated to participate in research about sexual victimization (Campbell & Adams, 2009) and are generally not upset by the explicit nature of the survey questions (Black et al., 2006).


A list of resources will be provided to all respondents, regardless of whether they show any visible signs of emotional distress. These resources can then be used if the respondent feels the need to contact someone about any feelings that emerge after the interview.



A12. Estimate of Hour Burden

We request a total of 11,806 burden hours (agency staff: 310; respondents: 11,496). The total burden, including both agency staff and respondents, is summarized in Tables 3, 4, and 5 below. We anticipate that agencies may be engaged in the following activities: communicating with potential respondents about the survey and, in some situations, arranging for space in which the survey can be conducted. The total estimated agency staff burden (Table 3) for these activities is 310 hours (crisis centers: 210 hours; universities/colleges: 100 hours).


Table 3. Annual Agency Burden for the NSHS Interviews

| Description | Average burden hours per response | Average number of responses+ | Total expected burden hours | Total expected burden cost |
| Feasibility Test: Service Providers (Crisis Centers) | 0.733 | 25 | 18.3 | $406 |
| Feasibility Test: High Risk Sample (Universities/Colleges) | 0.833 | 20 | 16.7 | $331 |
| Pilot Test: Seeded Sample (Crisis Centers) | 0.590 | 325 | 191.7 | $4,245 |
| Pilot Test: High Risk Sample (Universities/Colleges) | 0.833 | 100 | 83.3 | $1,654 |
| Total Agency Burden | | | 310 | $6,637 |

+Rounded to the nearest integer


Our burden estimates for respondents comprise multiple activities with varied durations, and these activities apply to different numbers of respondents. Table 4 lists the activities, estimated durations, and the number of respondents associated with each activity. We anticipate that respondents may be engaged in the following activities: reading flyers and pre-notification, recruitment, and conversion letters; contacting Westat by web or phone to volunteer for participation; completing the household roster screener survey by mail, by phone, or in person; receiving the post-survey letter and incentive; completing the respondent survey by phone or in person; and completing the re-interview survey by phone or in person.


For example, it is expected that 72 individuals among the high-risk volunteers will read the flyer and/or respond via web or phone in the feasibility study, and each will spend .067 hours, or about 4 minutes, responding. The average time to complete the survey will vary by sample type because of the anticipated number of respondents expected to report an eligible victimization. For the General Population sample, an average of approximately 17 minutes per respondent is anticipated (.289 hours x 60 minutes); this is based on assumptions about the number of eligible events reported by this group. For the Service Provider sample, where virtually all respondents are anticipated to report at least one incident, the average is estimated at about 34 minutes (.565 hours x 60 minutes). Estimates of the length of the victimization screener and detailed incident forms are based on dry runs conducted in-house and on the cognitive interviews.
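
To make the link between the Table 4 entries and the totals in Table 5 concrete, one illustrative row-level calculation (our own arithmetic; Attachment A contains the authoritative versions):

\[
8{,}000 \ \text{pilot RDD phone interviews} \times 0.289 \ \text{hours} \approx 2{,}312 \ \text{hours},
\]

which accounts for roughly half of the 4,257.4 burden hours shown for the pilot CATI General Population sample in Table 5; the remainder comes from letters, respondent selection, and re-interviews.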



Table 4. Respondent activities, times, and respondents for the NSHS Interviews

| Task Activity | Sample Type | Time to complete (hours) | # of Feasibility Study Respondents | # of Pilot Test Respondents |

CATI Sample
| Read flyer/respond via web or phone | Service Provider, High Risk | 0.067 | 72 | 1,566 |
| Read letters (pre-notification, conversion, extended conversion) | RDD Landline | 0.017 | 483 | 51,532 |
| Read post-survey letter and receive incentive | Service Provider, High Risk, RDD Landline/Cell | 0.017 | 110 | 9,800 |
| Respondent selection | RDD Landline | 0.083 | 34 | 6,772 |
| Respondent selection | RDD Cell | 0.050 | 29 | 5,715 |
| Complete survey by phone | Service Provider | 0.565 | 20 | 300 |
| Complete survey by phone | High Risk | 0.307 | 40 | 1,000 |
| Complete survey by phone | RDD Landline/Cell | 0.289 | 40 | 8,000 |
| Complete reinterview survey by phone | High Risk | 0.307 | 5 | 150 |
| Complete reinterview survey by phone | RDD Landline/Cell | 0.289 | 5 | 350 |

ACASI Sample
| Read flyer/respond via web or phone | Service Provider, High Risk | 0.067 | 72 | 1,566 |
| Read letters (1st/2nd advance, post-survey) | General Population | 0.017 | 394 | 68,830 |
| Complete household roster by mail | General Population | 0.083 | 46 | 7,359 |
| Complete household roster in person | General Population | 0.117 | 70 | 13,245 |
| Complete survey in person | Service Provider | 0.598 | 20 | 300 |
| Complete survey in person | High Risk | 0.341 | 40 | 1,000 |
| Complete survey in person | General Population | 0.324 | 40 | 7,500 |
| Complete reinterview survey in person | High Risk | 0.341 | 5 | 150 |
| Complete reinterview survey in person | General Population | 0.324 | 5 | 350 |

Total Number Filling Out a Survey = 52,390+

+Includes all survey responses; does not include the number reading communication material.

There are a total of 52,390 individual responses to the different surveys. A 'response' is defined as completing a survey; it excludes those who only read communication material (e.g., letters). When the agency responses are added, the collection comprises a total of 53,060 respondents.


Table 5 provides the estimated burden and cost by test (feasibility, pilot), mode of interview (CATI, ACASI), and sample type (Service Provider, High Risk, General Population). Attachment A provides the detailed calculations for these estimates. With an annual burden of 11,806 hours (Table 5), the average burden per respondent is 13.4 minutes (11,806/53,060 = 0.223 hours, or 13.4 minutes).
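A minimal sketch of the per-respondent average cited above (the figures come from this section; the variable names are ours):

  total_burden_hours = 11_806   # agency (310) plus respondent (11,496) hours
  total_respondents = 53_060    # survey responses plus agency responses
  avg = total_burden_hours / total_respondents
  print(f"{avg:.3f} hours = {avg * 60:.1f} minutes per respondent")  # 0.223 h = 13.4 min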

Table 5. Annual Respondent Burden for the NSHS Interviews

Description | Total expected burden hours* | Total expected burden cost
Feasibility Test
  CATI Sample (Total) | 57.1 | $1,370
    Service Provider Sample | 13.2 | $317
    High Risk Sample | 17.8 | $427
    General Population Sample | 26.1 | $626
  ACASI Sample (Total) | 65.6 | $1,573
    Service Provider Sample | 13.6 | $326
    High Risk Sample | 18.5 | $444
    General Population Sample | 33.2 | $796
Pilot Test
  CATI Sample (Total) | 4,908.7 | $117,710
    Service Provider Sample | 198.6 | $4,762
    High Risk Sample | 452.7 | $10,856
    General Population Sample | 4,257.4 | $102,092
  ACASI Sample (Total) | 6,523.4 | $156,431
    Service Provider Sample | 203.6 | $4,882
    High Risk Sample | 471.9 | $11,316
    General Population Sample | 5,789.1 | $138,823
Total Respondent Burden+ | 11,496 | $275,665
Total Burden (Agency plus Respondent)+ | 11,806 | $282,302

+Rounded to the nearest integer
*Rounded to the nearest tenth of an hour



A13. Estimate of Agency and Respondent Cost Burden

The total cost to the crisis center agencies and universities/colleges consists of the staff time needed to complete the tasks described in Section A12.


At an estimate of $22.15 per hour3 for 210 hours, the estimated crisis center staff cost burden for the entire national survey is $4,652. At an estimate of $19.85 per hour4 for 100 hours, the estimated university/college staff cost burden for the entire national survey is $1,985.


There are no costs to women other than those associated with the time used to complete the survey.

The expected respondent burden cost associated with the estimated hours is $277,084, based on average hourly earnings of $23.98 per hour5 for all employees on private nonfarm payrolls.
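The cost figures in this section are the product of burden hours and the applicable BLS hourly rate (footnotes 3-5). A minimal sketch of that arithmetic follows; the labels are ours, and small differences from the dollar amounts reported in the text and in Table 5 reflect rounding in the detailed calculations of Attachment A.

  # Cost burden = burden hours x hourly rate (BLS rates cited in footnotes 3-5).
  cost_items = [
      ("Crisis center staff", 210, 22.15),
      ("University/college staff", 100, 19.85),
      ("Respondents, private nonfarm wage", 11_496, 23.98),
  ]
  for label, hours, rate in cost_items:
      print(f"{label}: {hours} h x ${rate}/h = ${hours * rate:,.0f}")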



A14. Estimated Cost to Federal Government

The total estimated cost to the government for survey development and implementation is $10,002,829. This consists of two components, which sum as shown in the sketch at the end of this section:


  1. Costs associated with the cooperative agreement between BJS and Westat


  • Survey planning and management; methodological, instrument, systems, and survey operations development and design: $2,795,444

  • Data collection (feasibility test, pilot test), quality control, data processing: $6,565,580

  • Data analysis, delivery, and project summary reporting: $547,207

  Subtotal: $9,908,231


  2. Costs associated with BJS contract oversight and study activities, estimated to be $94,598


  • 20 percent of a GS-13 Statistician ($20,200)

  • 10 percent of an SL Senior Statistical Advisor ($15,900)

  • Benefits (@28%: $46,185)

  • Other administrative costs (@15%: $12,313)

BJS costs are expected to remain stable, subject to Cost of Living Adjustments (COLA).
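As a quick check, the component amounts sum to the totals reported above (a minimal sketch; all dollar figures are taken from this section, and the variable names are ours):

  # Cooperative agreement components and BJS oversight components.
  westat = 2_795_444 + 6_565_580 + 547_207   # = 9,908,231
  bjs = 20_200 + 15_900 + 46_185 + 12_313    # = 94,598
  print(f"${westat + bjs:,}")                # -> $10,002,829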



A15. Reasons for Change in Burden

Not applicable



A16. Plans for Publication

The project will produce a number of reports, presentations, and papers that will be available to interested groups:

1. Cognitive interview reports. These reports will be a sanitized version6 of the full reports submitted to BJS describing the results of the cognitive interviews. They will be posted in April 2014, after all testing is completed. The reports highlight key issues related to respondents' interpretation of commonly used behavior-specific questions and the types of events elicited.


2. Presentations at the Annual Meeting of the American Association for Public Opinion Research, May 2014. These two papers, co-authored by BJS and the Westat team, will provide results from the cognitive interviews. One is a qualitative analysis of the use of behavior-specific questions to identify, count, and classify self-reports as rape and sexual assault. The second paper examines the use of a re-interview to evaluate survey questions.


These will be written up as papers and submitted for publication to refereed journals.


3. Final Report. This will cover the analysis issues discussed in Section B2e of Part B below and is scheduled to be submitted to BJS by December 2015. It will be posted to the BJS website once accepted as a final deliverable. The final report will describe the empirical differences between the two methodologies (telephone vs. ACASI) with respect to coverage and response rates, incident rates, and various measures of data quality. It will also compare general estimates to those from the ongoing NCVS. It will conclude with a set of recommendations on how BJS should incorporate new measures into the ongoing NCVS series.


4. Summary of Findings and recommendations. This will be a shortened, non-technical summary of the findings and recommendations of the final report. This will be completed shortly after the Final Report.


5. Other papers and presentations. Based on the analysis in the final report, the BJS and Westat team will present papers at relevant professional conferences, such as meetings of the American Society of Criminology, the American Association for Public Opinion Research, and the Joint Statistical Meetings. Target journals will span both survey methodology and criminology. These will cover topics that inform the larger community about research results from the report. Examples of the topics to be covered include: (1) What are the differences between ACASI and telephone surveys for measuring rape and sexual assault? (2) What are the characteristics of incidents elicited by behavior-specific questions, and how do they compare to the NCVS? (3) What are the advantages and disadvantages of a one- vs. two-stage approach to data collection? (4) What are the costs and benefits of a telephone design vs. an ACASI design vs. an in-person interviewer design?


BJS considers the publication of these results a fulfillment of its core mission. BJS has invested significant resources in the redesign of the NCVS to improve methodology and increase the survey's value to national and local stakeholders. A section of the BJS website is dedicated to providing information to the public regarding ongoing methodological research in support of the NCVS.


Publication of the NSHS findings will adhere to the standard procedures established and refined by BJS over the last 35 years. As a statistical agency with extensive experience processing and disseminating potentially sensitive information, BJS has developed internal reviews to ensure that all statistical research is released in a manner that maintains anonymity and confidentiality as appropriate. Once internal review of the report is complete, we expect to release the findings on the BJS webpage.


A17. Expiration Date Approval

The OMB Control Number and the expiration date will be published on all forms given to respondents.



A18. Exceptions to the Certification Statement

There are no exceptions to the Certification Statement.

1 The NISVS rate for adult females is 1.1, and the NCVS rate for rape and sexual assault among females age 12+ is .13. The NCVS rate includes girls under 18, and this age group has the highest rape/sexual assault victimization rates; the comparison therefore understates the difference between the two surveys.

2 The major exception to this is the British Crime Survey (Hall and Smith, 2011).

3 May 2012 National occupational employment and wage estimate for counselors (all other) (Source: http://www.bls.gov/oes/current/oes211019.htm).

4 May 2012 National occupational employment and wage estimate for all education, training, and library workers (Source: http://www.bls.gov/oes/current/oes259099.htm).

5July 2013 Economic news release national occupational employment and wage estimate for average hourly earnings of all employees on private nonfarm payrolls by industry sector, seasonally adjusted (Source: http://www.bls.gov/web/empsit/ceseesummary.htm).

6 “Sanitize” refers to deleting any qualitative descriptions that might identify an individual respondent to the cognitive interviews.

