OMB: 1121-0374

SUPPORTING STATEMENT B

Fourth National Juvenile Online Victimization Study (N-JOV4)



  1. Respondent Universe and Sampling Methods



The Fourth National Juvenile Online Victimization Study (N-JOV4) will include one pilot study, to ensure that recruitment procedures work, that instructions are clear, and that the interview and data collection instrument work as intended, and a full survey administration designed to provide national estimates of technology-facilitated sex crimes against children.

    1.1. N-JOV4 Pilot Study

The pilot phase of the study will consist of two components: 1) reviewing the mail survey screener and case identification process with five ICAC agencies and 20 smaller, non-ICAC affiliated agencies, and 2) testing the case information follow-up telephone interview with five cases from 2018 (outside of our timeframe for the national survey). A large pilot of the study methodology is not needed because so much of the methodology replicates procedures that were previously very successful. First, the mail screener is a methodology that has been demonstrated effective in the past, yielding high response rates of 86-88%.1 Moreover, the design principles of the proposed methodology continue to conform to best practices within the survey industry for mail surveys.2 There is no reason to believe the mail survey, with its repeated follow-up components by mail and telephone, will not be successful today.

The pilot study will address two main questions that may allow possible improvements in the already successful design: 1) Are there key words and search strategies that will assist agencies in locating relevant cases? We will pilot the mail survey with a mixture of agency types to refine the answers to this question. 2) In the investigator interview, are the questions and prompts clear and easy to understand, and do they include the appropriate answer categories?

    1.2. N-JOV4 National Study

The sampling approach for the N-JOV4 National Study will use a similar design to that utilized in the three prior N-JOV studies.

Considerations for the Sample Design. The sample for the initial N-JOV study consisted of a stratified national sample of 2,500 state, county and local LEAs. Federal agencies were queried separately, including the FBI, Customs and Postal, which have units that investigate technology-facilitated child sexual exploitation crimes. This sample was divided into three frames based on agency specialization. The 1st frame consisted of the ICAC Task Forces and other LEAs known to specialize in technology-facilitated child sexual exploitation crimes, including federal agencies. These agencies were not sampled; they were included with certainty. The 2nd frame consisted of LEAs with staff known to be trained in the investigation of technology-facilitated child sexual exploitation crimes at the time of the first N-JOV. These agencies were identified from lists of agencies participating in training programs, with half of such agencies randomly selected to participate in the study. The 3rd frame consisted of all other local, county and state LEAs in the U.S., over 15,000 agencies. Twelve percent of third frame agencies were randomly selected to participate in the first N-JOV study. N-JOV2 and N-JOV3 surveyed the same LEAs with slight adjustments over time given the movement of some agencies between frames (e.g., an agency becoming an ICAC Task Force).

As a result of a series of meetings between UNH, Westat and NIJ, we have decided to draw a new, independent sample of agencies for N-JOV4. The new sample will have a structure similar to prior N-JOVs, namely the inclusion of ICAC Task Forces and federal agencies with certainty (1st frame), a random selection of ICAC affiliate agencies that have some degree of training and experience with eligible crimes (2nd frame), and a random sample of all other local, county and state law enforcement agencies across the U.S. (3rd frame). Several key factors drove the decision not to resurvey the same law enforcement agencies as in past N-JOVs:


  • The structure of the ICAC Task Force Program experienced a substantial shift between 2010 and 2020, from 61 Task Forces and 38 satellite agencies in 2010 to 61 Task Forces and over 4,500 affiliate agencies in 2020. Affiliate agencies have signed MOUs with their host ICAC to share in the investigations of technology-facilitated sex crimes against children. It became clear that relying on the original agencies sampled as part of N-JOV1 in 2000 would exclude a critical portion of agencies with known expertise to investigate these crimes.

  • In addition to reflecting the population change in the ICAC structure, a new and independent sample reflects other population changes – new agencies, closed agencies, etc.

  • A new sample design can more easily include changes in stratification, allocation and probabilities of selection – some of which may be required or appropriate given the population changes.

  • Some degree of overlap between the N-JOV4 sample and previous N-JOV samples is expected by chance, and that chance overlap could well be sufficient to provide the gains in the precision of estimates of change that would otherwise require more complicated overlap control procedures with previous N-JOV samples.



The N-JOV4 Sample. The N-JOV4 agencies will be sampled according to a stratified design based on known expertise in the investigation of technology-facilitated sex crimes against minors. Frame 1 will consist of all 61 ICAC Task Force host agencies and three federal agencies – FBI, US Postal Service, and Homeland Security for a total of 64 agencies included with certainty. Frame 2 will consist of a random sample of one-third of the ICAC affiliate agencies for a total of 1,600 agencies. Frame 3 will consist of a random sample of 10% of all remaining municipal, county and state law enforcement agencies in the U.S. for a total of 1,025 agencies. This results in a grand total of 2,689 law enforcement agencies in the N-JOV4 sample. Table 1 summarizes the sampling frame for N-JOV4.

Table 1. Population and sample sizes by frame

Frame                                              | Population N | Sample n | Sampling rate | Estimated response n (~86% rate)
All agencies                                       | 15,180       | 2,689    | –             | 2,313
First Frame: ICAC Task Forces and Federal agencies | 64           | 64       | 100%          | 55
Second Frame: ICAC Affiliate agencies              | 4,858        | 1,600    | 33%           | 1,376
Third Frame: All other agencies                    | 10,258       | 1,025    | 10%           | 882


We have complete lists of all 61 ICAC Task Forces and the 4,858 ICAC affiliate agencies from Expert Panel member Jeffrey Gersh, Deputy Associate Administrator, Special Victims and Violent Offenders Division, Office of Juvenile Justice & Delinquency Prevention, U.S. Department of Justice. The ICAC Task Force list comes with the name and contact information for each ICAC Commander. Names and contact information for the sampled ICAC affiliate agencies will be identified through the National Public Safety Information Bureau (NPSIB) (http://www.safetysource.com/), which is updated annually and includes the name of the agency head and their contact information, region, county, agency size, and email addresses for most agencies. This list, which includes all municipal, county and state law enforcement agencies in the U.S., has successfully been used as the basis for identifying names and mailing addresses in all previous N-JOV studies – studies which have resulted in extremely high response rates of between 86% and 88% of eligible LEAs. The third frame agencies will also be identified through this list. To help ensure full coverage of all eligible US law enforcement agencies, we will cross-reference the NPSIB with the Law Enforcement Agency Roster (LEAR) developed by the Bureau of Justice Statistics, which is available through the National Archive of Criminal Justice Data.2

Before coming to this decision, a comparison was made between the NPSIB and the LEAR. Although the LEAR provides a census of 15,810 active, general purpose law enforcement agencies, including 12,695 local and county police departments, 2,066 sheriff’s offices and 49 primary state police departments, it does not provide a contact name for the Chief of Police or other department head. Being able to personalize the cover letter for the N-JOV screener mailings is critical to our past successful response rates. Further, the LEAR has not been updated since 2016, and past experience with numerous national law enforcement studies indicates a large degree of turnover in law enforcement agencies, especially those in small, rural communities (which constitute the majority of law enforcement agencies in the U.S.). Such agencies can combine with other nearby small agencies, close entirely, experience frequent turnover in department heads, and change mailing addresses. This makes the use of the continually updated NPSIB list as the base list critical for the N-JOV4 study. If an updated version of the LEAR becomes available prior to drawing the N-JOV4 sample, we will reassess this decision.
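To make the mechanics of the draw concrete, the following is a minimal sketch in Python of how the stratified selection could be carried out. The agency ID lists are hypothetical stand-ins for the real OJJDP and NPSIB rosters; the frame sizes, take rates, and certainty handling follow Table 1.

```python
import random

random.seed(20230401)  # fixed seed so the draw can be reproduced and documented

# Hypothetical agency IDs standing in for the real frames (OJJDP ICAC lists
# and the NPSIB roster, cross-referenced against the BJS LEAR).
frame1 = [f"F1-{i:03d}" for i in range(64)]      # ICAC Task Forces + federal agencies
frame2 = [f"F2-{i:04d}" for i in range(4858)]    # ICAC affiliate agencies
frame3 = [f"F3-{i:05d}" for i in range(10258)]   # all other local/county/state LEAs

sample = list(frame1)                    # frame 1 is included with certainty
sample += random.sample(frame2, 1600)    # ~1-in-3 draw from frame 2
sample += random.sample(frame3, 1025)    # ~1-in-10 draw from frame 3

# Base weight = 1 / probability of selection, retained for later estimation.
base_weight = {"F1": 1.0, "F2": 4858 / 1600, "F3": 10258 / 1025}
print(len(sample))  # 2,689 agencies, matching Table 1
```

Retaining the selection probabilities at the time of the draw simplifies the base-weight calculation described under Weighting and Variance Estimation below.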

All agencies except the federal agencies will receive the mail screener. Conversations with each of the federal agencies included in this study revealed a large volume of eligible cases, so we will work directly with agency contacts to gather their 2019 arrest numbers.

Agencies will be eligible if they have jurisdiction to conduct criminal investigations of cases involving child sexual assault, child sexual exploitation or the possession or distribution of child sexual exploitation material, a criterion ascertained in the first question of the screener survey.

Based on past N-JOV studies, the N-JOV4 Expert Panel Meeting, and the more than 9,500 arrests reported by the ICAC Task Forces in 2019,3 we anticipate that the number of arrests for technology-facilitated sex crimes against children will likely be higher than the range reported in the 2010 N-JOV3 study (8,144, 95% CI = 7,440 – 8,849).4 Given the broad reach of technology, we anticipate most agencies, even those in the third frame, will have made such an arrest or investigated a case involving youth-produced sexual material (regardless of whether an arrest occurred or not) in 2019.

In past N-JOVs we have always identified the most recently completed calendar year as our target time period to capture the most current case data and experiences of law enforcement. We have determined to focus on calendar year 2019, even though it may not be the most current available window of cases. We are concerned that the disruption of ordinary social activity due to the COVID-19 outbreak in 2020 may have resulted in such dramatic changes in the general public’s isolation and use of technology that we would see an abnormal frequency of cases that would not be reflective of a typical year. Conversations with our Expert Panel members who are Commanders of ICAC Task Forces confirmed this assumption.



  2. Procedures for the Collection of Information



    2.1. N-JOV4 Pilot Study

Our pilot study agencies will consist of five ICAC Task Forces and 20 smaller agencies with no connection to the ICAC Task Forces, all of which will be asked to provide feedback on search terms and to identify eligible cases for the mail screener survey. ICAC Commanders and department heads will receive an email with the mail screener survey, a cover letter, and a link to return a completed survey online. An interviewer will call and talk to the person who filled out the screener in each agency to find out how they conducted the search and whether they encountered any difficulties or confusion about what to include. Among eligible cases, one case from each ICAC agency will be selected for follow-up with the pilot case-level interview. Cases will be chosen to cover the main types of technology-facilitated sex crimes against minors under study – online enticement, possession of child sexual exploitation material, production of child sexual exploitation material, youth-produced sexual images, and non-arrest child rescue cases. Given there have been few changes to the existing case-level interview, this aspect of the pilot study is aimed at identifying any additional changes to response categories that might not have been identified through the Expert Panel. The target year for eligible cases will be 2018 in order to avoid any overlap with the national study.

Non-respondent protocol. Based on prior N-JOV studies with high response rates, a high rate of cooperation for the pilot is expected, with only minimal effort expended on non-respondent follow-up. In previous N-JOVs, between 86% and 88% of eligible law enforcement agencies completed the mail survey screener. Given this, instead of undertaking a full non-respondent protocol for this phase of the study, we will utilize a combination of email and telephone calls to help promote participation.

Administration of the pilot survey. Administration of the mail screener survey begins with an email to ICAC Commanders and other department heads that includes an invitation letter with requests to utilize suggested search terms to identify eligible cases, as well as to document additional search terms used by their agency. Participants will be asked to complete the mail screener via an online link provided in the invitation letter. This email will be followed with an email reminder approximately two weeks after the initial mailing, asking recipients to go online and complete the survey if they have not done so. Approximately 4 and 5 weeks after the initial mailing, we will resend invitation emails to non-responding agencies. Finally, research assistants will call the agencies that still have not responded to complete surveys over the telephone. Brief follow-up telephone interviews will also be conducted with 10 (50%) of the respondents to the mail screener to learn about any problems they encountered with the survey, such as holes in content (e.g., missing response categories, omitted questions critical to the case) and language issues, as well as to gather more information about the search terms utilized to identify cases. A paper version of the pilot N-JOV mail screener survey is found in Appendix H.


Eligible cases for the case-level telephone interview will be identified through the above mail screener survey. The detailed case interviews with investigators will be conducted by trained interviewers using an online data collection system, Qualtrics. Qualtrics improves the accuracy of telephone interviews because skip patterns and acceptable ranges of responses are programmed and data is entered directly into a statistical analysis program, eliminating errors related to data entry by hand.


The telephone interview will parallel the interviews used in previous studies and gather comparable information, with several sections enhanced to reflect technological changes and emerging issues (see Appendix D for the full case-level interview). Given the minimal change in the content of the case-level interview from our prior N-JOV studies, we will utilize this pilot phase to ensure that the questions and prompts are clear and easy to understand and that they include the appropriate answer categories. Brief questions at the end of the case-level interview will query these points. The completion rate for case-level telephone interviews was 64% of eligible cases in N-JOV3, with similar rates achieved in N-JOV1 and N-JOV2.5 Interviewers will also ask these investigators whether a web-based self-administered survey would be a useful option.


Report. Following completion of the pilot test, a report on the pilot test implementation and results will be prepared and delivered to NIJ. The report will incorporate insights and comments gleaned from participating investigators with regard to content, language, and functionality of the web survey, as well as a complete item response analysis of problematic or unnecessary questions.

Revisions for the N-JOV4 National Study. Based on these findings, revisions to the recruitment and data collection protocols will be proposed, and a revised draft plan for the national administration of N-JOV4 will be delivered. Following review, comment, and approval by NIJ, the final plans for national implementation of the survey will be delivered. An amendment will be submitted to OMB for any substantive changes required for N-JOV4 based on pilot findings.

    2.2. N-JOV4 National Study

Publicizing the Survey. In the months leading up to the launch of the national survey, project staff will work to publicize the survey to law enforcement agencies to generate interest and expectations for the survey’s completion. This will take the form of announcements on relevant listservs for the ICAC Task Forces, NIJ, and OJJDP.

Data Collection.

Phase 1.

The Phase 1 survey packet will include a letter of support from the NIJ social scientist (Appendix I), an invitation letter from the principal investigators describing the study and the voluntary nature of participation (Appendix J), the mail survey screener, including frequently asked questions for the Phase 1 mail screener with a glossary of study terms (Appendix H), a copy of a previous N-JOV report (Appendix K), a business-reply envelope, and a link to complete the mail screener online as a secondary option for returning the screener survey. The FAQs provide answers to common concerns or questions about the study and the survey, with contact information of individuals (names, email addresses and toll-free numbers) should respondents need additional information about the study. Several follow-up mailings are planned in order to obtain a high response rate (Table 2). The protocols are based on both past successful experience with the N-JOV and on current best practices for mail survey administration cited by the well-known survey methodologist Don Dillman.6

Table 2. Schedule of Mailings for National Survey

Activity                                  | Date    | Target group
Initial mailing                           | Week 1  | All agencies
Thank you/reminder postcard               | Week 3  | All agencies
Reminder letter with 2nd copy of screener | Week 5  | Non-responding agencies
Reminder letter with 3rd copy of screener | Week 8  | Non-responding agencies
Email invitation with web link            | Week 10 | Non-responding agencies
Telephone calls                           | Week 12 | Non-responding agencies



Approximately 2 weeks after the initial mailing, we will send reminder postcards (Appendix L) to the agency heads, asking them to complete and return the survey if they have not done so. Approximately 5 and 8 weeks after the initial mailing, we will resend copies of the survey, personalized cover letters (Appendix M), and business reply envelopes to the heads of agencies that have not responded. Approximately 10 weeks after the initial mailing, we will email the heads of remaining non-responding agencies, asking them to complete the screener survey via the self-administered web link. Finally, research assistants will call the agencies that still have not responded to complete surveys over the telephone.


The mail survey instrument will be a multi-page booklet formatted so respondents can follow it easily (Appendix H). It will include a “Frequently Asked Questions” section, a glossary and a toll-free telephone number so that respondents can contact the researchers for additional information. This agency-level mail survey is also designed to gather a listing of specific cases from which we will draw a random sample for in-depth case-level interviews with key investigating officers.


Technology-facilitated child sexual exploitation. To identify these cases, we will ask:


  • Between January 1, 2019 and December 31, 2019, did your agency make ANY ARRESTS in cases involving the attempted or completed sexual exploitation of a minor, AND at least one of the following occurred: a) the offender and the victim first met through technology, or b) the offender committed a sexual offense where technology was used to facilitate the crime in some way (e.g., grooming, trafficking), regardless of whether or not they first met online?

  • Between January 1, 2019 and December 31, 2019, did your agency make ANY ARRESTS in cases involving the possession, distribution, access, or production of child sexual exploitation material (i.e., child pornography), and at least one of the following occurred:

    • Illegal images were found on technology (cloud, computer, flash drives, memory cards, tablet, cell phone, etc.) possessed or accessed by the suspect;

    • The suspect used technology to order or sell child sexual exploitation material;

    • There was other evidence that illegal images were downloaded from the internet or distributed by the suspect using technology; or

    • The suspect was using streaming apps to view live video of child sexual exploitation.


If respondents answer “Yes” to any of these questions, we will ask them to list the case number or other case reference and provide the name, telephone number, and email address of the key investigator or most knowledgeable person for each case they report.


Youth-produced sexual images. To identify these cases, the survey asks the following question, specifying first that it only includes cases where no one was arrested:


  • Between January 1, 2019 and December 31, 2019, did your agency handle any cases that did not result in an arrest that involved sexual images created by minors (age 17 or younger) AND these images were or could have been child sexual exploitation material under the statutes of your jurisdiction? Please include:

    • Cases where minors took pictures of themselves OR other minors, including “sexting”;

    • Cases that may have been crimes, but were not prosecuted for various reasons;

    • Cases that were handled as juvenile offenses; and

    • Other cases involving sexual images produced by juveniles where an arrest was not made.


If yes, respondents to the mail survey will be asked to indicate the total number of these cases their agency handled, regardless of how they were resolved, and to provide contact information for the key investigator.


Criteria for eligibility. To be included in the N-JOV Study, cases have to be technology-facilitated, involve victims younger than 18 and end in arrests that occurred between January 1 and December 31, 2019.

  1. Technology-facilitated. A case was technology-facilitated if: 1) a suspect-victim relationship was initiated through technology; 2) a suspect who was a family member, acquaintance, or stranger to a victim used technology to communicate with the victim to further a sexual victimization, or otherwise exploit the victim; 3) a case involved a technology-related undercover investigation; 4) child sexual exploitation material was received or distributed online, or arrangements for receiving or distributing were made online; or 5) child sexual exploitation material was found on a computer, on removable media (flash drives, CDs, etc.), in the cloud, on cell phones, or in some other digital format.

  2. Victims younger than 18. We use this definition of minors because 18 is the age of majority for most purposes in most jurisdictions. We do not want to rely on state or federal statutes that define “age of consent,” because these statutes vary considerably. However, eighteen is the upper age limit for any statutes defining age of consent. Also, federal and many state statutes define child sexual exploitation material as images of minors younger than 18. We considered cases to have victims under 18 in three situations: 1) there was an “identified victim,” defined as a victim who was identified and contacted by the police in the course of the investigation, who was under 18; 2) a law enforcement investigator impersonated a youth under 18, so that the suspect believed s/he was interacting with a minor; and 3) a case involved child sexual exploitation material, which by definition depicts the sexual abuse or exploitation of a minor under 18.

  3. End in arrest. We limit the study to cases ending in arrests, rather than crime reports or open investigations because cases ending in arrests: 1) are more likely to involve actual crimes; 2) have more complete information about the crimes, suspects and victims; 3) give us a clear standard for counting cases; and 4) help us avoid interviewing multiple agencies about the same case.



In addition to these cases ending in arrest, we also will ask about cases involving youth who produced sexual material (i.e., sexting). To be eligible, these cases must: 1) involve sexual pictures that were taken by someone age 17 or younger and could be considered child sexual exploitation material under the laws of the respondent’s jurisdiction and 2) be handled by the respondent’s agency during 2019. These cases do not have to end in an arrest to be eligible.


The Phase 1 mail screener will also include three questions at the end aimed at gathering some broader information about the volume of reports received and the process of triaging reports for investigation to provide further context about work being conducted by law enforcement in this area. These questions are included based on recommendations of our Expert Panel. Specifically, the following two questions will be asked of all agency respondents:

1. In the sections above we asked for information on arrests for technology-facilitated sex crimes against children. We’re also interested in learning about the total volume of reports your agency received in 2019. Approximately how many reports did your agency receive in 2019 for technology-facilitated sex crimes against children, regardless of whether an arrest was made or not? __________ # reports

Approximately how many of these reports, if any, were received from the CyberTipline (National Center for Missing and Exploited Children)? ________ # reports

2. Is the number of referrals of technology-facilitated sex crimes against children so large that you have to use a system for triaging or setting priority among cases?

  • Yes

  • No

  • Not sure


If yes, can you indicate which of the following are very important, somewhat important or not important when triaging cases:

Factor                                                                              | Very important | Somewhat important | Not important
Amount of identifying information about the suspect                                 | [ ]            | [ ]                | [ ]
Amount of identifying information about a victim(s)                                 | [ ]            | [ ]                | [ ]
Amount of time elapsed between the evidence and receiving the report in your agency | [ ]            | [ ]                | [ ]
Confirmation of illegal content or activity                                         | [ ]            | [ ]                | [ ]
Volume of illegal content                                                           | [ ]            | [ ]                | [ ]
Extremity of the illegal content                                                    | [ ]            | [ ]                | [ ]
Whether the suspect has access to children                                          | [ ]            | [ ]                | [ ]
Which technology platforms are involved                                             | [ ]            | [ ]                | [ ]
Source of the report                                                                | [ ]            | [ ]                | [ ]
Indicators of violence                                                              | [ ]            | [ ]                | [ ]
Agency resources                                                                    | [ ]            | [ ]                | [ ]
Something else: ____________________________________________                        | [ ]            | [ ]                | [ ]



Phase 2. In the second phase of the national study, a random sample of the key investigating officers for cases involving a technology-facilitated sex crime against a child identified in Phase 1 will be contacted and asked to complete a case-level telephone interview providing details about the case.


Following our strategy in previous N-JOV studies, sampling will take into account the number of cases reported by agencies, so that agencies with large numbers of cases will not be unduly burdened. If an agency reports between 1 and 3 cases, we will conduct interviews for each case (in N-JOV3, 61% of LEAs had 3 or fewer cases). For agencies with more than 3 cases, we will conduct interviews for all cases that involve identified victims (victims who were located and contacted), and sample the other cases, as sketched below. More specifically, for agencies with between 4 and 15 cases, 50% of cases will be randomly selected for interviews. For agencies with 16 or more cases, we will conduct interviews on a randomly selected 10%-25%, depending on respondent availability.
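As an illustration of this within-agency subsampling rule, the sketch below encodes one reading of the text: identified-victim cases are kept with certainty and the sampling rate is applied to the remaining cases, with the rate for agencies reporting 16 or more cases (here 25%) to be set between 10% and 25% at fielding.

```python
import random

def sample_cases(cases, identified_victim, rate_large=0.25):
    """Illustrative within-agency case sampling rule.

    cases: list of case IDs reported by one agency
    identified_victim: set of case IDs with an identified (located) victim
    rate_large: assumed rate for agencies with 16+ cases (10%-25% in the text)
    """
    n = len(cases)
    if n <= 3:                                   # 1-3 cases: interview them all
        return list(cases)
    keep = [c for c in cases if c in identified_victim]     # kept with certainty
    rest = [c for c in cases if c not in identified_victim]
    rate = 0.5 if n <= 15 else rate_large        # 4-15 cases: 50%; 16+: 10%-25%
    return keep + random.sample(rest, round(rate * len(rest)))
```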


The detailed case interviews with investigators will be conducted by trained interviewers using the Qualtrics online data collection system described above, supplemented with a self-administered online option. The interviewers will attend training sessions that provide extensive detail about the background, purpose and instrumentation of the study, and they will participate in practice and pilot interviews before beginning data collection. The interviewers will call investigators listed in the mail surveys, ask for consent to participate in a research study (Appendix N), and schedule interviews on specific cases at the investigators’ convenience.

The telephone interview covers several topics broken down as follows (see Appendix D):

  • Preliminary Questions (ASKED IN ALL CASES): This section is used in all cases and includes questions about whether a case is eligible for the study, the case’s characteristics (e.g., online enticement, undercover operation), the agency’s role in the investigation, involvement of other agencies, the number of victims and suspects (identifying a primary victim and suspect to ask questions about if there is more than one), and a narrative description of the case.

  • Online Enticement Questions (ASKED IN ELIGIBLE ARREST CASES): This section is used in cases with identified victims (i.e., victims who were identified and located) where an arrest was made. In cases where the victim first met the offender on the Internet or via cell phone, this section collects specific information about where online the victim and offender met, how they communicated both on- and off-line, whether they met in person, the details of any sexual assault, and other information. In cases with identified victims where the offender and victim did not meet on the Internet, but knew each other in some other capacity (i.e., family and prior acquaintance cases), this section collects specific information about the offender-victim relationship, including how they knew each other, how the technology was used, the details of any sexual assault, and other information about what transpired between the offender and victim during the course of the crime.

  • Undercover Investigation Questions (ASKED IN ELIGIBLE ARREST CASES): This section is used in cases that involved an arrest and an online undercover operation in which law enforcement investigators 1) posed online as minors or adults with access to minors, 2) took over the identities of identified victims, 3) posed as distributors or consumers of child sexual exploitation material, or 4) posed as someone interested in child sexual exploitation material. In all such cases, we gather information so that we can categorize the type of undercover investigation. We gather detailed information only about cases in which law enforcement posed online as minors. This section includes questions about the extent and nature of the online interactions between the offender and undercover investigator and information about face-to-face meetings between offenders and investigators, when applicable.

  • Child Sexual Exploitation Material Production Questions (ASKED IN ELIGIBLE ARREST CASES): This section is used in cases involving an arrest where there was an identified victim of a technology-facilitated crime who was also a victim of child sexual exploitation material production or was photographed by a suspect in a sexually suggestive pose. This section collects information about the format, number, content and distribution of the produced images.

  • Child Sexual Exploitation Material Possession and Distribution Questions (ASKED IN ELIGIBLE ARREST CASES): This section is used if the suspect possessed child sexual exploitation material and an arrest was made. It collects information about the format, number, content and distribution of the possessed child sexual exploitation material.

  • Offender Characteristics (ASKED IN ALL ARREST CASES): This section asks about the primary suspect arrested in the case, including demographic characteristics, life circumstances at the time of the crime, prior offenses, and status in the criminal justice system. It also asks about social interactions, access to children, mental and behavioral health problems, and the type of technology used in the incident.

  • Victim Characteristics (ASKED IN ALL ARREST CASES): This section asks about the primary child victim in an arrest case, including demographic characteristics, family living arrangement, and whether the child has a physical disability, mental health concerns, any medical conditions, or problems such as substance use. It also includes items on the child victim’s involvement in the criminal justice system and whether the child was referred for services as a result of the incident.

  • Youth Produced Sexual Material Questions (ASKED IN NON-ARREST CASES): This section asks about the people involved in cases where youth produce sexual material but an arrest is not made. In these situations, there may not always be a clear suspect or victim. Items ask about the content of the sexual material, motivations behind production, and distribution. Items also identify the two people most involved in the incident and gather demographic characteristics, as well as mental and behavioral health details.


Based on preliminary testing of the N-JOV4 survey using records on technology-facilitated sex crime cases from prior N-JOV studies, the Phase 2 survey is expected to take about 40 minutes, including the time to review participant consent documents. The completion rate for case-level telephone interviews was 64% of eligible cases in N-JOV3, with similar rates achieved in N-JOV1 and N-JOV2.7 To help enhance response rates for this portion of the study relative to previous years, the research team will utilize online scheduling of interviews to simplify this process for investigators and will use both telephone and email as means of reaching investigators. In prior N-JOVs, email addresses were not available for police investigators.


Data retrieval for inconsistencies and item nonresponse. After the data is obtained for both the responding agencies and investigators, it will be cleaned and edited, and the imputation of item missing data will be evaluated and considered. The cleaning and editing process will consist of the following steps: a) identification of a subset of key study variables (i.e., critical items) whose completion is required for a response to be considered complete; b) review of skip patterns and other sources of internal consistency; and c) checks of ranges and values of all variables. After the data is cleaned and edited, item missing data rates will be produced and reviewed, especially for the critical items. The number of variables with missing data, their missing data rates, and their patterns in combination will inform us with respect to the need, scope and details of the imputation approach. Depending on these characteristics, we envision using hot-deck imputation,8 predictive regression models,9 or some combination. The final N-JOV4 data will be merged into the existing data file that includes all cases from prior N-JOVs for trend analysis.
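For the hot-deck option, the following is a minimal sketch of within-cell random hot-deck imputation; the cell variables shown (frame and case type) are illustrative assumptions, and a production implementation would be more elaborate.

```python
import random

def hotdeck_impute(records, var, cell_vars):
    """Fill missing values of `var` with the value of a randomly chosen
    donor record from the same imputation cell (defined by cell_vars)."""
    donors = {}
    for r in records:
        if r.get(var) is not None:
            key = tuple(r[v] for v in cell_vars)
            donors.setdefault(key, []).append(r[var])
    for r in records:
        if r.get(var) is None:
            key = tuple(r[v] for v in cell_vars)
            if donors.get(key):              # leave missing if the cell has no donors
                r[var] = random.choice(donors[key])
    return records

# e.g., hotdeck_impute(cases, var="victim_age", cell_vars=("frame", "case_type"))
```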


We will also provide written documentation of data processing procedures and data editing and cleaning, as well as a final nonresponse analysis, including all item imputations and the unit weights necessary to produce national-level estimates and standard error calculations.


Data coding. After data collection, project staff will begin cleaning and coding the data. Initial attention will be paid to identifying cases that represent duplicates in our data set – cases that were handled by more than one agency. We have built in a number of questions about other agency involvement in the Preliminary Section of our Phase 2 case-level survey for this purpose. In past N-JOVs, approximately 5% of cases were identified as duplicates using these questions. When duplicates are identified, the cases will be reviewed for overall content and each agency’s role in the case, and a final decision will be made as to which agency to link the case with.

A data manager will also supervise the coding of open-ended responses or those that did not fit given categories (e.g., “Other, specify”). Research assistants monitored by the data manager will double-code open-ended responses, compare coded responses for discrepancies, and review and resolve discrepancies with one of the lead investigators.

Responding Sample Size, Precision, and Power. The agency-level sample sizes and response rates proposed suggest an expected responding agency sample size of 2,313 overall. Table 3 provides the precision that can be expected with this agency sample size.


Table 3: Precision for expected agency sample sizes

P      | Q      | n     | Var     | Ste     | RSE    | LCI    | UCI
30.00% | 70.00% | 2,313 | 0.00009 | 0.00953 | 3.18%  | 28.13% | 31.87%
40.00% | 60.00% | 2,313 | 0.00010 | 0.01019 | 2.55%  | 38.00% | 42.00%
50.00% | 50.00% | 2,313 | 0.00011 | 0.01040 | 2.08%  | 47.96% | 52.04%
60.00% | 40.00% | 2,313 | 0.00010 | 0.01019 | 1.70%  | 58.00% | 62.00%
70.00% | 30.00% | 2,313 | 0.00009 | 0.00953 | 1.36%  | 68.13% | 71.87%
30.00% | 70.00% | 1,376 | 0.00015 | 0.01235 | 4.12%  | 27.58% | 32.42%
40.00% | 60.00% | 1,376 | 0.00017 | 0.01321 | 3.30%  | 37.41% | 42.59%
50.00% | 50.00% | 1,376 | 0.00018 | 0.01348 | 2.70%  | 47.36% | 52.64%
60.00% | 40.00% | 1,376 | 0.00017 | 0.01321 | 2.20%  | 57.41% | 62.59%
70.00% | 30.00% | 1,376 | 0.00015 | 0.01235 | 1.76%  | 67.58% | 72.42%
30.00% | 70.00% | 882   | 0.00024 | 0.01543 | 5.14%  | 26.98% | 33.02%
40.00% | 60.00% | 882   | 0.00027 | 0.01650 | 4.12%  | 36.77% | 43.23%
50.00% | 50.00% | 882   | 0.00028 | 0.01684 | 3.37%  | 46.70% | 53.30%
60.00% | 40.00% | 882   | 0.00027 | 0.01650 | 2.75%  | 56.77% | 63.23%
70.00% | 30.00% | 882   | 0.00024 | 0.01543 | 2.20%  | 66.98% | 73.02%
30.00% | 70.00% | 55    | 0.00382 | 0.06179 | 20.60% | 17.89% | 42.11%
40.00% | 60.00% | 55    | 0.00436 | 0.06606 | 16.51% | 27.05% | 52.95%
50.00% | 50.00% | 55    | 0.00455 | 0.06742 | 13.48% | 36.79% | 63.21%
60.00% | 40.00% | 55    | 0.00436 | 0.06606 | 11.01% | 47.05% | 72.95%
70.00% | 30.00% | 55    | 0.00382 | 0.06179 | 8.83%  | 57.89% | 82.11%

Where: P = the estimated population proportion; Q = 1 – P; n = the expected agency sample size; Var = the variance of the estimate of P; Ste = the standard error of the estimate of P; RSE = the relative standard error of the estimate of P = Ste / P; LCI, UCI = the lower and upper 95% confidence interval end points on the estimate of P
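The figures in Table 3 follow from the usual simple-random-sampling formulas: Var = PQ/n, Ste = √Var, RSE = Ste/P, and a 95% confidence interval of P ± 1.96·Ste. A short Python check that reproduces the first row:

```python
import math

Z = 1.96  # 95% confidence

def precision(p, n):
    """SRS precision for an estimated proportion p at responding sample size n."""
    var = p * (1 - p) / n
    ste = math.sqrt(var)
    rse = ste / p
    return var, ste, rse, p - Z * ste, p + Z * ste

var, ste, rse, lci, uci = precision(0.30, 2313)
print(f"Var={var:.5f} Ste={ste:.5f} RSE={rse:.2%} CI=({lci:.2%}, {uci:.2%})")
# Var=0.00009 Ste=0.00953 RSE=3.18% CI=(28.13%, 31.87%)
```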


In addition to precision, the statistical power (i.e., the probability of correctly rejecting the null hypothesis) of the expected responding overall agency sample size may be of interest, for comparisons of subgroups or differences in measures, as seen in Table 4.


Table 4: Effect sizes for 80% power given responding agency sample sizes

Responding sample size | P1  | P2    | Effect size
2,313                  | 0.5 | 0.542 | 0.042
1,376                  | 0.5 | 0.555 | 0.055
882                    | 0.5 | 0.568 | 0.068
55                     | 0.5 | 0.771 | 0.271

P1, P2 = the estimated population proportions being compared
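The detectable differences in Table 4 correspond to a two-group comparison of proportions at alpha = .05 (two-sided) with 80% power. The sketch below, using statsmodels, searches for the smallest detectable P2 at each sample size; it lands close to the tabled values, with small differences reflecting the exact power approximation assumed.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

analysis = NormalIndPower()
for n in (2313, 1376, 882, 55):
    p2 = 0.5
    # step P2 upward until the two-sided test reaches 80% power
    while analysis.power(proportion_effectsize(p2, 0.5), nobs1=n, alpha=0.05) < 0.80:
        p2 += 0.001
    print(n, round(p2, 3), "effect size:", round(p2 - 0.5, 3))
```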



Power and precision. The case-level sample sizes and response rates proposed/expected suggest an expected responding case sample size of 2,000 cases. Of the 2,000 cases, we expect 1,250 to involve arrests (no youth-produced images), 323 to involve youth-produced images with arrests, and 427 to involve youth-produced images without arrests, for a total of 1,573 arrest cases and 750 youth-produced image cases (arrest and non-arrest). Table 5 provides the precision that can be expected with these sample sizes. In addition to precision, the statistical power (i.e., the probability of correctly rejecting the null hypothesis) of the expected responding case sample sizes may be of interest, for comparisons of subgroups or differences in measures, and is depicted in Table 6.

Table 5: Precision for expected case sample sizes

P   | Q   | n     | Var     | Ste     | RSE  | LCI   | UCI
30% | 70% | 2,000 | 0.00011 | 0.01025 | 3.4% | 28.0% | 32.0%
40% | 60% | 2,000 | 0.00012 | 0.01095 | 2.7% | 37.9% | 42.1%
50% | 50% | 2,000 | 0.00013 | 0.01118 | 2.2% | 47.8% | 52.2%
60% | 40% | 2,000 | 0.00012 | 0.01095 | 1.8% | 57.9% | 62.1%
70% | 30% | 2,000 | 0.00011 | 0.01025 | 1.5% | 68.0% | 72.0%

Where: P = the estimated population proportion; Q = 1 – P; n = the expected case sample size; Var = the variance of the estimate of P; Ste = the standard error of the estimate of P; RSE = the relative standard error of the estimate of P = Ste / P; LCI, UCI = the lower and upper 95% confidence interval end points on the estimate of P.




Table 6: Effect sizes for 80% power given responding case sample sizes

Responding sample size | P1  | P2    | Effect size
2,000                  | 0.5 | 0.545 | 0.045
1,250                  | 0.5 | 0.557 | 0.057
750                    | 0.5 | 0.573 | 0.073


Weighting and Variance Estimation. Since the study data will be collected from a nationally representative probability sample of LEAs and cases within LEAs, data weighting will be required to enable unbiased estimation. Also, a mechanism will be required for variance estimation, to reflect the uncertainty in study estimates due to using responses from a subset of the full population. Data weighting will consist of two steps; each step will be similar at the agency and case levels but will differ in the details. First, base weights will be calculated as the reciprocal of the probability of selection for every LEA, and for each case sampled within an LEA. The base weights for cases will include the LEA base weight as a component. Second, nonresponse-adjusted weights will be calculated as the ratio of the sum of base weights for all LEAs or cases to the sum of base weights for responding LEAs or cases within nonresponse adjustment cells. The nonresponse adjustment cells will be formed based on characteristics found to be related to response propensity via classification algorithms – possibly including LEA frame along with other LEA frame variables, and, for cases, depending upon the information available at the frame level regarding case type, for example. For variance estimation, a Taylor Series10 approach will be supported through the provision of stratum and primary sampling unit identification variables.
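A minimal sketch of the two weighting steps, under the simplifying assumption that nonresponse adjustment cells have already been assigned to each sampled unit:

```python
from collections import defaultdict

def nonresponse_adjusted_weights(units):
    """units: dicts with 'base_weight' (1 / selection probability), 'cell'
    (nonresponse adjustment cell), and 'responded' (bool). Returns adjusted
    weights for respondents; nonrespondents receive weight 0."""
    cell_total = defaultdict(float)   # sum of base weights over all sampled units
    cell_resp = defaultdict(float)    # sum of base weights over respondents only
    for u in units:
        cell_total[u["cell"]] += u["base_weight"]
        if u["responded"]:
            cell_resp[u["cell"]] += u["base_weight"]
    return [
        u["base_weight"] * cell_total[u["cell"]] / cell_resp[u["cell"]]
        if u["responded"] else 0.0
        for u in units
    ]
```

Case-level weights would be built the same way, with each case's base weight including its agency's base weight as a component.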


Nonresponse Bias Analysis. The Office of Management and Budget (OMB) guidelines call for conducting a nonresponse bias analysis when the response rate is less than 80%. The two main concerns with nonresponse are the effects on sample size (and thus on data collection costs and the precision of the estimates) and on sample representativeness (and thus on the accuracy/bias of the estimates). With lower expected response rates, data collection costs could increase if more LEAs or cases must be sampled, contacted, and recruited to participate.


The potential effect on estimate accuracy is troubling because it is difficult to assess how much bias is introduced due to nonresponse. A nonresponse bias analysis assesses the key components that can lead to bias in an estimate.


For example, the bias in an estimated mean computed using the inverse of the selection probability weights can be written as b(ȳ) = (1 – rr_w)(ȳ_r – ȳ_m), where rr_w is the weighted response rate and ȳ_r and ȳ_m are the population means of the respondents and nonrespondents, respectively. If we think about this expression for subgroups, we see that biases differ when the weighted response rates differ for the subgroups or when the means of the respondents and nonrespondents differ for these subgroups. To analyze these relationships in a survey, one must have data on respondents and nonrespondents so the weighted response rates and differences in the weighted means can be computed. While the auxiliary data for conducting such an analysis is limited (for example, LEA type, size, population served, etc.), it is nonetheless informative.
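A small numeric illustration of this expression, with assumed inputs rather than study estimates:

```python
# Assumed values for illustration only.
rr_w = 0.86            # weighted response rate
y_r, y_m = 0.40, 0.50  # respondent and nonrespondent means of some proportion

bias = (1 - rr_w) * (y_r - y_m)
print(bias)  # -0.014: a 10-point respondent/nonrespondent gap yields ~1.4
             # percentage points of bias at an 86% response rate
```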


For assessing the effects of response rates on estimates, a range of nonresponse bias analysis types is available, and all that are applicable will be used with the N-JOV4 study data. Nonresponse bias analysis methods can be classified into five broad types, based on the analysis variable(s) used:

  1. Response rate or response propensity methods

  2. Differences in and correlates of continuous frame variables

  3. Level-of-effort analyses

  4. Comparisons and/or adjustments to external and auxiliary variables and totals

  5. Sensitivity analyses

These five nonresponse bias analysis types will be applied, as warranted, separately to each of the Phase 1 (i.e., agency) and Phase 2 (i.e., case) level surveys. Given the expected response rates provided above (86% and 64%, respectively), this could mean conducting a nonresponse bias analysis for the Phase 2 (case) survey only. The nature of N-JOV4’s research questions, the population of inference and the study-eligible case definition are such that the fourth category may apply only to agency frames 1 and 2 and not frame 3 (see Table 1), given that similar study-eligible case counts can be provided for the ICAC Task Forces and their affiliates, but not for other agencies. Data should be available, however, for conducting the analyses in the remaining four categories, which span ten specific nonresponse bias analysis types in Westat’s proprietary NRBA macros.
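As a concrete instance of the first analysis type, weighted response rates can be compared across subgroups defined on the frame (e.g., the three sampling frames). A minimal sketch:

```python
from collections import defaultdict

def weighted_response_rates(units, group_var):
    """Weighted response rates by subgroup; units are dicts with
    'base_weight', 'responded', and the grouping variable."""
    num = defaultdict(float)   # responding base weight
    den = defaultdict(float)   # total base weight
    for u in units:
        g = u[group_var]
        den[g] += u["base_weight"]
        if u["responded"]:
            num[g] += u["base_weight"]
    return {g: num[g] / den[g] for g in den}

# e.g., weighted_response_rates(agencies, "frame"); large gaps across frames
# would flag potential nonresponse bias for further investigation.
```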


Analysis and Reporting. Following data cleaning and weighting, three analysis files will be developed: 1) an agency-level file, 2) a cross-sectional (N-JOV4 only) case-level file, and 3) a trend (all four N-JOVs) data file. These files will be used to conduct analyses according to the plans developed by project staff and approved by NIJ. In consultation with NIJ, the project team will produce tables with estimates of findings on the:

  • Incidence of arrests for technology-facilitated sex crimes against minors;

  • Incidence of different types of these crimes, including child sexual exploitation material (CSEM) production, CSEM possession, sexual exploitation by someone met online, and sexual exploitation by a family member or acquaintance;

  • Incidence of undercover operations for these crimes;

  • Incidence of non-arrest cases involving youth produced images coming to law enforcement attention;

  • Trends in incidence rates across the four N-JOV studies (spanning from year 2000 to 2019);

  • The number and characteristics of victims identified in these crimes;

  • The number and characteristics of suspects in these crimes; and

  • Trends in the characteristics of these cases over time.



  3. Method to Maximize Response Rates and Deal with Nonresponse



Multiple strategies will be used to maximize response rates, the majority of which have been used successfully in the three past N-JOV studies. These strategies include a mail screener survey designed to be simple and easy to complete, a toll-free number for questions, a series of mailings that include multiple copies of the mail screener survey, and telephone calls to agencies that have not returned the mail screener after several attempts.



In addition, we plan to implement some new strategies to maintain or further enhance response rates. First, we plan to implement an online call scheduling system to reduce the number of calls between the interviewer and participant to schedule the case-level interviews. With this system participants can view available times online and book the slot that’s most convenient for them. Participants who prefer to call can do so, and interviewers can enter their information manually. Either way, confirmations are sent immediately and automatically. It is also easy for participants to change or cancel an appointment. All they have to do is click the link contained in the appointment confirmation email.

Second, an online self-administered option will be made available for completing the mail screener survey. Some department heads may prefer the ease and flexibility of completing and submitting the screener survey online, which may encourage participation by some agencies that would not otherwise take part. These dual options for the national screener survey should help maintain or increase response rates for this portion of the study.

Third, the research team will publicize the study to generate interest and expectation for the survey’s completion among law enforcement agencies. In the months leading up to the launch of the national study, the research team will make announcements through relevant listservs and have study partners who are connected with the ICAC program notify Commanders.

Finally, the research team will include a copy of a report from previous N-JOV findings so participants can see how this data is aggregated and utilized to help law enforcement.



  4. Tests of Procedures to be Undertaken



N-JOV4 will include a pilot test of the N-JOV methodology for identifying and providing details of technology-facilitated sex crimes against children as well as a national survey. This submission describes the pilot test and national survey in detail.









  5. Consultation Information

The NIJ contact is:

Benjamin Adams

Social Science Analyst

National Institute of Justice

810 Seventh St, NW

Washington, DC 20531

Benjamin.Adams@usdoj.gov

202-616-3687


The Principal Investigator is:


Kimberly Mitchell Lema

Research Associate Professor

Crimes against Children Research Center

University of New Hampshire

10 West Edge Drive, Suite 106

Durham, NH 03824

Kimberly.Mitchell@unh.edu

603-862-4533

1 Wolak J, Mitchell K, Finkelhor D. Methodology Report: 3rd National Juvenile Online Victimization (NJOV3) Study. Durham, NH: Crimes against Children Research Center; 2011.


2 National Archive of Criminal Justice Data. Law Enforcement Agency Roster (LEAR), 2016. Institute for Social Research. https://www.icpsr.umich.edu/web/NACJD/studies/36697.

3 Office of Juvenile Justice and Delinquency Prevention. Internet Crimes Against Children Task Force Program. U.S. Department of Justice, Office of Justice Programs. https://ojjdp.ojp.gov/programs/internet-crimes-against-children-task-force-program. Published 2020. Accessed October 9, 2020.

4 Wolak J, Finkelhor D, Mitchell KJ. Trends in Law Enforcement Responses to Technology-Facilitated Child Sexual Exploitation Crimes: The Third National Juvenile Online Victimization Study (NJOV3). University of New Hampshire: Crimes against Children Research Center; 2012.

5 Wolak J, Mitchell K, Finkelhor D. Methodology Report: 3rd National Juvenile Online Victimization (NJOV3) Study. Durham, NH: Crimes against Children Research Center; 2011.

6 Dillman, D. A., Smyth, J. D., and Christian, L. M. (2014). Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method. (4th ed.). Hoboken, NJ: Wiley.


7 Wolak J, Mitchell K, Finkelhor D. Methodology Report: 3rd National Juvenile Online Victimization (NJOV3) Study. Durham, NH: Crimes against Children Research Center; 2011.

8 Fellegi IP, Holt D. A systematic approach to automatic edit and imputation. Journal of the American Statistical Association. 1976;71(353):17-35.

9 Paulin GD, Ferraro DL. Imputing income in the consumer expenditure survey. Monthly Labor Review. 1994;117(12):23-31.

10 Woodruff RS. A simple method for approximating the variance of a complicated estimate. Journal of the American Statistical Association. 1971;66(334):411-414.



