B. Statistical Methods
1. Universe and Respondent Selection
The universe for the 2018 Census of State and Local Law Enforcement Agencies (CSLLEA) will consist of all state, county, local, and tribal law enforcement agencies in the 50 states and the District of Columbia that (1) are publicly funded and (2) employ at least the equivalent of one full-time sworn officer with general arrest powers. This will include local, county, and regional agencies; state police and highway patrol agencies; sheriff’s offices; and special purpose agencies focused on tribal lands, public facilities, natural resources, transportation systems, criminal investigations, and other targeted enforcement functions.
To develop this universe, RTI first built the Law Enforcement Agency Roster (LEAR), focusing on general purpose agencies. To prepare for the 2018 CSLLEA, RTI then conducted further analysis of existing agency lists and added special purpose agencies to the LEAR, renaming it the Agency Record Management System (ARMS). The ARMS also added functionality allowing BJS to continuously update contact information. Finally, the 2018 CSLLEA universe was selected from the ARMS based on the criteria identified above.1 The following sections describe each phase of the universe development in more detail.
LEAR Development. The LEAR was created between summer 2015 and winter 2016; work consisted of seven stages using information from multiple datasets:
Stage 1: The 2008 and 2014 CSLLEA universe files were linked. The resulting file served as the basis of the LEAR.
Stage 2: Information from five additional data files was appended to supplement the LEAR: (1) 2013 Law Enforcement Management and Administrative Statistics (LEMAS) survey, (2) Law Enforcement Agency Identifiers Crosswalk (LEAIC), (3) Federal Bureau of Investigation’s (FBI) Police Employee Data, (4) validation data from frame cleaning conducted during the Survey of Law Enforcement Personnel in Schools (SLEPS), and (5) agency lists generated by each state Peace Officer Standards and Training (POST). These datasets provided both additional cases and additional variables (e.g., officer staffing counts).
Stage 3: The merged datasets were de-duplicated. Some source datasets did not contain sufficiently unique identifiers, which resulted in duplicate records. Records were reviewed and assigned duplicate identifiers where appropriate.
Stage 4: Cases were reviewed for validity. This involved a combination of online research and direct agency contact.
Stage 5: Logic checks were conducted on agency information to identify records that failed basic consistency checks (e.g., large discrepancies in sworn officer staffing size between different datasets).
Stage 6: Two additional files from the 2014 CSLLEA were verified. First, a subset of cases was marked as out-of-scope. These cases were reviewed to ensure that this was an accurate disposition. Second, a set of cases that had disposition changes between the intermediate 2014 CSLLEA file and the final 2014 CSLLEA file were reviewed for accuracy.
Stage 7: Cases with eligibility changes between the interim and final datasets were reviewed. Agencies marked as out-of-scope were reviewed to determine if this disposition was accurate and consistent with the inclusion criteria for the CSLLEA universe file.
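As a simplified illustration of the Stage 3 de-duplication, records that lack a shared unique identifier can be grouped on a normalized blocking key so candidate duplicates are routed to manual review. This is only a sketch of the general technique, not the production matching logic, and the field names (`id`, `name`, `state`) are hypothetical; real matching would use additional fields and fuzzy comparison.

```python
import re
from collections import defaultdict

def dedup_key(record):
    """Build a blocking key from a normalized agency name plus state.
    Strips punctuation and common suffixes so near-identical names
    (e.g., 'Police Department' vs. 'PD') collide on the same key."""
    name = re.sub(r"[^a-z0-9 ]", "", record["name"].lower())
    name = re.sub(r"\b(police department|pd|dept)\b", "", name)
    return (" ".join(name.split()), record["state"])

def find_candidate_duplicates(records):
    """Group records by blocking key; any key shared by two or more
    records yields a candidate-duplicate set for manual review."""
    groups = defaultdict(list)
    for r in records:
        groups[dedup_key(r)].append(r["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

# Hypothetical records: 1 and 2 are the same agency under two names.
records = [
    {"id": 1, "name": "Springfield Police Department", "state": "IL"},
    {"id": 2, "name": "Springfield PD", "state": "IL"},
    {"id": 3, "name": "Springfield Police Department", "state": "MA"},
]
dups = find_candidate_duplicates(records)  # → [[1, 2]]
```

Blocking on a coarse key keeps the manual-review workload small: only records that collide on the key need side-by-side comparison.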
RTI was the data collection agent for both the 2016 LEMAS Body-Worn Camera Supplement (BWCS) and the 2016 LEMAS Core survey. To prepare for these data collection efforts, RTI conducted additional research on local, sheriff, and the primary state law enforcement agencies through the development of the Law Enforcement Agency Roster (LEAR), which was conducted under the 2014 CSLLEA clearance (OMB 1121-0346). Data were collected to understand agency sworn staffing size, responsibilities, and in-service status. Finally, data collected through the 2016 BWCS and 2016 LEMAS Core were incorporated into the LEAR. This included information on in-service status, officer staffing size, agency chief executive, and contact information (e.g., telephone and mailing address).
ARMS Expansion of LEAR. After completion of the 2016 LEMAS, the LEAR was upgraded into the ARMS to allow for better tracking of changes to agency records and easier updating of contact information. Within the ARMS, further work was done to update the current list of general purpose agencies and to clean the special purpose agencies in the LEAR that were not included in the LEMAS frames.
RTI conducted research on state-level chiefs’ and sheriffs’ associations to identify additional in-scope agencies. Some of these lists were available on the associations’ websites. In some cases, RTI emailed the organization’s point of contact to verify that the online membership list was up to date. Other associations did not offer lists online; in those cases, RTI contacted the association to request its list. RTI located 49 chiefs’ associations (all states except Hawaii) and 44 sheriffs’ associations (Wyoming has a combined association for chiefs and sheriffs), with 23 providing updated lists via email or verifying that their online lists were current. RTI also contacted 16 special association groups, but all either declined to provide a list or did not respond to the request. In total, over 11,000 agencies were represented on these membership lists. This combined agency list was programmatically matched against existing LEAR agencies to identify new in-scope cases. After automated matching, approximately 800 cases required manual review and classification. No law enforcement agencies were contacted by RTI directly to update their point-of-contact information.
RTI conducted a similar process to identify special purpose agencies, and that work underwent manual review to determine their in-service and in-scope status. Cases went through a multi-stage review process, starting with web-based research. A majority of cases could be updated for in-service and in-scope status through publicly accessible information. Cases with insufficient information available online underwent further follow-up through email and phone contact. This work was only done to establish if an agency was operational and in-scope for the CSLLEA.
Elimination of Out-of-Scope Agencies for CSLLEA. The following actions have been taken to reduce the possibility of including out-of-scope agencies.
Agency in-service status has been verified across multiple datasets, including state Peace Officer Standards and Training (POST) lists and chief of police and sheriff association lists, to reduce the burden on cities without a law enforcement agency.
Non-publicly funded law enforcement agencies (e.g., private universities) have been vetted to reduce out-of-scope agency participation.
Publicly available information was reviewed to identify agencies that lack general sworn law enforcement authority.
Agency size has been checked across a variety of sources, including past CSLLEA waves, 2014 FBI Police Employee data, and the 2016 LEMAS core, to minimize the number of agencies in the frame with less than one full-time-equivalent sworn officer.
Agency chief executive information has been updated based on web searches and updated point of contact information received from the 2016 BWCS and 2016 LEMAS core to reduce respondent need to manually update this information.
Although the development and review of the CSLLEA universe is not finished, an estimated 20,000 agencies will be defined as in-scope for the collection.
2. Procedures for Collecting Information
Data Collection Procedures. The 2018 CSLLEA will employ a multi-mode approach that relies primarily on web-based data collection; nonresponse follow-up efforts will allow for both hard copy and telephone survey response. When the 2014 CSLLEA ended data collection, there was an overall response rate of 84% (about 16,000 agencies). Of these, almost 9,000 agencies (56%) responded via web. Due to increased web-based capabilities of law enforcement agencies and the project’s strong encouragement to respond using the web-based data collection tool, BJS expects that a majority of the agencies responding to the 2018 CSLLEA will use the web-based option.
There is some ambiguity in the experimental literature about whether surveys should offer web-based and paper modes concurrently or sequentially. Studies have found that offering a concurrent choice of mode does little to improve response rates. Millar and Dillman (2011) found, among college students, that the concurrent approach did not improve response rates, though it did not significantly diminish them either; a sequential approach of web then mail, however, did improve response. Medway and Fulton (2012) found in their meta-analysis of 19 experimental comparisons that offering concurrent web-based and mail surveys reduced response rates. It should be noted that these results come from individual-level surveys, many conducted among college student samples, rather than from establishment surveys.
In order to maximize efficiency and decrease costs over the long term, the initial mailing will not include a copy of the paper instrument.2 Data collection will begin with a survey invitation letter (mailed via USPS) and an email to the point of contact (POC) for each LEA to inform him or her about the survey. This letter will be signed by the Director of BJS and will explain the purpose and significance of the survey. It will include the survey web address and agency-specific log-in credentials (Attachment 5). The survey invitation letter will also provide a toll-free telephone number and project-specific e-mail address for the survey Help Desk, should the POC have any questions. Instructions for changing the POC via the survey website, fax, or telephone will be included in the event the LEA needs to designate a more appropriate person. Included with the survey invitation letter will be an informational flyer (Attachment 6). The flyer will describe the overall BJS law enforcement survey program and how the CSLLEA relates to the LEMAS and other BJS collections. The flyer will also explain the reduced scope of the CSLLEA survey content, the lower burden, and the importance of agency participation in each survey. Accompanying this lead letter will be a letter of support signed by the major law enforcement organizations in the U.S. (i.e., the International Association of Chiefs of Police (IACP), National Sheriffs’ Association (NSA), Major County Sheriffs of America (MCSA), Major City Chiefs of Police (MCCP), Police Executive Research Forum (PERF), and Association of State Criminal Investigative Agencies (ASCIA)) (Attachment 7). Lastly, a POC Update Form (Attachment 8) will be included so that the recipient can use it to fax contact information for a newly designated POC.
After the invitation letter mailing, agencies will receive additional communications during the data collection period.
Approximately one week after sending the survey invitation letter, RTI will send an e-mail message that is identical to the survey invitation letter (Attachment 9) to those recipients for whom an email address is available to confirm receipt of the study materials.
Three weeks after sending the survey invitation letters, a first reminder email will be sent to POCs at nonresponding agencies, including those who are newly identified (Attachment 10). These emails, signed by the BJS Project Manager, will express the importance of the CSLLEA to the LEA community and encourage response via the online survey (or paper copy, if preferred).
Three weeks after the first reminder emails are sent, RTI will send a second reminder— a letter via USPS—to POCs in agencies that have not responded (Attachment 11). This mailing will also include a copy of the paper questionnaire (Attachment 1) and return envelope.
Three weeks after sending the second reminder, we will send a third reminder via email (Attachment 12).
One month after the third reminder (approximately 3.5 months after sending the survey invitation letters), we will begin telephone follow-up with all nonresponding LEAs (Attachment 13). These contacts will include efforts to complete the survey during the call and to prompt nonrespondents to complete the survey via the web.
About 3 weeks after telephone follow-up begins, we will send a fourth reminder email to LEAs who have not responded (Attachment 14).
The final correspondence will be an end-of-study letter (Attachment 15) to nonrespondents to announce the forthcoming closure of the study and make a final appeal to participate. This letter will be sent approximately 12 weeks before the survey is closed.
Throughout the data collection period, respondents will receive a thank-you email or letter, depending on the completion mode (Attachment 16). The text will formally acknowledge receipt of the survey and state that the agency may be contacted for clarification once their survey responses are processed.
Upon receipt of a survey (web or hard copy), data will be reviewed and edited, and if needed, the respondent will be contacted to clarify answers or provide missing information. The hard copy survey will be developed using TeleForm, which will allow the surveys to be scanned in and the data read directly into the same database containing the web survey data. This will ensure that the same data quality review procedures are applied to all survey data, regardless of response mode. The following is a summary of the data quality assurance steps that RTI will observe during the data collection and processing period:
Data Editing. RTI will attempt to reconcile missing or erroneous data through automated and manual edits of each questionnaire. In collaboration with BJS, RTI will develop a list of edits that can be completed by referring to other data provided by the respondent on the survey instrument. For example, if a screening question was left blank, but the follow-up questions were completed, a manual edit could be made to indicate the intended positive response to the screening question. Through this process, RTI can quickly identify which hard copy cases require follow-up and indicate the items that need clarification or retrieval from the respondent.
Data Retrieval. When it is determined that data retrieval is needed, an Agency Liaison (AL) will contact the respondent for clarification. Throughout the data retrieval process, RTI will document the questions needing retrieval (e.g., missing or inconsistent data elements), request clarification on the provided information, obtain values for missing data elements, and examine any other issues related to the respondent’s submission.
Data Entry. Respondents completing the survey via the web instrument will enter their responses directly into the online instrument. For those respondents returning the survey via hardcopy (mail or fax), data will be scanned in once received and determined complete. Once the data has been entered into the database, it will be made available to BJS via an SFTP site. To confirm that editing rules are being followed, RTI will review frequencies for the entered data after the first 10 percent of cases are received. Any issues will be investigated and resolved. Throughout the remainder of the data collection period, RTI staff will conduct regular data frequency reviews to evaluate the quality and completeness of data captured in both the web and hard copy modes.
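The automated edit described above (inferring an intended screener response when its follow-up items were answered) can be sketched as follows. The item names (`q3`, `q3a`, `q3b`) and field structure are hypothetical, and the edit log shown here is only an illustration of the review-and-document workflow, not RTI's actual editing system.

```python
def apply_screener_edit(record, screener, followups):
    """If the screening item is blank but any of its follow-up items
    were answered, impute the implied 'yes' to the screener and log
    the edit so it can be reviewed (and, if needed, confirmed with
    the respondent during data retrieval)."""
    edits = []
    if record.get(screener) is None and any(
        record.get(f) is not None for f in followups
    ):
        record[screener] = "yes"
        edits.append((screener, "imputed 'yes' from completed follow-ups"))
    return edits

# Hypothetical case: screener q3 left blank, follow-ups q3a/q3b answered.
rec = {"q3": None, "q3a": 12, "q3b": 4}
log = apply_screener_edit(rec, "q3", ["q3a", "q3b"])
# rec["q3"] is now "yes"; log records the edit for reviewer sign-off.
```

Logging each automated edit, rather than silently overwriting values, preserves the distinction between reported and edited data when cases are later flagged for respondent follow-up.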
3. Methods to Maximize Response
Minimizing Non-Response
The CSLLEA has historically achieved high survey response rates, with the 2008 and earlier administrations achieving a 99% response rate. BJS and RTI will undertake various procedures to maximize the likelihood that the 2018 CSLLEA reaches a similar level of participation. The largest 7 percent of state and local agencies employ almost two-thirds of sworn personnel nationally, so it is critical to obtain responses from these agencies. Recognizing the benefits of its existing rapport with the LEAs, PERF will support follow-up efforts among these agencies. PERF is well suited to conduct this outreach because of its extensive membership roster representing large law enforcement agencies.
The 2014 CSLLEA achieved only an 84% response rate, but it also employed a significantly longer questionnaire: 21 items with an estimated burden of 1 hour. Because survey length likely affected the overall response rate, the 2018 CSLLEA has been revised to better mirror the 2008 CSLLEA, which achieved a response rate of over 99%. The 2008 CSLLEA had 6 items and the 2018 CSLLEA will have 7 items; both instruments have an estimated burden of 30 minutes. Reducing the CSLLEA to just the core items will help increase response rates among agencies with limited resources.
BJS will use a web-based instrument supported by several online help functions to maximize response rates. For convenience, respondents will receive the survey link in an email invitation and a mailed hard copy invitation. A Help Desk will be available to provide both substantive and technical assistance. BJS will supply the Help Desk with answers to frequently asked questions and guidance on additional questions that may arise.3 In addition, the web survey interface is user-friendly, which encourages response and ensures more accurate responses. Because online submission is such an important response method, close attention will be paid to the formatting of the web survey instrument. The online application will be flexible so it can adapt to meet the needs of multiple device types (e.g., desktop computer and tablet), browser types (e.g., Internet Explorer and Google Chrome), and screen sizes. Other features in the instrument will include the following:
Respondents’ answers will be saved automatically, and they will have the option to leave the survey partway through and return later to finish.
The online instrument will be programmed with data consistency checks and automatic prompts to ensure inter-item consistency and reduce the likelihood of “don’t know” and out-of-range responses, thereby reducing the need for follow-up with the respondent after survey submission.
LEAs may also download and print a hard copy version of the survey from the website, or request one from the Help Desk.
To obtain higher response rates and to ensure unbiased estimates, multi-stage survey administration and follow-up procedures have been incorporated into BJS’s response plans. Ensuring adequate response (not just unit/agency response rates, but also item responses) begins with introducing agencies to the survey. This will be accomplished initially through the invitation letter and accompanying documents (Attachments 5-16). Resources available to help the respondent complete the survey (e.g., telephone- or email-based Help Desk support) will be described in those communications. We will provide LEAs with online and fax methods to identify respondents and change the POC assignment if needed. The online and hard copy versions of the instrument will capture the name of the individual who completes the survey to facilitate later follow-up.
Adjusting for Non-Response
With any survey, it is typically the case that some of the selected units (i.e., law enforcement agencies) will not respond to the survey request (i.e., unit nonresponse) and some will not respond to particular questions (i.e., item nonresponse). Weighting will be used to adjust for unit nonresponse in the 2018 CSLLEA. To determine which factors to use in the agency nonresponse weight adjustments, a procedure available in RTI’s SUDAAN software, based on the Generalized Exponential Model (GEM), will be used to model response propensity using information from the ARMS (e.g., agency characteristics such as geography, operating budget, and whether officers have arrest powers) within sampling strata (Folsom & Singh, 2000). Ideally, only variables highly correlated with the outcomes of interest will be included in the model, in order to reduce potential bias. Given the expected differential response rates by agency type and size, the weighting adjustment procedures will attempt to minimize bias in the estimates within these domains.
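The GEM calibration implemented in SUDAAN is more sophisticated than can be shown here, but the underlying idea can be illustrated with a basic weighting-class nonresponse adjustment: within each adjustment cell (e.g., agency type by size class), respondents' base weights are inflated so they account for the cell's nonrespondents. This is a simplified stand-in for the propensity-based approach described above, with hypothetical record fields.

```python
from collections import defaultdict

def nonresponse_adjust(agencies):
    """Weighting-class nonresponse adjustment: within each cell,
    multiply each respondent's base weight by
    (sum of all base weights in cell) / (sum of respondent base
    weights in cell), so adjusted weights sum to the cell total."""
    totals = defaultdict(float)
    resp_totals = defaultdict(float)
    for a in agencies:
        totals[a["cell"]] += a["base_weight"]
        if a["responded"]:
            resp_totals[a["cell"]] += a["base_weight"]
    adjusted = {}
    for a in agencies:
        if a["responded"]:
            factor = totals[a["cell"]] / resp_totals[a["cell"]]
            adjusted[a["id"]] = a["base_weight"] * factor
    return adjusted

# Hypothetical census cells with base weight 1: 4 local agencies
# (3 respond) and 2 sheriff's offices (1 responds).
agencies = (
    [{"id": f"L{i}", "cell": "local", "base_weight": 1.0,
      "responded": i < 3} for i in range(4)]
    + [{"id": f"S{i}", "cell": "sheriff", "base_weight": 1.0,
        "responded": i < 1} for i in range(2)]
)
w = nonresponse_adjust(agencies)
# Each local respondent carries weight 4/3; the sheriff respondent
# carries weight 2; adjusted weights still sum to the frame total of 6.
```

Defining cells by agency type and size mirrors the expectation, stated above, that response rates will differ across those domains.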
As previously stated, and based on traditional CSLLEA response patterns, an overall response rate of approximately 95 percent is expected. To ensure that nonresponding agencies are not fundamentally different from those that participate, a nonresponse bias analysis will be conducted if the agency-level response rate obtained in the 2018 CSLLEA falls below 80 percent. Administrative data on agency type, size, and census region or division will be used in the nonresponse bias analysis. For each agency characteristic, BJS will compare the distribution of respondents to nonrespondents, and a Cohen’s effect size statistic will be calculated for each characteristic. If any characteristic has an effect size that falls into the “medium” or “large” category, as defined by Cohen, there is a potential for bias in the estimates; each such characteristic will be included in a nonresponse model to adjust weights to minimize the potential bias. In addition to estimating effect sizes, an examination of early and late responders will be conducted. If late responders (i.e., those that take more contact attempts before responding) differ significantly on the key outcomes of interest, that is also an indication of potential bias. Comparisons will be made to determine whether the potential for bias varies by agency type and size.
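For a categorical agency characteristic, one common Cohen effect size is Cohen's w, which compares an observed distribution (here, among respondents) to a reference distribution (here, the full frame); Cohen's conventional thresholds are 0.1 (small), 0.3 (medium), and 0.5 (large). The distributions below are hypothetical and shown only to illustrate the calculation.

```python
import math

def cohens_w(observed_props, reference_props):
    """Cohen's w for categorical distributions:
    w = sqrt( sum_i (p_obs_i - p_ref_i)^2 / p_ref_i ).
    A value of 0.3 or more ('medium' by Cohen's conventions) would
    flag a characteristic for the nonresponse weighting model."""
    return math.sqrt(sum(
        (observed_props[k] - reference_props[k]) ** 2 / reference_props[k]
        for k in reference_props
    ))

# Hypothetical distribution of agency type in the frame vs. among
# respondents (proportions sum to 1 in each).
frame = {"local": 0.65, "sheriff": 0.20, "special": 0.15}
resp = {"local": 0.70, "sheriff": 0.18, "special": 0.12}
w = cohens_w(resp, frame)  # ≈ 0.11, a "small" effect by Cohen's thresholds
```

In this hypothetical example the effect size falls below the medium threshold, so the characteristic would not by itself trigger inclusion in the bias-adjustment model.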
4. Testing of Procedures
Through this request, BJS also seeks clearance to conduct cognitive interviews using the draft hardcopy CSLLEA questionnaire (Attachment 1). These tests will focus on the clarity of the instructions and question wording, respondents’ ability and willingness to apply the study definitions when answering the questions, the availability of data needed to provide accurate responses, and the estimated burden associated with participation.
A sample of 96 agencies will be purposively selected to represent the small and large local, sheriff, and special purpose agencies found among the CSLLEA population; from that sample, we will cognitively test up to 48 agencies. All agencies that are invited to take part in the test will be drawn from the ARMS and be defined as in-scope for the 2018 CSLLEA. The distribution of agencies by key characteristics is shown in Table 5.
Table 5. Expected cognitive interview agency sample and participants, by size and agency type
Category | Number Selected | Number of Participant Agencies
Small (less than 100 FTS) | |
  Local police department | 16 | 8
  Sheriff's office | 16 | 8
  Special purpose agency | 16 | 8
Large (100 or more FTS) | |
  Local police department | 16 | 8
  Sheriff's office | 16 | 8
  Special purpose agency | 16 | 8
TOTAL | 96 | 48
Note: FTS = Full-time sworn
Initial contact with 48 agencies (8 from each stratum) will be made by telephone. RTI staff will call and ask to speak with the agency head identified in the LEAR. Once contact is established with the agency head (or a designee), the purpose of the test and the scope of agency involvement will be explained. When an agency is recruited, contact information for a designated respondent will be obtained. If an agency does not choose to participate, a replacement from the same stratum will be selected and outreach will be made to recruit the new agency. The replacement process will be repeated until 8 agencies are recruited from each stratum.
Within 1 week of agency recruitment, a survey package will be sent to the respondent. The packet will include a cover letter explaining the purpose of the test (Attachment 17), a copy of the draft questionnaire (Attachment 1), and a postage paid return envelope. Respondents will be asked to complete the questionnaire within 1 week and return it to RTI. They will also be asked to record the time spent by all agency staff to complete the form; this information will be captured on the questionnaire.
Upon receipt of the completed questionnaire, RTI will contact the respondent by telephone or email to schedule a 30-minute debriefing call. During that call, a member of the project team will conduct a retrospective cognitive interview using a standardized interview guide (Attachment 18). Data providers will be asked to describe their experience using the questionnaire and to clarify any significant differences between their survey responses and data contained in the ARMS (differences will have been identified by project staff prior to the call).
Analysis of the test data will include a comparison of the cognitive interview data against previous CSLLEA and LEMAS data to compare reported staffing sizes, and a review of the reasons for observed differences as reported during the debriefing calls. This analysis will identify necessary revisions to the questionnaire. Additionally, if four or more respondents of any given agency type (i.e., local police, sheriffs’ office, or special purpose agency) express confusion or concerns over a particular item that is applicable to the agency, the item will be flagged for review and possible removal. We anticipate that this process will be used for the fully sworn versus limited sworn items and may lead us to drop the limited sworn categories from items 6 and 7. The burden associated with the pilot test forms will also be assessed to determine if questions need to be removed or modified. All analyses will be done by agency size and type.
A report of findings and recommendations will be submitted to BJS after the testing is completed. BJS will consider the recommendations and instruct RTI regarding revisions to the survey questionnaire. Any modifications made to the survey instrument will not increase respondent burden and likely will decrease overall burden. Using the revised version of the questionnaire, programming specifications will be developed and the web survey (and hard copy questionnaire) will be produced. BJS will inform OMB of the cognitive testing results before launching the full data collection.
Contacts for Statistical Aspects and Data Collection
BJS Contacts:
Shelley S. Hyland, Ph.D.
Statistician
Bureau of Justice Statistics
202-305-5552
Kevin M. Scott, Ph.D.
Law Enforcement Statistics Unit Chief
Bureau of Justice Statistics
202-616-3615
RTI Project Staff:
Tim Smith
RTI International
Travis Taniguchi, Ph.D.
RTI International
References
Folsom, R.E., & Singh, A.C. (2000). The generalized exponential model for sampling weight calibration for extreme values, nonresponse, and poststratification. In Proceedings of the American Statistical Association’s Survey Research Methods Section, 598-603.
Medway, R.L., & Fulton, J. (2012). When more gets you less: a meta-analysis of the effect of concurrent web options on mail survey response rates. Public Opinion Quarterly, 76(4), 733-746.
Millar, M.M., & Dillman, D.A. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75(2), 249-269.
National Research Council. (2009). Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Panel to Review the Programs of the Bureau of Justice Statistics. Robert M. Groves and Daniel L. Cork, eds. Committee on National Statistics and Committee on Law and Justice, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Reaves, B. (2011). Census of State and Local Law Enforcement Agencies, 2008. Washington, D.C.: Bureau of Justice Statistics.
1 The ARMS contains agencies that are in-scope for the CSLLEA as well as agencies that are not in-scope.
2 A delayed paper survey mailing will be done with all agencies except for the 275 tribal agencies in the CSLLEA frame. Due to the sporadic access to internet service on tribal lands, hard-copy survey submission will be the primary method for tribal agencies. Therefore, the paper survey will be included in the initial mailing with a secondary option of web submission for these agencies.
3 This document will be created after cognitive testing of the 2018 CSLLEA.