TO: Shelly Wilkie Martinez
Office of Statistical and Science Policy
Office of Management and Budget
THROUGH: Lynn Murray
Department Clearance Officer
Justice Management Division
Jeri Mulrow
Acting Director
Bureau of Justice Statistics
Michael Planty, Ph.D.
Deputy Director
Bureau of Justice Statistics
FROM: Rachel Morgan, Ph.D.
Bureau of Justice Statistics
DATE: April 4, 2016
SUBJECT: BJS request for OMB clearance to administer a web-based, online survey to state and local Adult Protective Services agencies, in support of the Assessment of Administrative Data on Elder Abuse, Mistreatment and Neglect (EAMN) project, through the generic clearance agreement granted to BJS (OMB Number 1121-0339)
The Bureau of Justice Statistics (BJS) is requesting clearance to conduct a web-based, online survey to collect data on the availability of administrative data on elder abuse, mistreatment, and neglect (EAMN) from state and local Adult Protective Services (APS) agencies. This pilot study will assess whether APS data systems can be used to report on key indicators of victimization and criminal victimization.
The term EAMN refers to a wide range of civil and criminal violations against elderly victims, some of whom may be physically or mentally vulnerable. EAMN may take the form of physical or sexual violence, emotional or psychological abuse, financial or material exploitation, caregiving neglect, or abandonment. EAMN is a growing concern as the baby boomer generation ages and the number of elderly persons in the U.S. population increases.
However, national, uniform, comparative data on the incidence and prevalence of and responses to EAMN are not available. This is because the response to EAMN has mainly occurred at the state and local levels—primarily through Adult Protective Services (APS) agencies, but also through local law enforcement, state Attorneys General, state healthcare licensing agencies, and state and local long-term care ombudsmen—and the federal role in defining and responding to EAMN has been limited. State and local data are difficult to combine across jurisdictions because of variation across many factors, including the states’ legal definitions of EAMN, reporting mechanisms for identifying cases, administrative structures for investigating and responding to reported cases, and systems for storing case information.
As one of the first steps toward the creation of an ongoing, national data collection on elder abuse, mistreatment, and neglect, BJS and its grantee, the Urban Institute (Urban), are conducting an assessment of administrative data collected by APS agencies. The project will document currently available administrative data about EAMN as reported to APS agencies and assess the feasibility of using those data to report on key indicators of victimization and criminal victimization. In particular, this project is concerned with distinguishing between criminal and non-criminal acts.
BJS chose to focus on administrative data collected by APS for several reasons. First, in the vast majority of states, APS agencies are the official organizations to which most initial reports of suspected abuse, neglect, or exploitation of elderly individuals are made. For these potential victims, APS agencies are the first responders. Second, many of the cases referred to APS fall into a grey area of victimization not captured by other statistical data collections, somewhere between the “dark figure” of unreported victimizations and official offenses known to law enforcement. Some percentage of APS referrals do not rise to the level of criminal victimization, but that percentage is not known because data have not been available to make that assessment. Similarly, the proportion of APS cases referred to the criminal justice system for further processing is not well understood, again because of a lack of data and because no comprehensive assessment exists of the different ways in which APS agencies engage with law enforcement and local prosecutors. Finally, it is important to understand how existing APS data systems could serve as an alternative to other sources, such as police records and victimization surveys, and provide a more complete picture of victimization for this often hard-to-reach population.
This request is for further developmental work: a pilot study focusing on APS agencies in the 50 states and Washington, DC. The goal of the study is to determine whether a national data collection examining elder abuse reported to APS is feasible. The need for these data has been acknowledged by a number of sources, including the Federal Elder Justice Coordinating Council1 established as part of the Elder Justice Act, the Government Accountability Office,2 the Senate Special Committee on Aging,3 and the National Research Council.4
The main objectives of the current EAMN project are to 1) determine, in collaboration with stakeholders and a diverse selection of experts across the field of elder justice, a set of core indicators on which APS staff should collect data for each case of suspected abuse reported to them; 2) conduct telephone interviews with state-level APS representatives in every state and the District of Columbia to determine the location(s), level of centralization, and coverage of APS administrative data in each state; 3) develop a taxonomy for counting incidents of elder abuse, distinguishing between acts that are criminal and non-criminal in nature; 4) assess existing APS administrative data on EAMN cases against the set of core indicators and gauge comparability across jurisdictions; and 5) develop a broad understanding of APS agencies’ current practices related to detecting, reporting, and collecting data on alleged cases of EAMN to facilitate data aggregation and interpretation.
The results from this pilot study will be used to inform national efforts to collect information from APS to produce key indicators of victimization. BJS plans to disseminate the results of this study in a joint report with the Urban Institute. This research and development report will describe what was learned about EAMN and APS data systems in the course of completing the five objectives listed above, including an analysis of the capacity of APS data systems to provide particular data elements. More specifically, the results of this developmental work will determine whether it is feasible for BJS to build a national data collection, with the goal of producing national estimates of EAMN, from the information currently available in APS data systems.
To date, objectives one, two, and three have been completed. A list of key indicator statistics was developed with stakeholders and experts in the field of elder justice. These indicators were developed in a June 2013 meeting with stakeholders from the federal interagency working group on elder abuse, who confirmed their importance for answering key substantive questions about the extent of EAMN in the United States. The key indicator statistics include statistics collected at multiple stages, or time points, in the lifecycle of an APS case: initial reports of suspected abuse, investigations opened, and cases substantiated.
Key indicators about initial reports of suspected abuse reflect all potential victimizations, indicators about investigations reflect all reports deemed appropriate for APS to pursue (based on case and/or jurisdictional criteria), and indicators about cases substantiated reflect those reports deemed to be abuse. It is necessary to delineate between data collected at the report and investigation stages because of how APS agencies gather data in practice. In some APS agencies, data on reports are gathered and maintained separately from data on investigations. Even when APS agencies track reports and investigations in an integrated data system, they collect limited information when receiving the initial report and gather richer data over the course of the investigation. As such, the assessment of what is available to construct the key indicators depends on which stage the key indicator reflects and whether a given type of information was collected at that stage. Therefore, if BJS is interested in describing the nature of reports of elder abuse, relatively little information may be available, whereas more comprehensive information may be available for describing the nature of investigations.
The semi-structured telephone interviews with state-level APS representatives in every state and the District of Columbia were used to determine the location(s), level of centralization, and coverage of APS administrative data in each state (Attachment A). Results of this work indicated that 42 states administer APS at the state level, seven states administer APS at the county level, one state reported a hybrid of state and county APS administration, and one state reported regional administration.
These interviews also indicated that most states had highly centralized APS data collection systems and used a single data system across their local jurisdictions to record information throughout the lifecycle of an APS case, from the initial report of suspected abuse through case findings. Given the high level of data centralization, it was determined that future studies could be conducted mainly with state-level APS respondents. However, future studies should also include local-level respondents from the five states where APS data collection systems are more decentralized: California (all 58 counties), Delaware (all 3 counties), Idaho (all 6 regions), New Jersey (all 21 counties), and New York (1 city, New York City, which maintains data independently from the rest of the state).
Developing a taxonomy for counting criminal and non-criminal acts will guide the pilot study’s assessment of the extent to which APS data systems can be used to generate key indicators of victimization and criminal victimization. The taxonomy presents a working definition of elder abuse so that research and statistical data may be collected in a uniform manner across states and localities with different legal and programmatic definitions of elder abuse (Attachment B).
BJS is requesting clearance under its generic clearance agreement (OMB Number 1121-0339) to complete objective four by administering a web-based, online survey to 140 APS agencies located across the 50 states and Washington, DC. The primary goal of this pilot study is to establish the feasibility of using administrative data from state and local APS agencies to report key indicator statistics about reported (i.e., alleged), investigated, and substantiated cases of EAMN.
The web-based survey will:
Compile detailed information on APS agencies’ data collection practices in 2015, including database structure, units of count, specific data elements collected, electronic data entry practices, and other pertinent information about APS administrative data. Results will help to identify which of the key indicator statistics APS agencies may be able to provide.
Collect contextual information about APS agencies’ definitions of abuse and scope of responsibility. With this information, BJS will assess the comparability of case types and investigative scope across agencies to gauge the extent to which data can be appropriately aggregated across agencies to develop national estimates of elder abuse victimization.
The web-based survey builds on information gained from previous developmental work for this study: telephone interviews with state APS representatives, conducted between December 2013 and February 2014 under the BJS generic clearance agreement (Attachment A), to define the universe of APS data collection entities and determine whether a sampling strategy would be needed. APS representatives from all 50 states and the District of Columbia provided information on the highest organizational level at which uniformly collected data on EAMN cases are maintained in each state and clarified for the project team the number of individual APS organizational units that have primary responsibility for collecting case data. That information was used to construct the universe of respondents for the online survey.
BJS learned that APS administrative data are centralized at the state level in most states and that the data include the case-level cohorts needed to construct key indicator statistics: initial reports of suspected abuse, investigations conducted and closed, and cases substantiated by APS. Based on this knowledge, BJS decided to administer the survey at the state level in states where all three cohorts of APS administrative data reside at the state level, even if decision-making about case responses occurs locally. In states where all three cohorts do not reside at the state level, the survey will be conducted at both the state and local levels. Information collected from the state APS agency will be critical to understanding statewide policies that govern the scope of local APS program activities and data collection.
BJS decided that the web-based survey should be administered at the state level in all 50 states and, in the four states with more decentralized data systems (California, Delaware, Idaho, and New Jersey), at the regional or county level as well. Additionally, New York City and Washington, DC will require separate surveys. The total number of respondent agencies, shown in Table 1 below, is estimated to be 140. The actual target number may change slightly if some counties consolidate their APS operations, as smaller counties sometimes do. When finalizing the roster of county and other local-level contacts, we will learn of any consolidations and adjust accordingly.
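As a cross-check, the estimated universe is consistent with the counts reported earlier in this memo (this is a reconstruction for illustration; Table 1 provides the authoritative breakdown):

$$50 \text{ states} + 1 \text{ (Washington, DC)} + 1 \text{ (New York City)} + 58 \text{ (CA counties)} + 3 \text{ (DE counties)} + 6 \text{ (ID regions)} + 21 \text{ (NJ counties)} = 140$$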
BJS concluded that a respondent universe of 140 APS data collection entities is sufficiently small to survey the full universe rather than sampling. Data collection from all 140 APS data collection entities will provide a complete, nationwide understanding of APS data collection capabilities, whereas sampling would introduce a degree of sampling error with no appreciable gains in efficiency. Developing a representative sample is especially challenging in this context because APS agencies differ on a number of important operational dimensions that have implications for their data capacity, such as jurisdiction over long-term care facilities and other institutional settings. In fact, this survey is designed to measure and document those operational and data capacity differences.
BJS will distribute and collect one survey per APS agency for each of the 140 agencies listed in Table 1. The appropriate contact within each state APS agency was identified during the previous developmental work for this study when the Urban Institute interviewed state APS representatives by telephone. In most cases, the individuals who participated in the telephone interviews confirmed that they would be the most knowledgeable regarding their agency’s APS data system and should be the contact for the web-based survey. Local-level respondents will be identified through the same state-level APS contacts who agreed to provide contact lists when we interviewed them by telephone. For example, respondents from the California state APS program agreed to provide a list of county APS contacts. As noted above, both state- and local-level contacts will be surveyed in those states requiring local-level interviews.
Table 1. Respondent Universe for Web-Based Survey
The full survey questionnaire is provided as Attachment C, and selected screenshots of the online version of the instrument are shown in Attachment D. The survey begins by collecting the respondent’s contact information and is then organized into three substantive sections.
Section A, About Your Agency’s Recordkeeping and Data Reporting Practices, asks about the units of count that each respondent’s APS recordkeeping system maintains; whether the agency maintains data electronically; and the agency’s data entry and quality assurance procedures. Questions to establish the units of recordkeeping and data reporting are of paramount importance. Stakeholders identified both person-level and case-level key indicator statistics as being important to the field. Responses to these questions will establish the extent to which APS agencies can generate statistical data at different levels of count. Questions about electronic data availability and data quality will inform BJS about agencies’ capacity to subset relevant elder victimizations from their overall caseload and report reliable statistics in keeping with how the key indicators are defined.
Section B, Information Gathered about Elder Abuse Reports and Investigations, asks about the individual data elements that each agency collects in its data system. Questions in this section focus on data elements collected at the three stages of the APS caseload: initial reports, investigations opened, and cases substantiated. Determining what information is available for each of these stages, or time points, will help BJS assess whether APS data systems can be used to generate the key indicator statistics of interest to federal stakeholders.
Respondents are presented with a list of the data elements needed to generate these key indicators and asked which pieces of information they gather electronically or on paper in the course of investigating reported EAMN. For each piece of information they report gathering, they are asked a follow-up question about when it is collected. (Note that the instrument incorporates skip pattern logic so that follow-up questions are only asked if they are applicable to a given respondent based on prior survey responses.) Collectively these questions address electronic data availability and the availability of fields needed to construct key indicators. The survey asks about the following domains of information:
Victim characteristics: Respondents are asked if their data systems collect information on victims’ personal identifiers; demographics; vulnerability and disability status; housing and living arrangements; and prior victimization history.
Alleged perpetrator characteristics: Respondents are asked if their data systems collect information about alleged perpetrators’ personal identifiers; demographics; relationship to the victim; vulnerability and disability status; and prior perpetration history.
Reporter characteristics: Respondents are asked about the source of the abuse report, including whether the alleged abuse was referred by the criminal justice system and APS’ decision-making on whether to open an abuse investigation.
Incident characteristics: Respondents are asked if their data systems collect information about the time and place of the suspected abuse; the general type of abuse alleged; specific acts committed against the victim; the severity of injuries sustained by the victim, including any need for medical care and/or financial losses; and case outcomes, including abuse substantiation, referral to the criminal justice system, and criminal justice system case outcomes, such as arrest, prosecution, and conviction.
Section C, Elder Abuse Definitions and APS Agency Responsibilities, collects information on the respondent agency’s definition of abuse and its scope of responsibilities. There is great diversity in how state laws define abuse, and states delineate investigative responsibility differently, so APS may investigate some but not all types of abuse. These operational differences will affect BJS’s ability to aggregate APS data across states in an “apples-to-apples” fashion. Responses to these questions will give BJS an understanding of which abuse categories consistently fall under the purview of APS agencies and therefore lend themselves to nationwide statistical data collection across the majority of states. For abuse categories covered by fewer states, BJS will need to develop alternative strategies for estimating the prevalence of reported abuse. Additionally, Section C asks about policies for referring abuse cases to law enforcement agencies. These questions address the extent to which APS and law enforcement data sources may overlap.
In preparation for the pilot study and to test the utility of the survey instrument, the Urban Institute pretested the online version of the survey with six APS agency respondents in December 2015. Pretest respondents included both state- and locally administered program types. Four state-level respondents (California, Florida, Idaho, and Minnesota) were selected from earlier telephone interview respondents who had expressed a high level of interest and willingness to assist with the project. Additionally, two county-level APS agencies from California, Sierra and Sacramento counties, participated because most local-level surveys will be conducted with county APS agencies in California (see Table 1).
This initial pretest served to gauge respondent burden, question language and ordering, and the functionality of the web-based survey instrument. Urban revised the online instrument in response to the following findings from the pretest.
Respondent burden. The average self-reported time for respondents to complete the pretest version of the survey was 45 minutes. Two respondents reported 25-30 minutes, two reported 45 minutes, one reported 60 minutes, and one declined to answer.5 Urban streamlined the instrument and deleted survey questions, as described below, to reduce the completion time to approximately 30 minutes.
Clarification on reference period. Survey instructions and questions were edited throughout the instrument to have respondents focus on data collection practices in 2015. For example, question A1 now specifies, “Did your agency focus exclusively on elder abuse between January 1 and December 31, 2015? If your agency went through substantial changes in 2015, please report on your practices during the majority of the year.”
Question language and ordering. Pretest respondents suggested changes to the wording of specific questions and response options. Substantive edits and consolidations made in response to these comments (beyond minor rewordings) are described below. These changes reduce respondent burden by streamlining and rephrasing questions so they are more intuitive and better reflect APS operational realities.
Section A, item 2: We added one question and one instruction to enable state-level respondents to report on the investigative and data collection activities of the local agencies they oversee. Some pretest respondents noted difficulty because their agencies do not investigate cases or collect data directly, but instead oversee other agencies’ investigations and serve as a data repository.
New question: “What was your agency’s role in investigating reports of suspected elder abuse in 2015?” Response options are: we conducted investigations; we had oversight of other agencies that conduct investigations; and we did both.
New instruction: “If your agency did not investigate cases directly, please answer questions about the investigation process from the perspective of the local office of your agency that directly investigated cases.”
Section B, items 1-15: We consolidated each item into two parts: a main question with expanded response options and a single follow-up that is asked only when applicable. In the pretested version, each item had consisted of three parts (a main question about whether data elements are gathered and two separate follow-ups about how and when they are stored), which caused confusion for pretest respondents.
The main question now asks: “What pieces of information about <domain> did your agency gather in the course of an investigation responding to suspected elder abuse in 2015? By ‘gathering,’ we mean collecting and recording information in any form, including on paper, in narrative form, or in a database field. Please respond ‘yes’ to all the items that were available on your data collection forms, even if they were not always consistently filled in.” This wording addresses several needs expressed by pretest respondents: (a) several respondents requested clarification of the term “gathering” and expanded response options for how data elements are stored; (b) one respondent assumed she should report only on those data elements that were structured and query-able in her system; (c) others asked to distinguish between electronic data stored in structured data fields and data stored in notes fields; and (d) respondents asked for clarification of data quality expectations, so we edited the question to establish BJS’ intent to document the capacity of APS systems to collect information. The revised response options are:
Yes, electronically in a structured data field
Yes, electronically as free text
Yes, electronically, but don’t know the field type
Yes, but on paper only
No, this is not recorded
A single follow-up question is asked about those data elements gathered by the respondent: “Following up, were any of the items you gather about <domain> collected as part of the initial report, before beginning an investigation?” Response options are yes or no.
Twenty sub-items originally asked as the follow-up to B15 were deleted. Pretest respondents noted these were not relevant to the initial report of suspected abuse.
Items B2 and C8 on victim vulnerability were edited to remove the adjective “serious” from descriptions of disability (e.g., “serious” difficulty walking, hearing, etc.). One pretest respondent indicated difficulty answering these questions because the meaning of “serious” was unclear; in practice, APS agencies record whether a client has difficulty in particular areas of daily living without subjective qualifiers. (Note: The original response options were drawn from the National Crime Victimization Survey, which collects self-reported information from individuals.)
Item C26 on referring substantiated cases to the criminal justice system was edited. Pretesters recommended adding a response category for “both the police and prosecutor’s office” to reflect agency practice.
Functionality of the web-based survey instrument. Urban added missing data options to each item to minimize data interpretation problems when the survey data are analyzed. Each item now has additional “don’t know,” “not applicable,” and “decline to answer” response options. This allows the survey software to require an entry on every item while still giving respondents the option to decline any question they are uncomfortable answering. In addition, this forced-answer design helps respondents move more quickly through the survey, because it allowed Urban to program additional skip patterns that bypass later questions when answers to earlier ones are not applicable or not known.
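To illustrate the revised item structure and skip logic described above, the following minimal sketch (not the actual Qualtrics program; the function and variable names are illustrative only) shows how responses to the two-part Section B items and the added missing-data options determine whether the follow-up question is displayed:

```python
# Illustrative sketch of the two-part Section B item logic described above.
# Response option labels are taken from the memo; the code itself is not
# drawn from the actual Qualtrics survey program.
GATHERED_OPTIONS = {
    "Yes, electronically in a structured data field",
    "Yes, electronically as free text",
    "Yes, electronically, but don't know the field type",
    "Yes, but on paper only",
}
SKIP_OPTIONS = {
    "No, this is not recorded",
    "Don't know",
    "Not applicable",
    "Decline to answer",
}

def show_followup(main_response: str) -> bool:
    """Display the follow-up ("collected as part of the initial report?")
    only when the respondent reports gathering the data element."""
    return main_response in GATHERED_OPTIONS
```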
Sampling is not needed for this online survey, as the total universe of APS agencies that collect uniform administrative data is 140 (see Table 1, above). All of these 140 agencies will be recruited for the web-based survey. Understanding the uniformity and comparability of APS case data across the U.S. necessitates collecting detailed information about APS administrative data from each of the 50 states and the District of Columbia.
BJS will conduct one survey per APS agency. The most knowledgeable individual in the agency was identified through earlier telephone interviews and will be asked to complete the survey; he or she may opt to delegate certain items to colleagues within the agency who may be better suited to provide the requested information. The study’s invitation letter, Attachment E-1, explains how respondents may share their login information with others in their agency.
The survey will be self-administered on the web. The survey questionnaire (Attachment C) has been programmed into Qualtrics software, a data collection tool that automates the skip patterns built into the instrument and collects discrete, categorical responses in addition to longer text responses. The frequent use of structured response categories throughout the instrument will minimize response time, and the software will be programmed so that respondents can skip questions that are not applicable, based on prior responses. The self-administered nature of the instrument will minimize the burden on respondents because it will allow them to complete the survey at a time of their choosing. Additionally, the software allows respondents to complete the survey in multiple sessions, and respondents have the discretion to delegate portions of the survey to others in their agencies, as they deem appropriate.
As noted earlier, specific state-level respondents have already been identified through the previous developmental work conducted for this study. These state-level respondents have agreed to provide contact information for the local-level surveys planned in states that require regional-, county-, and city-level data collection.
The Urban Institute will field the survey for approximately 8 to 10 weeks, using the following recruitment strategy, with periodic reminders to encourage survey completion (the overall contact schedule is summarized in the sketch following these steps). Urban has successfully used this strategy in other, similar projects to achieve high response rates.
To solicit participation for this pilot project, Urban Institute staff will first conduct outreach by mail using an initial, hard-copy invitation letter (Attachments E-1 and E-2) customized to each respondent. This letter describes the project purpose, gives a summary of the information to be collected through the survey, and provides a link to the online survey, along with a unique username for each respondent. The invitation letter also specifies the estimated time to complete the survey and that participation is voluntary, but emphasizes the importance of complete participation to benefit the field, and notes that respondents may delegate portions of the survey to others in their organization, as appropriate.
A follow-up email invitation (Attachment F) containing much of the same information will be sent to respondents within one week of the hard-copy letter to ensure that the letters have been received. The email will facilitate easy access by including a clickable link to the online survey along with the respondent’s unique username for logging in to the survey.
Email reminders to non-responders (Attachment G) will begin one week following the email invitation. These emails will be sent approximately weekly until the survey is completed or the respondent actively declines participation by notifying the project team. Each email will include a clickable link to the online survey along with the respondent’s unique username for logging in to the survey.
A postcard reminder (Attachment H) will be mailed to non-responders three weeks following the email invitation.
Telephone calls to non-responders will begin five weeks after the email invitation, using the script provided as Attachment I. Similar follow-up calls will be made approximately weekly until the survey is completed or the respondent actively declines participation. In the course of these telephone calls, project staff will provide assistance with the survey instrument as needed and requested. For example, respondents may wish to receive a PDF copy of the instrument for reference as they complete the online survey. In rare cases and at the request of the respondent, project staff may administer the survey by telephone and record the responses in Qualtrics.
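Taken together, the contact schedule described above can be summarized as follows (a simplified sketch; week offsets assume the hard-copy invitation letter mails in week 0, and the attachments cited above govern the actual materials and timing):

```python
# Consolidated sketch of the planned recruitment contact schedule.
# Week offsets are relative to the mailing of the hard-copy invitation letter.
CONTACT_SCHEDULE = [
    (0, "Hard-copy invitation letter mailed (Attachments E-1 and E-2)"),
    (1, "Email invitation with survey link and username (Attachment F)"),
    (2, "Weekly email reminders to non-responders begin (Attachment G)"),
    (4, "Postcard reminder mailed to non-responders (Attachment H)"),
    (6, "Weekly telephone follow-up calls to non-responders begin (Attachment I)"),
]

def contacts_started_by(week: int) -> list[str]:
    """Return the contact activities scheduled to have started by a given field week."""
    return [activity for start, activity in CONTACT_SCHEDULE if start <= week]
```

Reminders and calls continue until a respondent completes the survey or actively declines, within the approximately 8- to 10-week field period.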
BJS does not anticipate a need for routine follow-up with respondents once the survey is completed. On occasion, there may be a need to contact respondents for clarification, but the use of structured response categories and skip patterns within Qualtrics minimizes the need for data cleaning. While a high response rate of at least 80 percent is anticipated, a unit and item nonresponse bias analysis will be conducted after data collection closes.
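For reference, one standard approximation that such an analysis could draw on (an illustrative assumption; the memo does not prescribe a specific method) expresses the bias of an unadjusted respondent mean as

$$\operatorname{bias}(\bar{y}_r) \approx \left(1 - \frac{n_r}{n}\right)\left(\bar{y}_r - \bar{y}_{nr}\right),$$

where $n_r / n$ is the unit response rate and $\bar{y}_r$ and $\bar{y}_{nr}$ are the means of a characteristic among responding and nonresponding agencies, respectively. For unit nonresponse, the comparison would rely on frame characteristics known for all 140 agencies, such as state- versus county-administered programs.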
All data collection and recruitment protocols for this work have been approved by the Urban Institute’s Institutional Review Board (Attachment J).
The survey will be administered to 140 respondents. With the simplifications incorporated after the initial pretest, we estimate the average self-administration time at 30 minutes per respondent, for a total respondent burden of 70 hours. Project staff who took the revised survey for quality assurance purposes completed it in 25 and 26 minutes, respectively.
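The total burden estimate follows directly from these figures:

$$140 \text{ respondents} \times 30 \text{ minutes} = 4{,}200 \text{ minutes} = 70 \text{ hours}.$$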
Efforts to Identify Duplication
This pilot study focuses on improving current measures of criminal elder victimization in alignment with BJS' mission to collect and enhance national criminal justice statistics. A separate effort—currently being conducted by the Office of the Assistant Secretary for Planning and Evaluation and managed and funded by the Administration for Community Living, both in the Department of Health and Human Services—is developing a broader National Adult Maltreatment Reporting System (NAMRS). The NAMRS is substantially different from this project in that its scope of coverage is wider, including all vulnerable adults regardless of age (e.g., disabled individuals aged 18-59 as well as those over 60) and a substantial number of adults who are reported for self-neglect instead of abuse (i.e., unable to care for themselves but not abused, neglected, or exploited by another). Further, the NAMRS is geared toward understanding social service needs and related APS resource needs. By contrast, the proposed data collection seeks to assess APS data systems’ capacity to report specifically on elder victimization and intersections with the criminal justice system.
The survey is not intended to collect information about individuals or to ask for information that would be considered sensitive in nature. As such, the activities associated with this task are not considered human subjects research. The beginning of the survey contains an informed consent statement specifying that participation is voluntary and that respondents may decline to answer any and all questions and may stop their participation at any time.
Information collected from the survey will be stored on the Urban Institute’s computer network, behind Urban’s firewall. Because the survey elicits factual information about program policies and operations from state and local APS representatives in their professional capacities, Urban’s IRB has determined that the information is neither private nor sensitive.
Questions regarding any aspect of this project can be directed to:
Rachel Morgan, Ph.D.
Statistician
Bureau of Justice Statistics
U.S. Department of Justice
810 7th Street NW
Washington, DC 20531
Office Phone: 202-616-1707
E-Mail: Rachel.Morgan@ojp.usdoj.gov
Attachments
Attachment A: Key findings and recommendations from wave 1 telephone interviews with state-level Adult Protective Services (APS) representatives
Attachment B: What is elder abuse? A taxonomy for collecting criminal justice research and statistical data
Attachment C: Online survey of state and local APS data collection practices in 2015
Attachment D: Selected screenshots from web-based survey data collection tool
Attachment E-1: Initial invitation letter; Attachment E-2: One-page project description
Attachment F: Initial invitation email
Attachment G: Follow-up email reminder
Attachment H: Follow-up postcard reminder
Attachment I: Follow-up telephone call script
Attachment J: Letter of project approval from Urban Institute IRB
1 Department of Health and Human Services Elder Justice Coordinating Council. (2015). 2012-2014 Report to Congress. Retrieved from http://www.aoa.acl.gov/AoA_Programs/Elder_Rights/EJCC/docs/EJCC-2012-2014-report-to-congress.pdf
2 U.S. Government Accountability Office. (2011). Elder justice: Stronger federal leadership could enhance national response to elder abuse (GAO-11-208).
3 U.S. Senate Special Committee on Aging Hearing (2007). Abuse of our elders: How can we stop it. Washington, DC. Retrieved from http://babel.hathitrust.org/cgi/pt?id=pst.000063513912;view=1up;seq=2
4 National Research Council. (2003). Elder mistreatment: Abuse, neglect and exploitation in an aging America. Washington, D.C.: The National Academies Press. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/22812026
5 Unfortunately, the Qualtrics time stamps were not a reliable indicator of burden because the start time is based on a respondent’s initial login to the survey, but respondents are not required to complete the survey in one sitting, and may leave the survey window open as they attend to other responsibilities.