Qual Saf Health Care 2008;17:416–423. doi:10.1136/qshc.2007.024638
Error management
Adverse-event-reporting practices by US hospitals:
results of a national survey
D O Farley,1 A Haviland,1 S Champagne,2 A K Jain,3 J B Battles,4 W B Munier,4
J M Loeb2
See Commentary, p 400
1 RAND Corporation, Pittsburgh, Pennsylvania, USA; 2 The Joint Commission, Oakbrook Terrace, Illinois, USA; 3 RAND Corporation, Arlington, Virginia, USA; 4 Agency for Healthcare Research & Quality (AHRQ), Rockville, Maryland, USA

Correspondence to: Dr D O Farley, RAND Corporation, 4570 Fifth Avenue, Suite 600, Pittsburgh, PA 15213, USA; donna_farley@rand.org

Accepted 17 August 2008
ABSTRACT
Context: Little is known about hospitals' adverse-event-reporting systems, or how they use reported data to
improve practices. This information is needed to assess
effects of national patient-safety initiatives, including
implementation of the Patient Safety and Quality
Improvement Act of 2005 (PSQIA). This survey generated
baseline information on the characteristics of hospital
adverse-event-reporting systems and processes, for use
in assessing progress in improvements to reporting.
Methods: The Adverse Event Reporting Survey, developed by Westat, was administered from September 2005
through January 2006, using a mixed-mode (mail/
telephone) survey with a stratified random sample of
2050 non-federal US hospitals. Risk managers were the
respondents. An 81% response rate was obtained, for a
sample of 1652 completed surveys.
Results: Virtually all hospitals reported they have
centralised adverse-event-reporting systems, although
characteristics varied. Scores on four performance
indexes suggest that only 32% of hospitals have
established environments that support reporting, only
13% have broad staff involvement in reporting adverse
events, and 20–21% fully distribute and consider
summary reports on identified events. Because survey
responses are self-reported by risk managers, these may
be optimistic assessments of hospital performance.
Conclusions: Survey findings document the current
status of hospital adverse-event-reporting systems and
point to needed improvements in reporting processes.
PSQIA liability protections for hospitals reporting data to
patient-safety organisations should also help stimulate
improvements in hospitals’ internal reporting processes.
Other mechanisms that encourage hospitals to strengthen
their reporting systems, for example, strong patient-safety
programmes, also would be useful.
In its report, To Err Is Human: Building a Safer Health
System, the Institute of Medicine highlighted the
importance of adverse-event-reporting as a foundation for patient-safety improvement and identified
the fragmented nature of reporting as a significant
barrier to achieving improvements.1 Despite growing
activity to improve patient-safety reporting and
practices, little is documented systematically about
the extent to which individual healthcare organisations have systems for reporting errors and adverse
events, or how they use the reported data for actions
to implement safer practices.2 Errors are defined as
actions or inactions that lead to deviations from
intentions or expectations. Adverse events are
occurrences during clinical care that result in
physical or psychological harm to a patient or harm
to the mission of the organisation.
This paper reports results of a national survey of
hospitals that characterises the extent to which US
hospitals have adverse-event reporting systems and
how they use them. The survey was administered
collaboratively by the RAND Corporation and the
Joint Commission.
Because standardised data on reported adverse
events have been lacking, it has not been possible
to detect and assess safety issues at the national
level or to track trends over time.1 3 With enactment of the Patient Safety and Quality
Improvement Act of 2005 (PSQIA), the US
Congress established a structure and process
intended to reduce the fragmentation of information on reported patient-safety events and issues.4
The PSQIA provides for national certification of
patient-safety organisations (PSOs), to which
healthcare providers can report data and other
patient-safety information, and it establishes confidentiality and protection from legal discovery for
information reported by participating providers.
No formal models for hospital adverse-event-reporting systems have been published, but many
sources identify the essential components of an
effective system. A hospital’s reporting system
should be one element of a cohesive patient-safety
programme that includes identification of errors
and occurrences through reporting, and establishment of patient-safety infrastructure, processes
and climate that support reduction in adverse
events.5–9 A reporting system should be able to
capture both adverse events and near misses, define
adverse events precisely to prevent under-reporting
or misperceptions, and link errors to patient and
team characteristics.6 10 11 The system also should
be linked to organisational leaders who can act on
reports.6 12 A broad range of staff throughout the
hospital should participate in reporting, with
confidentiality or anonymity provided for those
who report occurrences—preferably confidentiality
to allow discussion of occurrences with the
reporting persons.6 8 13 14
These principles also apply to external adverse-event-reporting systems. The World Health
Organization established guidelines that identify
the characteristics of successful adverse-event-reporting systems.15 Such systems should be non-punitive, confidential, independent, analytically
capable, systems-oriented and responsive in developing solutions. Several countries have national
reporting systems with many of these features.
The NHS in England and Wales, the Netherlands,
Slovenia and Australia have voluntary systems,
and the Czech Republic, Denmark, Ireland and
Sweden have mandatory systems.9 15–17
Funded by the Agency for Healthcare Research and Quality
(AHRQ), the RAND Corporation and the Joint Commission
administered the Hospital Adverse Event Reporting Survey
(AERS) for a national sample of hospitals, to establish an
information base on the characteristics and use of reporting
systems operated by US hospitals. The goal was to establish
estimates of the percentage of hospitals that have such systems,
the status of reporting practices, and how information on
reported occurrences is disseminated and used for practice
improvement. The survey results would establish baseline
information for two policy-related purposes—to enable tracking
of trends in improvements for adverse-event-reporting practices
across the country and to assess effects of implementation of
the PSQIA on hospitals’ internal reporting processes.
To reduce adverse events for hospital patients, hospitals need
to have both effective reporting systems that identify risks and
hazards in their systems and effective performance improvement processes that act on reported information. This paper
reports survey results on the first of these steps, estimating the
extent to which hospitals currently collect and disseminate the
occurrence data needed to inform effective performance
improvement. Drawing from the published information
described above about the features of effective reporting
systems, we identified four system components that should
be in place for effective operation of hospital adverse-event
reporting, which were used to frame our analysis:
- a supportive environment that protects the privacy of staff who report occurrences;
- broad reporting to the system by a range of types of staff;
- timely distribution of summary reports that document reported occurrences for use in action strategies to prevent future adverse events from occurring; and
- senior-level review and discussion of summary reports by key hospital departments and committees for policy decisions and development of action strategies.
DESIGN AND METHODS
Westat developed and pilot-tested the AERS questionnaire for
the US DHHS Quality Interagency Coordination Task Force,
including assessment of the need to collect data from one or
more types of personnel to obtain valid and reliable results.18
Questions covered in the survey included whether hospitals
collect information on adverse events, what information is
collected, who reports occurrences, how their privacy is
protected and uses of the data collected.
In testing the survey, Westat did cognitive interviews with
risk managers and department heads, which guided terminology, response options and several aspects of survey design. A
draft instrument was reviewed by American Hospital
Association staff, resulting in substantive revisions. Test results
suggested that respondents understood the questions being
asked, and the questions obtained the desired information.
Based on field test data collected from hospital risk managers
and up to six department heads (eg, nursing, medicine,
laboratory), Westat found that most of the adverse-event
reports are sent to the risk manager, although many are not.18
Westat concluded that a survey of the risk managers could
provide a relatively complete picture of adverse event-reporting
systems in hospitals, … focusing on the main reporting vehicle
for the hospital, describing reporting for the majority of adverse
events, … [and] would also give a picture of the types of events
that are not reported to their systems.
Where more detailed information on reporting patterns and
practices might be needed, these results can be supplemented
with departmental manager surveys.
Our goal was to understand the status of hospitals’ main
vehicles for reporting adverse events. Therefore, based on
Westat’s pilot test results, the AERS survey questionnaire for
risk managers was used with minor modifications made to
improve clarity and data completeness. Changes made to a
small number of questions on the Westat survey included
editing changes to clarify terminology or wording, adding
response options to obtain more complete data, reordering
response options to improve logic flow and adding open-ended
response options for two items. In addition, one question was
deleted that collected duplicative information, and two new
questions were added about whether the hospital had a patient-safety programme.
We administered the AERS to risk managers at a nationally
representative sample of 2050 non-federal hospitals in the US from September 2005 to January 2006. The hospital risk manager to
be surveyed was identified by initial phone contact to each
hospital in the sample. The survey was mailed to participants,
followed by telephone follow-up interviews for those who did
not complete the mail survey.
The sampling frame consisted of 5517 non-federal hospitals in
the 2003 database of the American Hospital Association,
excluding those in southern portions of Louisiana and
Mississippi. Hurricane Katrina occurred at the time we went
into the field for survey data collection, affecting hospitals in
those areas. We dropped 67 hospitals in southern Louisiana and
Mississippi from our original sample and replaced them with
additional randomly sampled hospitals in the same strata.
(Hospitals dropped were those in zip codes beginning with 700–
708 and 390–397.) The sample is thus representative of non-federal hospitals nationally, excluding these regions. The sample
was stratified by Joint Commission accreditation status,
hospital ownership and staffed bed size, which also yielded
good representation on teaching, urban/rural and multihospital
system status. (The Joint Commission performs voluntary
accreditations for hospitals and other healthcare organisations
across the US, and Joint Commission accreditation has become
a standard for participation in many health-insurance programmes.)
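A short sketch may help picture the sampling step. The code below is illustrative only and is not the study's sampling code: the file name, column names and the per-stratum sample size are assumptions, but the logic mirrors the approach described above (drop the Katrina-affected zip codes, then sample at random within each accreditation-by-ownership-by-bed-size stratum).

```python
# Illustrative sketch of the stratified sampling step (assumed file and column
# names, not the study's actual code).
import pandas as pd

frame = pd.read_csv("aha_2003_hospitals.csv")  # hypothetical AHA sampling frame

# Exclude hospitals in zip codes beginning with 700-708 or 390-397
# (southern Louisiana and Mississippi, affected by Hurricane Katrina).
zip3 = frame["zipcode"].astype(str).str[:3].astype(int)
frame = frame[~(zip3.between(700, 708) | zip3.between(390, 397))]

# Draw hospitals at random within each stratum defined by accreditation,
# ownership and staffed bed size; the per-stratum size is a placeholder.
N_PER_STRATUM = 25
sample = (
    frame.groupby(["jc_accredited", "ownership", "bed_size_class"], group_keys=False)
         .apply(lambda g: g.sample(n=min(N_PER_STRATUM, len(g)), random_state=0))
)
print(len(sample), "hospitals sampled")
```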
We established indexes as summary measures of hospitals’
performance on the four components identified for an effective
adverse-event-reporting system. Each index was based on data
from relevant survey questions (table 1). For the components on
supportive environment and on reporting by a range of staff, we
established indexes based on two survey questions each. For the
supportive environment component, a hospital was given one
point if it provides for anonymous reporting for all reporters and
one point if it always keeps identity private for reporters who
identify themselves (on three-point scales of all, some, none).
For the index on range of staff reporting, a hospital was given
one point if it reported that at least some of its reports came
from physicians, and one point if it reported that at least some
reports were submitted by technicians, therapists, pharmacy
staff or other staff (on five-point scales of all to none).
Reporting by nurses was not included because survey results
showed that nurses were the predominant reporters for a large
share of the hospitals.
The other two indexes address the distribution and discussion
of summary reports on reported occurrences within the
hospital. The index for timely distribution of reports is based
on responses to three survey questions. A hospital was given
one point if it distributes summary reports within the hospital
(yes/no response), one point if it produces summary reports on
a monthly basis or more frequently (from a four-point scale of
weekly, monthly, quarterly and annually), and one point if
reports are distributed within 2 weeks after the end of the
reporting period (from a five-point scale of less than 1 week to
2 months or more).
The index for senior-level review and discussion of reports
by key hospital departments and committees is based on
responses to two survey questions. A hospital was given one
point if it always provides reports to all of three key
departments: hospital administration, nursing department
and medical administration (five-point scale of always to
never, conditional on having the department). It also was
given one point if it reported that adverse events are discussed
at both the hospital board or board committee and the medical
executive committee (yes/no response, conditional on having
the committee).
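Because the four indexes are simple point counts over a handful of survey items, their construction can be summarised in a few lines of code. The sketch below is illustrative only; the field names and response codings are our own assumptions, not the survey's actual item names.

```python
# Illustrative computation of the four reporting-performance index scores for
# one hospital's survey responses (assumed field names and response codings).

def score_hospital(resp: dict) -> dict:
    """Return the four index scores for one hospital; each defaults to 0."""
    scores = {
        "supportive_environment": 0,   # 0-2 points
        "staff_reporting": 0,          # 0-2 points
        "timely_distribution": 0,      # 0-3 points
        "senior_review": 0,            # 0-2 points
    }

    # Supportive environment: anonymous reporting for all reporters, and
    # identity always kept private for reporters who identify themselves.
    if resp.get("anonymous_reporting") == "all":
        scores["supportive_environment"] += 1
    if resp.get("identity_kept_private") == "always":
        scores["supportive_environment"] += 1

    # Reporting by a range of staff: at least some reports from physicians,
    # and at least some from technicians, therapists, pharmacy or other staff.
    if resp.get("physician_reports") in {"some", "most", "all"}:
        scores["staff_reporting"] += 1
    if resp.get("other_staff_reports") in {"some", "most", "all"}:
        scores["staff_reporting"] += 1

    # Timely distribution: distributes summary reports, produces them at least
    # monthly, and distributes them within 2 weeks of the reporting period.
    if resp.get("distributes_reports") == "yes":
        scores["timely_distribution"] += 1
    if resp.get("report_frequency") in {"weekly", "monthly"}:
        scores["timely_distribution"] += 1
    if resp.get("weeks_to_distribute", 99) <= 2:
        scores["timely_distribution"] += 1

    # Senior-level review: always reports to all three key departments, and
    # discusses events at both the board (or board committee) and the medical
    # executive committee.
    departments = ("administration", "nursing", "medical_administration")
    if all(resp.get(f"reports_to_{d}") == "always" for d in departments):
        scores["senior_review"] += 1
    if resp.get("board_discusses") == "yes" and resp.get("med_exec_discusses") == "yes":
        scores["senior_review"] += 1

    return scores
```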
Non-response weights were created to realign the sample
characteristics with the target population, and these weights
were used in all the analyses performed. We first calculated
descriptive statistics of the sample characteristics and estimated
distributions of hospitals on the performance indexes. Then, we
performed descriptive analyses for individual components of the
indexes, and we estimated standard logistic regression models to
assess how hospital characteristics were associated with specific
aspects of reporting performance.
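For the modelling step just described, a minimal sketch of one weighted logistic regression is shown below. It assumes a hypothetical analysis file and column names (including a nonresponse_wt column standing in for the non-response weights); it is not the authors' code, and design-based standard errors would require survey-specific methods beyond this sketch.

```python
# Minimal sketch of a weighted logistic regression relating hospital
# characteristics to one reporting-performance outcome (assumed data file and
# column names, not the study's actual code).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("aers_analysis_file.csv")  # hypothetical survey analysis file

# Outcome: 1 if the hospital always allows anonymous reporting, else 0.
y = df["always_anonymous"]

# Predictors: dummy-coded hospital characteristics (0-74 beds and non-profit
# ownership serve as the reference categories, as in the tables).
X = sm.add_constant(df[[
    "beds_75_199", "beds_200_plus", "cah", "rural",
    "for_profit", "government", "teaching",
    "jc_accredited", "computer_only", "safety_programme",
]])

# freq_weights applies the non-response weights to the likelihood; this gives
# weighted point estimates but not survey-adjusted standard errors.
model = sm.GLM(y, X, family=sm.families.Binomial(),
               freq_weights=df["nonresponse_wt"])
result = model.fit()

print(result.summary())          # coefficients on the log-odds scale
print(np.exp(result.params))     # odds ratios, as presented in tables 5, 7 and 9
```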
Hospital characteristics included in these analyses were
accreditation status, bed size, ownership, teaching, rural
location, existence of a patient-safety programme and status
as a critical access hospital (CAH). Because CAHs differ from
other hospitals by their smaller size and more limited services,
they may differ in their adverse-event-reporting systems and
practices. To qualify for designation as a CAH, a hospital has
to (1) be in a state with a State Flex Program, (2) be in a rural
area or be treated as rural under a special CAH provision, (3)
provide 24-hour emergency care services using either on-site or
on-call staff, (4) provide no more than 25 inpatient beds, (5)
have an average length of stay of 96 h or less, and (6) be either
more than 35 miles from a hospital or another CAH or more
than 15 miles in areas with mountainous terrain or only
secondary roads (other exceptions provided). Risk managers
were asked in the survey if the hospital had in place a
comprehensive patient-safety programme. We did not attempt
to obtain additional detail on the characteristics of these
programmes, because they are complex to profile effectively,19
and the additional survey items required to do so would
increase respondent burden.
RESULTS
Characteristics of hospitals surveyed
Of the 2050 hospitals in the sample, 1652 completed the survey, for
an overall survey response rate of 81%. The characteristics of the
1652 hospitals that completed surveys reflect those of the larger
hospital population, as shown by the small differences between the
unweighted and weighted distributions of hospitals in the sample
(table 2). Therefore, although these weights are used in the
analyses presented here, they have a minor effect on the results.
The survey sample included the full range of hospital types. Of
the total sample, 63% were general medical-surgical hospitals, and
19% were CAHs (320 hospitals). Only 72% of the hospitals in the
sample were Joint Commission-accredited. Of those that were not
accredited (466 hospitals), more than half (57%) were CAHs. The
remaining 43% of the hospitals without accreditation tended to
be rural (50%), small in size (70% have fewer than 75 beds) or
specialty hospitals (32%). According to Joint Commission staff, many small, rural hospitals, including CAHs, choose not to seek accreditation because the accreditation survey process is too costly, because they do not need the competitive edge of accreditation where there is little competition in rural areas, and because they feel the scope of the Joint Commission standards exceeds the range of services that they provide. For the full survey sample, 87%
reported they have a patient-safety programme.
Types of reporting systems
All but a small percentage of the risk managers reported that
their hospitals had a centralised adverse-event-reporting system
(table 3), with the types of systems differing between the non-CAHs and CAHs. An estimated 75% of the non-CAHs reported
they used both paper and computer systems, and another 14.2%
used computer-only systems, whereas 39.5% of the CAHs
reported using paper-only systems.
We found strong consistency among hospitals regarding
many of the collected data elements. Virtually all the hospitals’
systems had the capability to record type, place and time of
occurrences, and all but a small percentage can document
patient demographics, needed follow-up treatment, action
taken and personnel involved (table 4). However, only 82% of
the hospitals reported that their systems could collect data on
the patient’s condition before and after an occurrence, and only
79% collected data on severity of patient harm.
Features of a well-performing hospital adverse-event-reporting
process
The four performance indexes summarise the current status of hospitals' reporting systems.
Table 1 Composition of hospital reporting performance indexes

Index (index values*)                                           Survey items in the index
Supportive environment (0, 1, 2)                                Provides for anonymous reporting for all reporters
                                                                Always keeps identity private for reporters who identify themselves
Reporting by a range of staff (0, 1, 2)                         At least some of its reports came from physicians
                                                                At least some reports were submitted by technicians, therapists, pharmacy staff or other staff
Timely distribution of reports (0, 1, 2, 3)                     Distributes summary reports within the hospital
                                                                Produces summary reports on a monthly basis or more frequently
                                                                Distributes reports within 2 weeks after the end of the reporting period
Review of reports by key departments and committees (0, 1, 2)   Always provides reports to all of hospital administration, nursing department and medical administration
                                                                Discusses adverse events at both the hospital board or board committee and the medical executive committee

*Default value = 0 for all indexes.
Table 2 Characteristics of the hospitals surveyed

                                         No      Unweighted %   Weighted %
Bed size
  0–74 beds                              766     46             45
  75–199 beds                            484     29             30
  200+ beds                              402     24             25
Ownership
  Not-for-profit                         959     58             57
  For-profit                             280     17             19
  Government                             413     25             24
Teaching hospital                        385     23             24
JC-accredited                            1186    72             72
Hospital type
  General medical/surgical (non-CAH)     1046    63             63
  Critical access hospital*              320     19             19
  Other†                                 286     17             18

*Note that of the 320 critical access hospitals (CAH), 317 are "General medical/surgical" and three are "Other."
†This group includes psychiatric, rehabilitation, children's care, other specialty care (non-CAH), acute long-term care and other types of hospitals.
JC, Joint Commission.
Only small percentages of hospitals had the maximum score for each of the four indexes (a
supportive environment, types of staff reporting, timely
reporting and reporting to departments or committees) (figs 1,
2). For the supportive environment and timely reporting
indexes, hospitals were somewhat evenly distributed across
the scores.
For the index on type of staff reporting, 69% of hospitals had
index scores of one point, suggesting that occurrences in their
hospitals were likely to be reported by either physicians or other
staff, but not both. A similar pattern is found for reporting to
high-level departments and committees, indicating that their
occurrence reports were being considered by either internal
departments or committees, but not both. Data were missing for 23% of hospitals on the use of summary reports with hospital departments and committees, which may indicate that actual performance is less positive than indicated by the index scores.
Supportive hospital environment for reporting
Risk managers were asked if hospital policy provided for
anonymous reporting or keeping a reporter's identity private if
reported non-anonymously. An estimated 47 (SD 2.4)% of the
hospitals always allow for anonymous reporting, and 29 (2.2)%
never allow for it. An estimated 8 (0.8)% of hospitals overall
never keep reporters’ identities private once identities are
known. CAHs are more likely than other hospitals to keep
reporters’ identities private (fig 3) (x2, p,0.001).
Logistic regression models assessed which hospital characteristics were associated with each of the two supportive
environment components (table 5).
Table 4 Types of data that hospital adverse-event-reporting systems are designed to collect

Data element                          Percentage having the data element
Type of occurrence                    100
Place of occurrence                   100
Time of occurrence                    99
Patient demographics                  95
If any action was taken               94
Needed follow-up treatment            94
Personnel involved                    91
Contributing factors                  89
Needed administrative follow-up       85
Condition before/after event          82
Severity of harm to patient           79
Patient's medical history             58
Other information                     47

The number of non-responses ranged from a maximum of 15 to a minimum of 2.
A hospital was more likely to both allow anonymous reporting and keep reporters'
identities private if it had a computer-only reporting system
or had a patient-safety programme.
Small hospitals, for-profit hospitals and government-owned
hospitals were less likely to always allow anonymous reporting,
but these characteristics did not affect reporters’ privacy
protection. CAHs were more likely to keep identity private,
but this status did not affect hospital policies on anonymous
reporting. Teaching hospitals were more likely to always allow
anonymous reporting, but were less likely to keep reporters’
identities private.
Types of staff reporting adverse events
The risk managers were asked to estimate the shares of staff
who submitted reports to their systems, with responses of all,
most, some, a few or none for each staff type. Almost all risk
managers reported that nursing staff submit all or most
occurrence reports (table 6). Pharmacy staff, technicians and
therapists were identified by more than half the hospitals as
submitting some of the occurrence reports. More than 80% of
the hospitals estimated that attending MDs submit only a few
of the reports.
We estimated a logistic regression model to assess which
hospital characteristics were associated with the extent to
which attending physicians submit reports. The dichotomous
dependent variable for the models was given a value = 1 if a
hospital risk manager responded "some," "most" or "all" to the
survey question about the share of adverse-event reports
submitted by physicians. We found that attending physicians
at larger hospitals and at hospitals with patient-safety
programmes were more likely to submit occurrence reports to
an adverse-event-reporting system (table 7).
Table 3 Percentage of hospitals that have reporting systems

Questions on systems in place                            Non-CAH             CAH
Have a centralised adverse-event-reporting system        98.2 (n = 1329)     94.7 (n = 320)
For those with reporting systems, type of system         (n = 1287)          (n = 301)
  Paper only                                             11.3                39.5
  Paper and computer                                     74.5                56.8
  Computer only                                          14.2                3.7

Response rates by question and hospital type: non-critical access hospital (non-CAH): 99.8% (1329/1332) and 96.6% (1287/1332); critical access hospital (CAH): 100% (320/320) and 94.1% (301/320).
Figure 1 Distribution of hospitals for supportive environment for
reporting and types of staff reporting. *n = 1578 with 5 per cent
missing. **n = 1518 with 8 per cent missing.
Figure 2 Distribution of hospitals for timely reporting and reporting to
departments and committees. *n = 1522 with 8 per cent missing.
**n = 1267 with 23 per cent missing.
Distribution of reports regarding adverse events reported into the system
Virtually all the hospitals said they produce summary reports
with occurrence data, but only 71 (2.3)% of them distribute
these reports within the hospital (table 8). For those hospitals
that distribute reports, all but a small percentage distribute
them on either a monthly or quarterly basis. The hospitals
varied in how long it took them to produce reports after the end
of a reporting period, ranging from 2 weeks to longer than
1 month. The CAHs differed from the other hospitals, with a
smaller percentage of CAHs reporting they distributed reports
(62% of CAHs vs 73% of other hospitals (p<0.001)).
Discussion of adverse-event reports with key hospital
committees and departments
Risk managers were asked whether adverse events were
discussed in specific committees and the frequency with which
reports were provided to specific hospital departments. Our
analysis focused on reporting to the three key hospital
departments of senior administration, nursing and medical
administration. We also identified two committees as important ones to receive and discuss information about adverse
events—the hospital board or board committee and the medical
executive committee. Another important high-level committee
is the senior-management committee. We did not include this
committee in the index measure because a large percentage of
respondents indicated that they did not have this committee,
resulting in a large amount of missing data for this question.
Our estimates suggest that only 25 (2.2)% of all hospitals
distribute adverse-event reports to all three of the key
departments. In a logistic regression model (not presented),
hospital characteristics explained little of the variance across
hospitals in the likelihood of their distributing occurrence
reports to all three departments (r2 = 0.02). The only significant
findings were that CAHs were significantly less likely to
distribute reports to these departments (OR = 0.64 (0.26)), as
were hospitals with computer-only reporting systems
(OR = 0.56 (0.21)).
Reported adverse events were discussed with both the board
and medical executive committees by 73 (2.3)% of the hospitals
(table 9). Logistic regression results suggest that for-profit
hospitals were more likely than not-for-profit hospitals to
discuss adverse events with both committees, and government-owned hospitals were less likely to do so. Hospitals with patient-safety programmes were more likely to discuss adverse events with these committees, whereas CAHs, teaching hospitals and hospitals with computer-only reporting systems were less likely to do so.
DISCUSSION
As patient safety became a priority for hospitals in many
countries, there was general awareness among providers and
policy makers that hospitals’ adverse-event-reporting activities
needed strengthening, but data were not available to confirm
the need or guide action. These survey results document the
need to strengthen reporting processes in US hospitals and also
highlight priorities for action, and they establish baseline data
for future monitoring of improvement progress.
The large percentage of US hospitals that reported having
centralised adverse-event-reporting systems was a positive
finding, although the nature of their systems varied widely.
Our results suggest that hospitals’ processes for reporting
adverse events and acting upon this information need to be
strengthened. Small percentages of hospitals scored highly on
each of the four performance indexes, indicating that many
hospitals had not established environments that protect privacy
to support reporting, were incomplete in reporting adverse
events, or were not fully distributing and working with
summary reports on events identified in their systems.
These survey results profile strengths and weaknesses of existing US hospital adverse-event-reporting systems, highlighting where actions are needed to improve them. A variety
of factors can affect the usability of these systems, however,
which cannot be captured readily in a national survey of this
type. For example, reporting performance could be affected by
the technical integrity of the system, adequacy of staff training
on reporting methods or consistency in employing effective
reporting processes. Additional, more detailed assessments of
hospital reporting systems are advisable, to identify actionable
issues that can be corrected through performance-improvement
interventions.
The results of this national survey raise questions regarding
how the experiences of US hospitals compare with those in
other countries, and how they can learn from each other. The
specifics of the reporting status for US hospitals may differ from
those in other countries. However, many issues likely are
shared—for example, the need for broader reporting by
physicians and variation in dissemination of information on reported events.
Table 6 Types of staff most likely to submit reports of adverse events

                          Percentage of hospitals
Type of staff             All/most      Some      A few/none
Nursing staff             96            4         <1
Other staff               38            17        45
Pharmacy staff            8             56        36
Techs/therapists          3             54        43
MDs in training           2             15        83
Attending MDs             1             13        86
Other practitioners       1             18        81

The number of non-responses ranged from a maximum of 24 to a minimum of 5.
Figure 3 Distribution of hospitals by policies for anonymous reporting
and keeping reporting person’s identity private. CAH, critical access
hospital.
In particular, comparisons with experiences of
hospitals in countries with national adverse-event reporting
could reveal possible effects of those external systems on
hospitals’ internal reporting systems.15 20
Our finding of low participation in adverse-event reporting by physicians is consistent with other studies in the US and other countries.12 22–25 Reasons identified for physician reluctance to participate in reporting include risk of liability exposure
or professional embarrassment, burdensome reporting methods,
time required for reporting, perceptions of the clinical import
of adverse events and lack of sense of ownership in the
process.9 26–28 Physician participation may be higher than
observed, however, if they are asking other staff (eg, nurses)
to report identified adverse events, rather than doing it
themselves. More work is needed to clarify these issues and
seek solutions to enhance physician reporting.
Other research has found that hospital leaders are concerned
that external adverse-event reporting could increase their legal
liability and increase lawsuits,21 which might also diminish their
commitment to internal reporting. The implementation of
PSOs, under PSQIA provisions, might help to alleviate these
concerns and stimulate internal reporting activities by hospitals.
The wide variation in hospitals’ dissemination of summary
reports generated by adverse-event-reporting systems raises
questions about the effectiveness of follow-up by hospitals on
reported occurrences, especially the finding that almost 30% of
the hospitals that generate summary reports state that they do
not distribute them at all within the hospital. Such issues limit
the information available to hospital decision-makers about
patient-safety issues, which in turn reduces the likelihood that
hospitals will undertake actions to improve practices.
Hospitals with established patient-safety programmes performed better in a variety of aspects of adverse-event-reporting
processes. Our findings are consistent with other research
showing wide variation across hospitals in the adoption of
patient-safety systems.19
The survey finding that many hospitals did not have
supportive reporting environments is consistent with data from
the Hospital Survey on Patient Safety Culture (HSOPS) benchmark database. The "non-punitive response to error" composite
had the lowest average percentage positive response (43% and
44% for 2007 and 2008, respectively) among the hospitals
submitting HSOPS data to the database. This composite
addresses the extent to which staff feel that their mistakes
and event reports are not held against them and that mistakes
are not kept in their personnel file.29 30
Table 5 Factors associated with hospital privacy policies for those who report adverse events

                                      Always anonymous reporting           Always keep identity private
No with affirmative response
(total no in analysis sample)         745 (n = 1560)                       897 (n = 1507)

                                      OR (95% CI)            p Value       OR (95% CI)            p Value
Bed size
  0–74 beds                           –                                    –
  75–199 beds                         1.38 (1.07 to 1.77)    0.013         0.93 (0.72 to 1.20)    0.560
  200+ beds                           1.35 (1.00 to 1.83)    0.048         0.83 (0.61 to 1.12)    0.219
CAH status                            1.24 (0.90 to 1.70)    0.190         1.60 (1.14 to 2.25)    0.006
Rural location                        1.06 (0.84 to 1.34)    0.602         1.25 (0.99 to 1.59)    0.066
Ownership
  Non-profit                          –                                    –
  For-profit                          0.43 (0.33 to 0.56)    <0.001        1.18 (0.91 to 1.53)    0.224
  Government                          0.74 (0.59 to 0.93)    0.010         1.11 (0.88 to 1.41)    0.386
Teaching hospital                     1.28 (1.01 to 1.62)    0.042         0.77 (0.61 to 0.98)    0.033
JC-accredited                         1.09 (0.83 to 1.43)    0.538         0.84 (0.63 to 1.11)    0.220
Computer-only reporting system        1.44 (1.09 to 1.92)    0.011         1.83 (1.35 to 2.48)    <0.001
Patient-safety programme              1.36 (1.03 to 1.79)    0.031         1.33 (1.00 to 1.78)    0.049
Adjusted R2                           0.06                                 0.06
c Statistic                           0.62                                 0.62
Hosmer–Lemeshow p value               0.012                                0.133

CAH, critical access hospital; JC, Joint Commission.
Table 7 Factors associated with reporting of adverse events by physicians

                                      At least some physicians report
No with affirmative response
(total no in analysis sample)         202 (n = 1448)

                                      OR (95% CI)            p Value
Bed size
  0–74 beds                           –
  75–199 beds                         1.57 (1.07 to 2.31)    0.021
  200+ beds                           2.07 (1.32 to 3.24)    0.001
CAH status                            0.59 (0.33 to 1.04)    0.070
Rural location                        1.39 (0.99 to 1.95)    0.054
Ownership
  Non-profit                          –
  For-profit                          0.70 (0.47 to 1.03)    0.072
  Government                          0.77 (0.53 to 1.11)    0.157
Teaching hospital                     1.14 (0.82 to 1.59)    0.430
JC-accredited                         1.07 (0.69 to 1.65)    0.776
Computer-only reporting system        0.90 (0.61 to 1.34)    0.605
Patient-safety programme              1.77 (1.06 to 2.93)    0.028
Adjusted R2                           0.05
c Statistic                           0.63
Hosmer–Lemeshow p value               0.322

CAH, critical access hospital; JC, Joint Commission.

Table 8 Distribution of hospitals by dissemination of adverse-event report information within the hospitals

                                                   No responding        Percentage of
                                                   (response rate)      responses
Produce reports of occurrence data                 1596 (97%)           99
Distribute reports within the hospital             1572 (95%)           71
Frequency of reporting                             1551 (94%)
  At least monthly                                                      51
  Quarterly                                                             37
  Other frequencies*                                                    12
Time to produce reports after end of
reporting period                                   1161 (70%)
  Within 2 weeks                                                        43
  2 weeks to 1 month                                                    30
  Longer than a month                                                   27

*Some hospitals report at multiple frequencies; others at frequencies not listed above.
As stated above, this paper focuses on the first of two steps required to reduce adverse events for hospital patients—the nature and use of hospitals' internal adverse-event-reporting systems. The survey also collected data on the types of actions taken by hospitals in response to reported events (the second step), which are not reported here. Hospitals varied widely in the extent to which they used information on reported events for a variety of actions, for example, analysis of root causes, training of staff or performance-improvement actions. Additional work is needed to document the effectiveness of hospital actions in this phase of the process, and how they are influenced by the usability of the reporting systems used.

Several study limitations also merit consideration. Because the survey data are self-reported by hospital risk managers, often based solely on their perceptions without supporting data, these results may be optimistic estimates of the performance of hospital reporting systems. Due to the effects of Hurricane Katrina, our final sample is representative of hospitals in all of the United States except southern Louisiana and southern Mississippi, rather than the entire country. Given the size of the sample and the consistency of responses, it is not likely that results would differ for a full national sample.
CONCLUSIONS
Findings from this hospital adverse-event-reporting survey
document the current status of reporting systems, and point
to several needed improvements in hospitals’ processes for
reporting and acting upon identified occurrences. These results
provide baseline data for future assessment of trends for changes
in these reporting systems. PSQIA protections for hospitals
reporting to PSOs could encourage such reporting by alleviating
hospitals’ concerns about liability exposure, and could stimulate
improvements in hospitals’ internal reporting systems. Support
of these activities through establishment of other mechanisms
that encourage hospitals to strengthen their reporting systems
also would be useful.
Table 9 Factors associated with discussion of adverse events with the hospital board or committees and the medical executive committee

                                      Adverse events discussed at both committees
Percentage that always discuss events
(total no in analysis sample)         73% (n = 1365)

Hospital characteristic               OR (95% CI)            p Value
Bed size
  0–74 beds                           –
  75–199 beds                         0.84 (0.62 to 1.15)    0.284
  200+ beds                           0.83 (0.58 to 1.20)    0.327
CAH status                            0.60 (0.41 to 0.86)    0.006
Rural location                        0.90 (0.68 to 1.19)    0.468
Ownership
  Non-profit                          –
  For-profit                          1.94 (1.37 to 2.75)    0.001
  Government                          0.76 (0.58 to 0.99)    0.046
Teaching hospital                     0.74 (0.56 to 0.97)    0.032
JC-accredited                         1.12 (0.81 to 1.55)    0.493
Computer-only reporting system        0.70 (0.51 to 0.97)    0.030
Patient-safety programme              1.47 (1.07 to 2.02)    0.017
Adjusted R2                           0.07
c Statistic                           0.62
Hosmer–Lemeshow p value               0.631

CAH, critical access hospital; JC, Joint Commission.
Acknowledgements: We thank the risk managers at the hospitals in our sample for
their willing participation in the survey. We also thank the staff at RAND Survey
Research Group (SRG) and the University of Illinois Survey Research Laboratory (SRL)
for administering the survey data-collection efforts, under the leadership of C Pham at
RAND SRG, and J Ronco and R Hazen at SRL. The contributions of P Goldschmidt
during preparation for survey administration are appreciated. This study was
conducted with support from the Agency for Healthcare Research and Quality, US
Department of Health and Human Services.
Competing interests: None.
Ethics approval: Ethics approval was provided by the Human Subjects Protection
Committee of the RAND Corporation.
REFERENCES
1. Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Institute of Medicine, National Academy of Sciences. Washington: National Academy Press, 2000.
2. Barach P, Small S. Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. BMJ 2000;320:759–63.
3. Dixon JF, Wielgosz C, Pires ML. Description and outcomes of a custom Web-based patient occurrence reporting system developed for Baylor University Medical Center and other system entities. Proc (Bayl Univ Med Cent) 2002;15:199–202; discussion 208–11.
4. US Congress. Patient Safety and Quality Improvement Act of 2005, S. 544, enacted by the 109th Congress. Washington: US Government Printing Office, 2005.
5. Gandhi TK, Graydon-Baker E, Huber CN, et al. Closing the loop: follow-up and feedback in a patient safety program. Jt Comm J Qual Patient Saf 2005;31:614–21.
6. Clarke JR. How a system for reporting medical errors can and cannot improve patient safety. Am Surg 2006;72:1088–91; discussion 1126–48.
7. Anderson JG, Ramanujam R, Hensel D, et al. The need for organizational change in patient safety initiatives. Int J Med Inform 2006;75:809–17.
8. Reason J. Understanding adverse events: human factors. Qual Health Care 1995;4:80–9.
9. Spigelman AD, Swan J. Review of the Australian incident monitoring system. ANZ J Surg 2005;75:657–61.
10. Tamuz M, Thomas EJ, Franchois KE. Defining and classifying medical error: lessons for patient safety reporting systems. Qual Saf Health Care 2004;13:8–9.
11. Martin SK, Etchegaray JM, Simmons D, et al. Development and implementation of The University of Texas Close Call Reporting System. In: Advances in Patient Safety, Volume 2. Rockville: Agency for Healthcare Research and Quality, 2005.
12. Schuerer DJ, Nast PA, Harris CB, et al. A new safety event reporting system improves physician reporting in the surgical intensive care unit. J Am Coll Surg 2006;202:881–7.
13. Suresh G, Horbar JD, Plsek P, et al. Voluntary anonymous reporting of medical errors for neonatal intensive care. Pediatrics 2004;113:1609–18.
14. Stow J. Using medical-error reporting to drive patient safety efforts. AORN J 2006;84:406–8, 411–14, 417–20; quiz 421–4.
15. World Alliance for Patient Safety. WHO Draft Guidelines for Adverse Event Reporting and Learning Systems: from information to action. Geneva: World Health Organization, 2005.
16. Runciman WB. Lessons from the Australian Patient Safety Foundation: setting up a national patient safety surveillance system—is this the right model? Qual Saf Health Care 2002;11:246–51.
17. Williams SK, Osborn SS. The development of the National Reporting and Learning System in England and Wales, 2001–2005. Med J Aust 2006;184(10 Suppl):65–8S.
18. Ginsberg C, Nieva V, Heller TH, et al. The Adverse Event Reporting Questionnaire: results of the Risk Manager and Departmental Manager Pilot Study. Rockville: Westat, 2003.
19. Longo DR, Hewett JE, Ge B, et al. The long road to patient safety: a status report on patient safety systems. JAMA 2005;294:2858–65.
20. Beckmann U, West LF, Groombridge GJ, et al. The Australian Incident Monitoring Study in Intensive Care: AIMS-ICU. The development and evaluation of an incident reporting system in intensive care. Anaesth Intensive Care 1996;24:314–19.
21. Weissman JS, Annas CL, Epstein AM, et al. Error reporting and disclosure systems: views from hospital leaders. JAMA 2005;293:1359–66.
22. Tuttle D, Holloway R, Baird T, et al. Electronic reporting to improve patient safety. Qual Saf Health Care 2004;13:281–6.
23. Milch CE, Salem DN, Pauker SG, et al. Voluntary electronic reporting of medical errors and adverse events: an analysis of 92,547 reports from 26 acute care hospitals. J Gen Intern Med 2006;21:165–70.
24. Herdeiro MT, Figueiras A, Polania J, et al. Physicians' attitudes and adverse drug reaction reporting: a case-control study in Portugal. Drug Saf 2005;28:825–33.
25. Madsen MD, Østergaard D, Andersen HB, et al. The attitude of doctors and nurses towards reporting and handling errors and adverse events (in Danish). Ugeskr Laeger 2006;168:4195–200.
26. Waring JJ. A qualitative study of the intra-hospital variations in incident reporting. Int J Qual Health Care 2004;16:347–52.
27. Kaldjian LC, Jones EW, Rosenthal GE, et al. An empirically derived taxonomy of factors affecting physicians' willingness to disclose medical errors. J Gen Intern Med 2006;21:942–8.
28. Schectman JM, Plews-Ogan ML. Physician perception of hospital safety and barriers to incident reporting. Jt Comm J Qual Patient Saf 2006;32:337–43.
29. Sorra J, Nieva V, Famolaro T, et al. Hospital Survey on Patient Safety Culture: 2007 comparative database report. AHRQ Publication No 07-0025. Rockville: Agency for Healthcare Research and Quality, April 2007. http://www.ahrq.gov/qual/hospsurveydb/ (accessed 8 Oct 2008).
30. Sorra J, Famolaro T, Dyer N, et al. Hospital Survey on Patient Safety Culture: 2008 comparative database report. AHRQ Publication No 08-0039. Rockville: Agency for Healthcare Research and Quality, March 2008. http://www.ahrq.gov/qual/hospsurvey08/ (accessed 8 Oct 2008).