
(CMS-10551) Nursing Home National Provider Survey

Attachment I - Development of Two National Provider Surveys

OMB: 0938-1291





DEVELOPMENT OF TWO NATIONAL PROVIDER SURVEYS



Erin Taylor

Marge Pearson

Beverly Weidmer

Kanaka Shetty

Cheryl L. Damberg




Contract Number: HHSM-500-2013-13007I

Task Order: HHSM-500-T0002


Prepared for: Health Services Advisory Group, Inc. (HSAG)


Submitted August 26, 2015, to:

Noni Bodkin, Contracting Officer Representative (COR – Task Order)

7500 Security Boulevard

Baltimore, MD 21244-1850

Noni.Bodkin@cms.hhs.gov




Background/Overview

This report, Development of Two National Provider Surveys, describes the creation and testing of two draft survey instruments to collect data from healthcare providers nationwide about their participation in Centers for Medicare & Medicaid Services (CMS) quality reporting programs. CMS is investing significant resources to drive improvements in healthcare quality through the implementation of quality measures—a commitment that underscores the importance of assessing the impact of such programs from the perspectives of the providers whose performance CMS seeks to improve.


Section 3014(b) of the Patient Protection and Affordable Care Act (ACA) of 2010, as amended by section 10304, requires the Secretary of Health and Human Services (HHS) to conduct, every 3 years, an assessment of the quality and efficiency impact of the use of endorsed measures and to make that assessment available to the public. As part of the 2015 National Impact Assessment of CMS Quality and Efficiency Measures (Impact Report), a multidisciplinary technical expert panel (TEP) proposed five focused research questions about the impact of CMS quality measures on providers:

  1. Is the collection and reporting of performance measure results associated with changes in provider behavior (i.e., what specific changes are providers making in response)?

  2. What factors are associated with changes in performance over time?

  3. Are there unintended consequences associated with implementation of CMS quality measures?

  4. Are there barriers to providers in implementing CMS quality measures?

  5. What characteristics differentiate high- and low-performing providers?


The research team worked with the TEP and a group of federal agency advisors to create two sets of provider surveys to address the research topics. Each survey targets a type of provider participating in CMS measurement programs:

  • Hospitals participating in the Inpatient Quality Reporting (IQR) and Outpatient Quality Reporting (OQR) programs.

  • Nursing homes reporting performance through the Nursing Home Quality Initiative (NHQI) and Nursing Home Compare.


This report describes the development of the two draft survey instruments, which included an environmental scan and literature reviews, formative interviews, and cognitive testing with each type of provider. With approval from the Office of Management and Budget (OMB), the national surveys are to be fielded and results reported as part of the 2018 Impact Report. Addressing the five research questions through these survey instruments will provide information that CMS can use to modify reporting programs and performance measures to better achieve policy goals of providing high-quality, affordable healthcare to CMS beneficiaries.


Methods

Environmental Scan

Survey development began with an environmental scan to identify existing surveys of providers that addressed the same or similar topics as the five impact assessment research questions. A review of published and grey literature also was conducted to inform the construction of survey questions. Targeted searches of Google Scholar and PubMed located prior systematic reviews [1–4] and highly cited publications and technical reports. “Reference mining” (i.e., reviewing reference lists of pertinent papers to locate other articles or key reports) also identified relevant articles. As part of the 2015 National Impact Assessment, a systematic review of studies that examined unintended consequences of performance measurement was conducted; information from this review was used to construct survey items.

Formative Interviews

Nine provider organizations in each of the two settings—hospital and nursing home—participated in formative telephone interviews. Appendixes A and B contain the semi-structured formative interview guides used to gather information for survey development. The formative interviews with providers were designed to:

  • assess whether providers could understand the information the research team sought to collect,

  • explore the language that potential respondents might use to describe the topics, and

  • identify potential response options or areas to probe.


The interviews provided feedback to determine the structure of the survey (e.g., open- or closed-ended questions and potential response options for closed-ended questions) and to identify an approach for selecting appropriate survey respondents within the provider organizations. The questions were qualitative in nature to allow the survey development team to explore various topics; therefore, responses were not suitable for tabulation. Because a small number of providers in each setting participated in the formative interviews, the findings cannot be used for evaluative purposes.


The research team purposively sampled to represent variation in geographic region, size of provider entity, and provider performance on CMS measures (three from the highest quintile, three from the lowest, and three from the middle three quintiles). Additionally, nursing home providers were sampled on the basis of urban versus rural location and whether a nursing home had a relationship with a hospital (i.e., hospital-based). Table 1 summarizes the sample allocation for each of the two settings.


Table 1. Characteristics of Formative Interview Samples


Setting                Quality Score        Size                       Region
                       High   Mid   Low     Large   Medium   Small     Northeast   Midwest   South   West
Hospital (n=9)           3     3     3        2       3        4           2          2        3       2
Nursing Home¹ (n=9)      3     3     3        2       5        2           2          2        3       2

¹ Additional setting-specific characteristics: 6 urban and 3 rural; 2 located in a hospital and 7 outside a hospital.
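
The allocation logic behind this purposive design can be expressed in a few lines. The sketch below is illustrative only: the provider records, the quintile field, and the seed are hypothetical, and this is not the study's actual sampling code; it simply shows how a sample of nine providers (three from the top performance quintile, three from the bottom, and three from the middle three quintiles) could be drawn.

    import random

    def allocate_formative_sample(providers, seed=0):
        """Draw nine providers: three from the highest performance quintile,
        three from the lowest, and three from the middle three quintiles
        (quintile 1 = lowest performance, quintile 5 = highest)."""
        rng = random.Random(seed)
        high = [p for p in providers if p["quintile"] == 5]
        low = [p for p in providers if p["quintile"] == 1]
        mid = [p for p in providers if p["quintile"] in (2, 3, 4)]
        return rng.sample(high, 3) + rng.sample(mid, 3) + rng.sample(low, 3)

    # Hypothetical provider records carrying a performance-quintile field.
    providers = [{"id": i, "quintile": (i % 5) + 1} for i in range(100)]
    sample = allocate_formative_sample(providers)  # nine providers total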


The respondents included medical directors, chief medical officers, vice presidents for quality, and nursing home administrators—senior leaders who were responsible for the overall quality and safety of clinical care within the facility. In one hospital and one nursing home, multiple respondents from the same organization participated in the formative interviews.


The interview guides were tailored to each of the two settings to explore provider experiences with these aspects of the CMS performance measurement programs:

  • Changes to improve care delivery: Have the CMS measurement programs led to changes in provider behavior, including both organizational changes and individual clinician changes, to improve the delivery of care?

  • Improvement in measure performance: Have changes in provider behavior been reflected in provider performance on the CMS measures?

  • Drivers of improvement: Which aspects of quality measure programs most drive providers’ efforts to improve the delivery of care?

  • Unintended consequences: Have providers encountered unintended consequences—either positive spillover effects or negative effects—related to participation in performance measurement programs?

  • Barriers to reporting and improvement: Have providers experienced barriers to assessing and reporting on measures or to improving performance on measures?


Respondents also were asked to provide feedback on lessons learned related to the use of performance measures and on any other concerns not covered in the semi-structured interview guide.

Cognitive Testing of Draft Surveys

After formative interviews, closed-ended (i.e., standardized survey) and qualitative interview guides were drafted for each of the two provider types. Using these draft instruments, the research team conducted cognitive interviews to assess respondents’ understanding of the draft survey items and key concepts and to identify problematic terms, items, or response options. Six hospitals and nine nursing homes participated in this first round of testing in July 2014.


Concurrently, the draft instruments were reviewed by the project contractors, RAND and HSAG, and by the Federal Advisory Steering Committee, which consists of representatives from CMS and other HHS agencies. Based on the reviewers' feedback and the findings from this first round of cognitive interviews, the draft surveys were revised.


A second round of cognitive interviews with six more hospitals in August and September 2014 tested a revised version of the hospital survey, yielding results used to further refine the hospital survey. Time constraints prevented a second round of cognitive testing for the nursing home instruments. However, hospitals and nursing homes provided similar comments in response to the draft surveys; therefore, information gathered in the second round of cognitive testing in the hospital setting informed additional revisions to the nursing home surveys.


Recruiting for the cognitive interviews produced a mix of large, medium, and small provider organizations from different geographic regions, including high, medium, and low performers on CMS quality measures, as described in Table 2. The research team sought representatives from each organization who were responsible for or familiar with quality improvement activities within the organization and, in particular, CMS quality measures. Respondents for cognitive testing included medical directors, directors of nursing, and administrators responsible for quality risk, quality and patient safety, performance improvement, and quality outcomes.


Table 2. Characteristics of Cognitive Interview Samples


Setting                            Quality Score        Size                       Region
                                   High   Mid   Low     Large   Medium   Small     Northeast   Midwest   South   West
Hospital, Round 1 testing (n=6)      0     6     0        4       1        1           2          2        0       2
Hospital, Round 2 testing (n=6)      1     3     2        2       1        3           0          1        2       3
Nursing Home (n=9)                   3     3     3        2       5        2           2          3        1       3


Representatives from hospitals and nursing homes were invited via email to participate in the cognitive interviews. An appointment was scheduled for each respondent who was willing and available to complete the survey. Each respondent received a printed survey via common carrier and was asked to complete and return the hard copy prior to the interview appointment.


Each telephone interview was scheduled to last approximately 90 minutes. Using a scripted protocol, an experienced cognitive interviewer reviewed each question with the respondent, then probed to assess the respondent’s understanding of the goal of the question and whether the response options adequately and accurately captured the provider organization’s experience. The interviewer noted survey items or terms that were unclear or not relevant to the organization and sought to determine why the respondent selected particular response options. The interviewer also noted respondent suggestions for clarifying a question. Respondents who completed the cognitive interviews received a check for $300 to compensate them for their time and cooperation.


The interview guide used for both rounds of cognitive testing appears in Appendix C. Because the cognitive interviews were conducted with a small number of providers in each setting, it would not be appropriate to tabulate the responses or to use the findings for evaluative purposes.


The RAND Human Subjects Protection Committee reviewed and approved the interview protocols and instruments for both formative and cognitive testing.

Results

Environmental Scan Findings

Changes to Improve Care Delivery

Organizational Change. The Agency for Healthcare Research and Quality (AHRQ) highlights organizational factors among the central forces driving successful implementation of performance measurement.[5] For example, within the Veterans Health Administration (VHA), organizational change is the strategy cited most frequently by facilities that improved quality scores after implementing a quality measurement initiative. More than half of such facilities had implemented a new clinical procedure (e.g., changing a usual practice for staff).[6] Similarly, a synthesis of 81 studies aimed at improving immunization and cancer screening identified organizational change (e.g., expanded use of non-physician staff and establishment of separate clinics devoted to prevention) as the most effective program-level intervention.[7]


Staffing Changes. In a case study of a quality improvement (QI) program in a Michigan health system, having support staff dedicated to quality was determined to be integral to the success of a quality measurement program. A quality medical director, a QI specialist, and an experienced data analyst were viewed as particularly critical to quality improvement. The authors of this study also noted the importance of a strong partnership between medical directors and nursing directors for making joint decisions on clinical protocols and relevant operations.[8] Organizations also have made efforts to reduce staff turnover because high rates of staff turnover were linked to lower performance on quality measures in several studies of nursing homes,[9, 10] and interventions aimed at reducing staff turnover were linked to improvement on quality measures.[11, 12]


Provider Education. Three systematic reviews found that educating providers regarding a specific change in clinical practice related to QI may improve providers' performance on the quality measures; the effect of quality measurement itself was not explicitly analyzed.[13-15] However, in three areas of QI focus (asthma, diabetes, and hypertension care), the effect of provider education did not reach statistical significance as a standalone intervention but may be greater in combination with other QI interventions.[13-15]


Clinical Decision Support Systems. Electronic health record (EHR) systems remain difficult to implement in many healthcare organizations. However, once an EHR system is implemented, an embedded clinical decision support system (CDSS) can improve outcomes of a performance measurement initiative. A CDSS provides clinicians with recommendations or reminders to promote use of the clinical processes of care that performance measures assess. As summarized in two systematic reviews, use of a CDSS was associated with significantly improved clinical practice compared with quality improvement interventions that lacked a CDSS.[3, 16]


Linking Quality Indicators to Financial Incentives. Payers (i.e., commercial insurers, Medicare, and Medicaid) seek to influence provider and organizational practices by using financial incentives tied to performance. Provider organizations also use internal financial incentives to influence frontline provider behavior. In a systematic review of financial incentives to individual physicians for attaining performance goals, five of six studies found positive or mixed results.[17]

Drivers of Improvement

Linking Quality Indicator Use to Financial Incentives. Linking quality indicator use to performance incentives, such as pay-for-performance (P4P), is one means to increase the likelihood of success of a quality measurement system. In a review of studies examining the impact of financial incentives, physician payment (e.g., bonuses) for providing a specified level of quality of care was found to be effective in general, particularly on improving adherence to processes of care.[18] However, a meta-analysis of 49 studies examining the link between P4P and clinical quality measures found that “the results of the studies were mixed, and studies with stronger methodological designs were less likely to identify significant improvements associated with the P4P programs.”[1]


Public Reporting. Release of quality measure scores to consumers is an effort by payers to drive performance improvement by increasing provider accountability for the quality of care. Two systematic reviews have found little evidence that public reports on provider quality lead patients to choose higher-quality providers.[4, 19, 20] Despite this limited effect on consumer choice, public reporting was associated with increased quality improvement activity in hospitals and may be associated with improved performance on process measures.[19, 20] In the nursing home setting, one study found that consumers were likely to choose nursing homes according to service quality but not clinical quality.[21] In a study of the introduction of nursing home report cards by CMS under the Nursing Home Compare initiative, report cards were observed to have minimal or modest effects on nursing home quality.[22, 23]


Feedback Reports. Feedback reports on performance are another tool that payers and provider organizations use to improve quality. Two systematic reviews of interventions designed to modify provider behavior found inconsistent effects of feedback reporting alone on improving quality.[24, 25]


Regulatory Requirements. Healthcare regulation is a means by which the government or another regulatory entity establishes standards by which healthcare providers must abide. For example, CMS sets regulatory requirements for providers participating in the Medicare program. Accrediting bodies such as the Joint Commission also set standards that organizations seeking accreditation must follow.[26] A study examining interviews with 87 hospital leaders in 12 metropolitan areas found that the majority of hospital patient-safety initiatives were designed to meet the accreditation requirements of the Joint Commission.[26]


Technical Assistance from Quality Improvement Organizations. Quality Improvement Organizations (QIOs) contract with CMS to provide technical assistance (TA) services to providers who care for Medicare beneficiaries. Several studies of the impact of QIOs on QI show mixed findings. In a CMS-funded study of QIOs, physician groups, home health agencies, and nursing homes were assigned to receive typical QIO TA, while others volunteered to participate in a more intensive TA program. The study found that organizations that opted for the more intensive TA program improved more on the quality measures under examination.[27] Another study that examined the impact of differential rates of QIO TA in hospital settings did not find a positive effect on quality outcomes associated with more intensive TA.[28] Furthermore, in a series of interviews conducted among hospital quality management directors, most respondents rated QIO interventions as helpful, but only a quarter of respondents indicated that QIO interventions produced better quality of care than would have been observed absent the interventions.[29]


Barriers to Improvement. Technical and resource requirements for implementation of quality measures have been identified as potential barriers to participation. AHRQ lists data-gathering inefficiencies and other technological barriers among the most significant barriers to quality data collection and reporting,[5] and this observation is borne out in empirical research.[30-34] In addition, two stakeholder panels noted that while EHRs are useful for reporting, they are expensive and do not typically provide the standardized data needed for constructing and reporting a wide range of performance measures.[1, 34] Ambulatory practices that respond effectively to new quality data-gathering and reporting requirements typically do so by upgrading their information technology systems.[32, 35-37] A recent survey of nursing homes in the Midwest revealed extensive use of EHR systems for quality reporting but limited use for other clinical purposes, which could limit positive spillover effects.[38]

Financial costs have been shown across studies to be a barrier to QI measure implementation.[5, 30, 31, 33, 35, 39, 40] In addition, quality measure data collection and reporting can be highly labor-intensive and can stress the capacity of adopters because of the time required to report on indicators, then interpret and act upon the indicator results.[30-32, 39, 41] In two studies of primary care providers, the time required to implement indicators was the most common barrier to successful implementation reported by providers.[32, 41]


Physician practices lacking effective EHR systems, adequate financing, or dedicated staff faced substantial barriers to collecting and reporting data, as well as implementing QI initiatives.[5, 42] Furthermore, provider perceptions play a significant role in the successful implementation of performance indicators. Across several studies, potential adopters' knowledge regarding the reliability, strength of supporting evidence, and usefulness of the indicators being implemented has been identified as a significant factor influencing the adoption of evidence-based clinical practices and high or improved performance on clinical process measures.[30, 32, 35, 39, 41, 43, 44] Administrative support for indicator initiatives also was a critical factor in facilitating successful quality measure implementation.[31, 39, 41]

Formative Interview Findings

Formative Findings from the Hospital Interviews


Notable Themes. The formative interviews suggested that the nine participating hospitals have actively moved into the QI arena and that they associate these QI activities with CMS measurement programs to varying degrees. Interview subjects reported undertaking numerous changes in care delivery systems to achieve and maintain improved performance on quality measures. The respondents viewed the improvement efforts as having the intended effect on performance scores and reported that their facilities are focused on sustaining these improvement efforts over the long term. Respondents offered examples of resource investments in expanding QI infrastructures, which indicated that these hospitals are incorporating quality measurement and improvement into routine operations.

Changes to Improve Care Delivery. When asked whether, in their experience, the CMS measurement programs for hospital quality led to changes aimed at improving care delivery at their hospital, respondents answered affirmatively. However, one respondent said the measurement programs have led to “a few” changes but have not been a major driver of change. Another respondent stated that while the programs focus attention on improvement, the improvement is narrowly focused on the areas measured. This narrow focus, in turn, may distract attention from the larger task of “building a rational high-performing performance management system,” in the words of a respondent.


Respondents from eight of the nine hospitals said that their hospital participated in other quality measure reporting programs besides IQR and OQR. One respondent qualified the discussion on the impact of CMS measurement programs on care delivery changes by saying that his comments would pertain to the aggregate of external measures and measurers, because “we have a hard time separating them out.”


The respondents mentioned a wide range of changes that their hospitals had implemented to improve performance on quality measures. A number of these changes focused on use of a CDSS, provider education, standardization of care processes, and patient self-management support materials. Hospitals also reported using physician scorecards and the provision of real-time data to physicians to help manage patient care. One hospital respondent mentioned investing in new equipment as well.


Respondents also discussed efforts to expand their QI infrastructure. Three hospitals reported they had expanded the number of personnel focused on QI. The board of directors of one hospital recently formed a quality committee. This hospital had also created interprofessional bedside improvement teams and engaged clinical champions of improvement. Other respondents mentioned specific clinical review protocols. One hospital focused on a team-based approach in improving care processes. Respondents also indicated they were improving their health information technology (HIT) capabilities to assess and improve quality. Additionally, respondents commented that their hospitals tied financial incentives and evaluations to quality measure scores.


Improvement in Measure Performance. When asked whether these delivery system changes have been reflected in their hospitals’ performance on CMS measures, the nine hospital respondents said that they had seen their scores improve. Examples of measures for which improvement efforts led to better scores included prevention of falls, hospital-acquired infections, and surgical site infections. Two respondents qualified their answers by attributing the improvement in measure performance primarily to better documentation rather than actual changes in the care provided.


Drivers of Improvement. Respondents were asked to discuss the importance of possible drivers of improvement: public reporting of quality scores, potential for financial incentives, threat of penalties, receipt of feedback reports with quality data, and receipt of technical assistance related to quality improvement. Overall, hospital respondents stated that public reporting and financial incentives were the most important drivers of quality improvement, whereas they considered technical assistance and feedback reports least important.


Unintended Consequences. When asked whether they or their organization had seen any unintended consequences of measurement implementation, seven respondents mentioned one or more unintended consequences that they attributed to measurement-related efforts; two said they were not aware of unintended consequences. One respondent stated that too much time and too many resources were devoted to data collection at the expense of quality improvement. Another respondent commented that healthcare organizations might focus on perfecting their scores for all patients rather than on care appropriate for an individual’s clinical circumstances (i.e., a patient may have a contraindication to receiving the recommended clinical process). The same respondent was concerned that the hospital might divert resources from the improvement activities needed to build an efficient, highly reliable system across the board if resources were devoted to achieving 100 percent on selected measures that may not be appropriate and may not benefit patients. Another respondent expressed concern that perfecting scores might divert attention from clinical care, as when a patient is rushed to the catheter lab without a stop in the emergency room for a brief evaluation to identify comorbid conditions.


Respondents mentioned other unintended consequences, including focusing only on documentation changes rather than practice changes, and “gaming the system to meet the requirements rather than doing exactly what the right thing is.” One respondent was concerned that the data CMS reported on Hospital Compare could be out of date and thereby result in media attention that confuses patients and injures the hospital’s reputation.


The hospital respondents were asked whether they had experienced any of a number of specific possible unintended consequences. Respondents indicated that improvements did occasionally spill over to other areas. All but two respondents stated that hospitals might pay less attention to areas of care where performance is not measured. Responses about whether measurement programs might create a potential for overtreatment of patients were roughly split. When asked whether staff had modified coding or reporting of the data to improve scores on quality measures, most suggested that hospitals have focused on improving documentation and/or coding but have not tried to misrepresent or deceive. All but one of the respondents said they were not aware of hospitals avoiding sicker or more challenging patients to achieve higher scores.


Barriers to Reporting. Four of the nine hospital respondents said that their hospital was not encountering barriers to reporting measures data. The remaining five respondents said their hospitals had experienced barriers to reporting, noting limited staff resources and information technology capabilities and the need to constantly re-educate staff on the performance measures; further, they said that hospital physicians are not held accountable by the measurement programs in the same way as their hospitals. Other respondents noted reporting challenges resulting from the volume of patients for whom data were required, computer crashes, and errors in the CMS Abstraction and Reporting Tool. Still others focused on challenges that arise when new measures are introduced. One pointed to system bugs that remain despite dry runs and changes in specifications along the way, such as adding a diagnosis.


Barriers to Improvement. When asked whether they had experienced any major barriers to improving performance on CMS measures, all but one respondent noted one or more barriers to improvement. Barriers included competition for resources within the organization; difficulty identifying change and sustainability strategies; too few frontline staff, with insufficient time to work on improvement efforts; insufficient QI staff; and HIT issues, such as the lack of a fully functioning EHR system or the inability to make needed changes in a timely manner. Physician pushback and lack of staff understanding of the relationship between measures, best practices, and patient care were also mentioned as challenges. Respondents also noted other challenges more directly related to the measurement programs, such as insufficient advance education to report and use new measures and feedback reports with out-of-date data because of the time lag between submission of data to CMS and receipt of CMS reports.


Lessons Learned. When asked what they saw as the most important lessons learned from implementing the CMS hospital IQR and OQR measures, respondents noted that quality of care is important and needs continual focus. In addition to the importance of maintaining constant focus on the measures, one hospital representative emphasized that they had learned that quality performance is the responsibility of everyone who works in the hospital, not just the physicians or nurses. On a different note, another said that the most important lesson learned was that trying to reach 100 percent on some measures is not always the right thing to do for the patient.


Concerns with Quality Measurement Programs and Suggested Changes. In sharing ideas for measurement improvement, respondents suggested that measurement programs need to increase “nimbleness” and timeliness in measure construction, vetting, and dissemination. Respondents also recommended revising measures that are not clinically important to improving patient outcomes. Additional areas of concern included the alignment of measures and CMS collaboration with hospitals and physicians on measurement programs. One respondent said that better alignment on measurement among such entities as CMS, the Joint Commission, and the National Healthcare Safety Network would help hospitals. Two other respondents emphasized the need for CMS to solicit input earlier from hospitals when making changes to measurement programs—before “it’s a done deal”—and for CMS then to utilize that input in measurement decisions.

Formative Findings from the Nursing Home Interviews

Notable Themes. The interview responses suggest that the nine nursing homes in the sample are responding to quality measure programs by implementing improvements to care delivery. However, these improvements appear to focus primarily on individual patient needs and targeted responses to specific adverse events that had resulted in poor quality scores rather than widespread, overarching changes in care processes. Nursing homes are addressing problems as they occur, using such tools as root cause analysis, but this reactive approach does not necessarily result in overall changes to the delivery of care in the nursing home.


In addition, respondents focused on the issue of regulatory compliance, including the nursing home survey process, as a key driver of their improvement processes. Respondents viewed quality measures as being closely related to these surveys, not only because the results of inspections (especially identification of compliance issues and other problems within the nursing home) appear on Nursing Home Compare, but also because the surveyors use the quality measures as a source of information when inspecting nursing homes. Therefore, the two levers (surveys and quality measures) are inextricably linked in the minds of the respondents.


Finally, respondents identified important distinctions between short-stay and long-stay residents regarding quality measures and the improvements associated with those measures. Respondents raised concerns that quality measures for short-stay residents are more difficult for the nursing home to influence, as a patient may experience an event that triggers a quality measure (for example, a fall), but any changes the nursing home implements will not affect that resident.


Changes to Improve Care Delivery. Respondents indicated that their nursing homes had implemented changes in response to the quality measurement programs implemented by CMS. However, two respondents said that the QI processes that are ongoing in their nursing homes would have been in place even in the absence of CMS quality measurement programs. One of these respondents noted that his nursing home, which is part of a large chain of nursing homes, has an advanced EHR system that provides real-time information on quality performance. This system, combined with an interdisciplinary team that focuses on identifying the trigger for a measure, enables this nursing home to address quality-of-care issues much faster than the CMS programs would otherwise allow. The second respondent noted that her nursing home already had improvement processes in place before the CMS programs began but added that the CMS feedback reports aided improvement efforts by revisiting the patients who “triggered” the numerator for a given measure to determine whether the intervention implemented in response was effective.


Respondents provided a range of examples of process and policy changes that their nursing homes had implemented in response to the quality measurement programs. These efforts included staffing changes to increase continuity of caregivers for residents, revised care plans, staff education, and resident education. Nursing homes also created improvement teams, which often targeted specific measures in their efforts to improve quality. Specific approaches mentioned included inter-professional rounds, monthly quality analyses, and root cause analysis in response to specific events such as falls. Nursing home respondents noted that they use the quality measure reports to assess whether interventions arising from the root cause analysis do in fact improve care for those persons who trigger the measure.


Nursing homes described a number of measures that had specifically been targeted for improvement. Nearly all of the nine nursing homes in the sample mentioned antipsychotic use as a key focus because of specific CMS initiatives designed to reduce the number of residents being prescribed antipsychotics. “High-risk litigation areas” were also highlighted as targets for improvement efforts. The two specific measures mentioned in this category were falls and wound care, including pressure ulcers. Other measures mentioned included urinary tract infections and catheter use, pain, and the influenza vaccine.


Improvement in Measure Performance. Nursing home respondents generally agreed that their scores on quality measures had improved over time because of changes they implemented in response to the quality measurement program. One nursing home respondent said that the quality measurement program helped the facility focus on specific areas that needed improvement, wound care and antipsychotics being examples of areas in which there were specific changes in response to the measure programs. Another respondent noted that nursing homes had to learn how to respond to the quality scores; one change that improved scores was working collaboratively with physicians to reduce the use of antipsychotic medications. One nursing home respondent highlighted that, as might be expected with the quality measure program, care delivery and survey outcomes related to state inspections had improved concurrently with the quality scores.


Drivers of Improvement. Respondents considered the public reporting, financial incentives, and regulatory compliance aspects of the quality measure programs to be the most important of the five drivers. Public reporting was viewed as an important means by which nursing homes could entice consumers—potential residents and family members—to visit their facilities and to use the Star Rating system to make their nursing home decisions. Financial incentives, often along with penalties, were commonly named as important drivers of improvement. One respondent mentioned that hospitals were examining nursing home performance measures and using these measures to steer patients to nursing homes with better (i.e., lower) hospital readmission rates so as to avoid the penalty. The least important drivers were technical assistance and feedback reports.


As noted, respondents reported that the need to meet state and federal requirements and to avoid citations created much of the incentive to improve care delivery and to use quality measure data to prepare for survey visits. One respondent specifically said that the quality measures helped his nursing home prepare for the survey by enabling facility leadership to anticipate the survey’s focus.


Interviewees also were asked whether other drivers of improvement were important to their particular nursing home. When respondents did describe other drivers, these included adverse risk reduction, resident and family satisfaction (as measured by surveys), competition with neighboring facilities, and preparation for the upcoming CMS Quality Assurance and Performance Improvement (QAPI) initiatives. For the QAPI initiatives, respondents were not able to provide details of upcoming changes to the delivery of care, because plans were being made at the corporate (or higher) levels and had not yet been released to individual nursing homes.


Unintended Consequences. When asked whether the quality measures were associated with any unintended consequences, respondents mentioned few unintended effects without prompting from the interviewer. Nursing home respondents were asked whether they were aware of overtreatment associated with measurement on the quality indicators. When respondents had heard of such occurrences, they attributed the treatment decision to the physician, noting that the nursing home has little to no control over physicians’ actions and treatment decisions. In response to a question about whether a focus on selected quality measures resulted in improvements in other areas—also referred to as “spillover effects”—respondents commented that these effects were highly likely but gave few examples. Feedback was mixed regarding whether quality measures resulted in neglect of areas that the quality program did not measure.


When asked about coding modifications, respondents did not describe their actions as undesired behavior; rather, changes made to coding were attributed to the learning process needed to interpret the measure, corrections of incorrect coding, or the fact that a given company was simply good at reporting. One respondent did suggest that it was possible to “game the system” through coding by changing the timing of the data entry and the validation of the data. Such gaming might be more likely to occur with relatively subjective measures, which might make it easier to alter the documentation of that measure; examples of more subjective measures were activities of daily living (ADLs) and pain.


Finally, when asked whether nursing homes might avoid sicker patients to score better on quality measures, respondents said they had heard that other nursing homes do avoid sicker patients, but often clarified that their nursing home does not do this. Two respondents said that they did not believe such actions resulted from quality measures, but from other factors, such as finances, specialization, and staffing levels and expertise.


Barriers to Reporting. Nursing home respondents reported no barriers to reporting the quality measures, indicating that their HIT systems were sufficient to handle the reporting requirements. One respondent, however, highlighted problems with CMS training on the measures, saying that the measures can be subject to interpretation.


Barriers to Improvement. Nursing home respondents mentioned few challenges to improvement in CMS performance measurement programs. When mentioned, the challenges included financial constraints, staff turnover, lack of physician buy-in on measures, the large number of measures, need for more training or education, and challenges specific to individual measures. One respondent noted that finding the correct solution to a problem can be difficult. In terms of measure-specific challenges, respondents highlighted falls in particular. They expressed concern not only that the falls measure runs directly counter to the measure limiting the use of restraints, but also that what is viewed as a fall and what is reported as a fall may differ among nursing home facilities.


Lessons Learned. When asked about the lessons learned as a result of the quality measure programs, nursing home respondents listed several. The respondents viewed the quality measure program positively, stating that transparency is very important. Respondents said that by providing focused information and allowing comparisons with other organizations, the programs aid nursing homes in identifying areas for improvement. One respondent commented that the lesson learned was that “achieving quality is a cyclical process,” in that quality improvement must occur over time and in response to certain events. The same respondent noted that a multidisciplinary approach is needed in order to achieve quality.


Concerns with Quality Measurement Programs and Suggested Changes. The main concern with quality measurement programs was the lag time in the reporting of quality measures, particularly on Nursing Home Compare. The lag means that an event which had occurred several months earlier, and in response to which the nursing home had implemented changes or improvements, would still appear on the Nursing Home Compare website, possibly deterring potential residents and family members from choosing the nursing home. The reporting system itself was viewed as rough and lacking accuracy; respondents commented that the complex nature of a nursing home is very difficult to summarize in a five-star rating. Another concern voiced by one respondent was that the quality measures draw focus away from the patient and onto the measures themselves.


Respondents wanted more real-time data collection and reporting. They generally expressed the belief that delays in reporting made the information less helpful to the nursing home and to the public. A second change sought by respondents involves the distinction between short-stay and long-stay residents. One respondent said that the differences between these two populations can be substantial and that the denominator differences across facilities (i.e., the total number of short-stay and/or long-stay patients) could have substantial effects on the quality score for a nursing home. Finally, respondents requested that the reports on Nursing Home Compare provide more context for decision-makers viewing the information. Respondents specifically requested that more information be presented about the facility, particularly the patient population.

Cognitive Findings from Hospital and Nursing Home Interviews

Overarching Issues


Ability to Complete the Survey. Findings from the cognitive interviews demonstrate that the respondents had the knowledge needed to complete the survey. However, designated respondents occasionally stated that they needed to consult other members of the provider organization, look up information, or review documentation previously submitted to CMS and the organization’s scores on CMS quality measures in previous years.


The topics the survey covered resonated with respondents and included important issues. Survey items were meaningful and relevant, as evidenced by the examples and additional information respondents provided when asked to elaborate on responses to individual survey items. Closed-ended response options were understandable and captured the range of experiences and responses provided by the respondents.


Length of the Survey. The invitation for participation in the cognitive interviews included an estimate of 30 to 40 minutes to complete the draft survey prior to the telephone interview. However, the first version of the survey that was tested, containing 78 questions, was reported to take at least twice that long. Although participating hospitals were cooperative, they noted the burden involved in completing the survey. Based on this feedback, the research team shortened the survey by deleting items that were regarded as redundant, problematic, or not relevant to a range of respondents. The final version of the hospital survey includes 48 questions; the nursing home survey includes 47 questions.


Need for Definitions of Key Terms. Findings from the cognitive interviews showed that not all respondents were completely familiar with all of the CMS quality measures. In fact, respondents recommended including definitions for the measures described in the survey. In response, the survey was revised to include a “Definition of Key Terms in the Survey,” which included definitions of the CMS quality measures and hyperlinks to the areas of the CMS.gov website that described the measures in detail. Definitions were included for other terms used in the survey (e.g., “learning organization” and “culture of safety”) to achieve a common understanding among respondents.


Need for More Specificity. Several survey items included in the first version of the survey were described by respondents as overly vague. In response, each item in the survey was reviewed and edited. For example, the first version included an item that asked whether the organization had implemented a “health information exchange” without specifying whether this question referred to an electronic information exchange or simply an exchange of information with providers in the community. In response, the survey item was revised to include the word “electronic.”


Redundancy. Feedback obtained from the first round of cognitive interviews suggested redundancy of items across sections of the survey (e.g., questions about the use of an EHR were included in different parts of the survey, as were questions about performance measure feedback and incentives provided to clinicians and other staff). In response, the research team deleted items to eliminate redundant content.

Detailed Findings by Survey Topic


Organization’s Experience with CMS Measures. Cognitive interview respondents did not have difficulty completing this section of the survey. The options provided captured the range of responses and were sufficiently specific. However, respondents requested examples or clarification regarding the item that asked about “difficulty in improving on certain types of measures.” In response, the question was edited to provide examples (e.g., for clinical process measures, STK-4 Thrombolytic was provided as an example). In response to feedback from reviewers, two survey items were added that ask providers to indicate whether CMS measures are clinically important and whether organizations should be held accountable for performance on these measures.


One recurring theme among participants was that improvement on CMS measures requires engagement of frontline physicians and medical staff, who are continually being asked to do more in less time and with fewer resources. Hospital representatives reported that hospitals tend to do well on outcome measures but not so well on process measures. Respondents explained that the documentation that they rely on to provide information on process measures is either not precise enough or too prescriptive, and so it does not make sense to the staff completing the documentation. Nursing home participants expressed frustration with the CMS measures and stated that the documentation requirements can make it seem as if they are not providing the best care possible when, in their opinion, they are.


A recurring theme among participants from high-performing hospitals was that it is difficult for facilities to improve on a measure if they are already scoring close to 100 percent. Nursing home participants reported that sustained improvement requires continual re-education, monitoring, and review; they reported that when they focus on one particular measure, they decline on others. Participants also commented that as other organizations improve, “your percentage goes down, even though your performance is stable.” Finally, participants stated that CMS should not have the same requirements for small community hospitals as for large hospitals and organizations that are part of an integrated health system. They commented that CMS should take into account available resources and patient population when looking at performance on the measures, particularly if organizations are going to be held responsible for performance on these measures and if payment is to be based on performance.


Participants reported experiencing difficulty improving performance on patient satisfaction and patient outcomes (particularly mortality measures) and attributed this to the types of patients their organizations serve (e.g., patients with low literacy who are unlikely to complete patient satisfaction surveys, the elderly, and patients in an extreme state of illness with little chance of recovery and not enough time to be placed in hospice care). Hospitals and nursing homes that are not fully integrated as a system of care also reported difficulty in improving resource measures that assess care delivered across the continuum of care.


Innovations in the Delivery of Care. Cognitive interview respondents did not have difficulty completing this section of the survey. Individual survey items were meaningful to participants, who were able to provide appropriate examples to support their responses when probed. One item that asked about the purchase of equipment to support care related to specific CMS measures was described as difficult to answer because organizations do not typically purchase equipment specifically to improve on CMS measures. For this reason, the item was dropped. Other edits were made to improve specificity, either by revising wording or by providing examples. For example, an item that asked about “condition-specific assessment protocols” was revised to ask about “risk prediction tools to identify and manage high-risk patients.” In addition, three new items that gather information on the culture of the organization were added based on feedback received from reviewers.


Participants reported that the changes their organizations have made in response to the CMS measures have led to improvements in other areas not measured by CMS. For example, changes made to decrease hospital-acquired infections have led to a decrease in hospital length of stay. Improvements in certain process measures were also reported to have improved performance on patient flow, discharge times, and “door to floor” time in the Emergency Department. Nursing home participants also reported some spillover effects; for example, efforts to reduce falls have also improved other patient outcomes (e.g., mortality).


Challenges to Reporting the CMS Measures. Respondents did not have difficulty completing this section of the survey; however, some of the response options in this section were revised to provide more specificity (e.g., “difficulty capturing data” was revised to read “difficulty capturing data for measure construction”). Organizations that reported difficulty in reporting CMS measures attributed the challenges primarily to difficulty interpreting or translating measure specifications and capturing the data needed for measure construction. This was particularly problematic with measures for which clinicians are asked to document what they did not do (clinicians are trained to document what they did, so this is counterintuitive). Participants also reported difficulty in capturing data they need (for example, denominator data for infections), in extracting or abstracting the information they need in a format that is easy to upload, and in extracting data they need to report from their EHR (some EHRs do not have a search function to facilitate populating measures). Finally, smaller organizations reported having insufficient staff to task with gathering the documentation required to produce the CMS measures.


Factors Associated with Changes in Quality Performance. This section of the survey included two rank-ordering questions. The first asked respondents to rank the factors that were most important in their organization’s decision to invest in quality improvement efforts for CMS measures; the second asked participants to rank the top three factors that have helped their organization improve performance on all or some of the CMS measures. Participants had no difficulty with the rank-ordering task.


Based on feedback from reviewers, the research team added an item on investments in patient safety to the question about factors that have helped the organization improve performance. A third question in this section asked about factors that have contributed to difficulties in improving performance. Several of the items in this question were revised to provide more specificity. For example, “insufficient resources” was revised to “insufficient resources (e.g., staffing, tools, training)” and “inadequate health IT capabilities” was revised to read “inadequate health information technology resources or capabilities (e.g., clinical decision support or longitudinal tracking of outcomes).” Also based on feedback from reviewers, several response items were added to this question (i.e., difficulty with coding or documentation, a challenging or complex patient population, and a culture that does not support improvement).


The potential to receive financial incentives and public reporting were the most frequently cited reasons for investing in quality improvement efforts, while threat of financial penalties and participation in alternative payment models (bundled payment arrangements or accountable care organizations [ACOs]) were ranked lower. The rank ordering of factors that are the most important in improving performance varied across respondents, but hospital leadership, a culture of quality, and strong data systems were cited among the top three factors.


Undesired Effects of CMS Quality Measurement Programs. During the cognitive testing, the research team wanted to determine whether respondents would be willing to report on undesired effects of the CMS quality measurement programs as part of a closed-ended survey. This section of the survey specifically asked whether, as a result of being held accountable for performance on CMS measures, the organization had seen:

  • Allocation of fewer resources for quality improvement in areas of clinical care that are not the focus of CMS performance measures;

  • A focus on narrow improvement for specific measures rather than across-the-board improvement in care;

  • Overtreatment of patients to ensure that a measure objective is met;

  • Increased focus on documentation or coding of data to attain a higher score;

  • Changes in coding of data or documentation to ensure that a measure objective is met; or

  • Avoidance of sicker or more challenging patients when providing care.


Participants did not have difficulty answering the questions included in this section of the survey; they were candid in reporting several of the specified unintended and undesired effects. For example, participants reported that the focus on CMS measures and their use in public reporting and for payment/value-based purchasing leads organizations to adopt a narrower focus for quality improvement efforts. In addition, participants reported that because organizations have to focus on the CMS measures, they cannot focus on other, unmeasured clinical areas or dedicate resources to other areas that also need improvement. Participants in both hospital and nursing home interviews reported that in some cases they had experienced overtreatment of patients. For example, one participant reported overzealous administration of antibiotics for pneumonia; another cited the use of compression boots after surgery to meet measures for prophylaxis of pulmonary thromboembolism.


Participants also reported an increased focus on documentation, particularly among clinicians, at the expense of other considerations, including patient care. In addition, respondents reported increased vigilance about the coding process because staff are increasingly aware of the importance of claims data for CMS measures.


Because these items may be perceived by hospitals and nursing homes as sensitive, and to motivate future participants to answer as candidly as possible, the following statement was added to the preamble to this section of the survey: “All of the responses you provide are confidential and are intended to help CMS in modifying reporting programs so as to avoid causing undesired effects.” In addition, the wording of response options was revised; specifically, “changes in charting or coding of record data to promote a better score on CMS performance measures” was broken into two items: “Increased focus on documentation or coding of data to attain a higher score” and “Changing coding of data or documentation to ensure that a measure is met.”


Perspectives of Hospital Leadership and Other Stakeholders. Participants did not encounter problems completing this section of the survey. Based on feedback received from reviewers, lower-priority items (e.g., “Is there a representative for quality initiatives on the board?”) were dropped in favor of adding items that ask hospitals and nursing homes to rate their leadership’s efforts to promote a culture of quality and to rate both leadership and physician support for quality initiatives related to CMS measures.


Participants reported that their organizations’ boards and senior leadership regularly review and discuss organizational performance on the CMS measures and that hospital leadership is equally engaged in financial performance and quality performance issues. In addition, participants described their organizations’ leadership (i.e., board, corporate suite, chief operating officer, medical director, and director of nursing) as supportive of organizational efforts to improve performance on CMS measures and to promote a culture of quality. In contrast, participants reported less support among physicians for improvement on CMS measures. A recurring theme was that it is difficult to engage physicians in quality improvement efforts related to CMS measures and that organizations often lack ways of encouraging or incentivizing physicians to engage in such efforts, either because physicians are not directly employed by the organizations or because there is no parallel physician accountability for performance on current measures.


Use of Health Information Technology. Participants reported on their access to and use of HIT, specifically whether they had an EHR system and whether they could use it to gather the documentation needed for CMS measures. A recurring theme in this section of the survey related to problems, challenges, and frustrations with EHRs. Participants reported that they were unable to exchange information electronically with all departments or with other healthcare providers in the community, and that they do not use EHRs to report CMS measures.


Participants were generally able to answer the questions in this section of the survey but suggested revisions to clarify the intent of the survey questions. For example, the item that asks whether “the (organization’s) EHR is able to electronically exchange information with any of the following providers in your community” was revised to read “Are health providers in your community (i.e., ambulatory care physicians, nursing homes) able to access your hospital’s EHR or health information system to obtain key clinical data on patients?” The response options to this question allow participants to indicate whether community providers can access all key clinical data or only limited data. Based on feedback from reviewers, questions were added about the types of clinical data that community providers can access, whether the provider organization can access clinical information from other health providers in the community, and whether the organization uses electronic tools other than its EHR system for collecting and reporting on CMS measures.


Respondent Characteristics. Participants did not have difficulty answering the questions in this section of the survey. In an effort to reduce the length of the survey, low-priority questions were combined or dropped (e.g., the question about the participant’s level of education). A question asking about the position or title of other people who helped complete the survey was added.

Appendix A: Formative Interview Guide for Hospitals


Respondent Type

Organization Name:

Respondent Name:

Respondent Position:

Interviewer Name:

Interview Date:


INTRODUCTION AND PURPOSE OF THE INTERVIEW

Before we get started, I’d like to briefly review the purpose of this interview and the confidentiality provisions that were described in detail in the email we sent you.


  • As you know, the Centers for Medicare & Medicaid Services (CMS) uses a number of quality measures to assess the quality and efficiency of the care provided to Medicare beneficiaries. For example, CMS uses the Hospital Inpatient Quality Reporting (IQR) measure set to collect data from hospitals for multiple programs, including Hospital Compare and the hospital value-based purchasing program. Similarly, it asks hospitals to report on Outpatient Quality Reporting (OQR) measures. RAND has been asked by CMS and the Health Services Advisory Group (HSAG) to help assess the impact of using these measures on the organizations implementing them and the care they provide.


  • We’ve come to you to help us understand how hospitals have experienced the CMS quality measures. Your insights will help us develop a survey that we will conduct in the future with a large group of hospitals across the country.


  • We would like to ask you about the impact of these measures on the delivery of care, any unintended consequences that may have resulted, and barriers your hospital has encountered in quality reporting and making improvements on these measures, but first we’d like to review the confidentiality provisions for this interview.


CONSENT

  • All of your responses are confidential.

  • No one outside of the research project will have direct access to the information you provide. The evaluation team will only produce summary information from our collective set of interviews. You will not be identified by name or hospital affiliation.

  • You do not have to participate in the interview, and you can stop at any time for any reason.

  • You should feel free to decline to discuss any topic that we raise.


Do you have any questions? (Yes/No)


Do you agree to participate in the interview? (Yes/No)


As we mentioned in our email, we would like to tape the interview if that is all right with you.


Do you agree to being tape recorded? (Yes/No)


If yes: Great. Let’s get started. I’ll start the recording.


If no: That’s fine. We will take notes – and not tape the discussion. Let’s get started.


Interview…


We’d first like to ask you a couple of questions about your position and professional background.


Respondent Background

  1. We understand that you are the _________ [title/position] in ________ [hospital name]. Is that correct?


  2. What is your professional background? [If physician:] Are you a primary care physician or some other kind of specialist? [If other specialist:] What is your specialty?


  3. How long have you been working at ________ [hospital name]?


  4. How long have you been the _________ [title/position]?


CMS Quality Measures – General

As already mentioned, and as I’m sure you know, CMS asks hospitals to report data for a number of measures – often referred to as Inpatient Quality Reporting (IQR) measures and Outpatient Quality Reporting (OQR) measures. Data on these measures are made available to the public through the Hospital Compare website and are used in the hospital value-based purchasing program. [Examples include: pneumonia patients given initial antibiotic(s) within 6 hours after arrival, pressure ulcer stages III and IV, and central line-associated bloodstream infection.]


  1. What is your role with respect to reporting and improving performance on CMS quality measures here at _____ [hospital name]?


We’ve sent you the full list of measures under discussion, and we’d like to ask you to think about these measures—and how they’ve affected the quality and efficiency of care at _____ [hospital name]. Let’s start with innovations or changes in the way care is delivered.


Innovations in Delivery of Care

  2. In your experience, have the CMS measurement programs for hospital quality led to changes to improve the delivery of care at ______ [hospital name]?


  3. [If no:] Why is that? [Possible prompts: Improvement has not been needed? Lack of resources?]


  4. [If yes:] Let’s talk a little more about the changes in the delivery of care. What kinds of changes has _____ [hospital name] made to improve performance on the CMS quality measures? Would you give us a couple of examples?


  5. I’m going to mention five specific, possible drivers of improvement and ask you to discuss the importance of each as a driver of improvement in your experience. The possible drivers include (1) public reporting of quality scores, (2) the potential for financial incentives, (3) the threat of penalties, (4) receipt of feedback reports with quality and efficiency data, and (5) receipt of technical assistance related to quality improvement.


How important is public reporting of quality scores as a driver of improvement?


How important is the potential for financial incentives as an improvement driver?


How important is the threat of penalties as such a driver?


How important is the receipt of feedback reports with quality and efficiency data?


How important is the receipt of technical assistance?


  6. Which of these drivers – public reporting of quality scores, financial incentives, penalties, feedback reports, or technical assistance – would you say is most important? Which is least important?


  7. For the national survey we’re developing, we are considering a question that asks the respondent to rank the relative importance of each of these drivers in motivating improvement efforts (from most important to least important). Would you find this possible to do?


  8. Are there other drivers of improvement that are important in your hospital? [Possible prompts: Risk reduction? Corporate pressure or policy? Accreditation? Working to stay competitive?]


  9. Has your hospital made broad organizational changes to expand its ability to provide quality care and perform well on quality measures? Please give some examples. [Possible prompts: own internal incentive program, provider feedback reports, care coordination innovations, enhanced information technology, attempts to improve documentation of existing care] [If training is mentioned, probe if focused on quality measures.]


  10. [If respondent had difficulty understanding what we meant by “broad organizational change”:] I just used the term “organizational changes.” What does that make you think of? Would you suggest that we use a different term in the survey?


  11. Have individual clinicians made any changes in response to these measures? If yes, tell us what they’ve done.


  12. Given the investments you described, do you believe these have impacted your performance on the CMS quality measures? If so, in what ways? Please elaborate.


  13. Which efforts in particular have been associated with changes in performance over time?


Unintended Consequences

We’ll turn now to some questions on other possible effects the CMS measurement programs may have had. Again, let me assure you that your responses are strictly confidential.


  14. Have you or your organization seen any unintended consequences—either negative or positive consequences—resulting from CMS’ quality measurement, reporting, and value-based purchasing efforts? Please describe.


  15. If so, are they related to certain measures in particular? Which?


  16. We’ve heard some reports that improved performance on some measures has at times spilled over to generate improvements in other clinical areas that are not part of what is measured or financially incentivized by Medicare or other payers—resulting in improvements in quality across the board. Do you think this happens at _____ [hospital name]? Would you give us an example? [Example, if needed: For example, CMS measures you on heart attack, pneumonia, and heart failure care, but delivering better quality care in these areas might positively affect care for patients undergoing hip replacement.]


  17. On the other hand, hospitals might focus all their improvement efforts on areas of care where performance is being measured or financially incentivized and ignore or pay less attention to areas of care that are not measured. Do you think this happens? [If yes:] How does this happen? Does it happen with any specific measures in particular?


  18. [If not mentioned above:] Do you think people have modified their coding or reporting of the data to score better on quality measures? [If yes:] For any measures, in particular?


  19. Have you heard of hospitals avoiding sicker or more challenging patients when providing care in order to achieve higher scores on quality measures? [If yes:] Would you give us an example of the kind of scenario you’ve heard of? (You don’t need to mention any names.)


  20. We’ve also heard concerns that measurement programs may create a potential for overtreatment of patients—say, for example, if providers give antibiotics in the ED to persons with a low probability of pneumonia in order to ensure the rapid administration metric is met. Do you think this happens? With any specific measures, in particular? Do you have any examples?


  21. [If some unintended consequences have been mentioned:] Why do you think these unintended consequences have occurred? [Possible prompts: poor measure design, large financial incentives, difficult patients, other.]


  22. Are they related to certain measures in particular? If so, which?


Barriers to Implementation

We’d now like to talk about two types of barriers that might arise—first, barriers around the reporting of data and, second, barriers to improving performance on quality measures.


  23. Have you encountered any major barriers to reporting ____ [hospital name]’s performance on the IQR and OQR quality measures? Please describe. With any measures in particular? [Prompts: Inadequate IT capabilities, provider training, difficulty reporting scores, insufficient resources]


  24. Have you experienced any major barriers to improving ____ [hospital name]’s performance on CMS quality measures? Please describe. Any measures in particular? [Prompts: Difficulty identifying appropriate improvement strategies, provider training, insufficient resources, inadequate IT capabilities, staff turnover, lack of sufficient support or time from physicians]


Hospital Reporting Background

  25. Has _____ [hospital name] participated in any other quality measure reporting or pay for performance programs? Please specify. [Prompts: Private sector programs, Medicaid]


  26. [If so:] Have some of the various quality measure reporting or P4P programs had greater effect on the quality and efficiency of care at _____ [hospital name] than others? Which ones? Why do you think this is the case?


Identification of Survey Respondents

  27. As we mentioned at the beginning of the interview, we plan to conduct a large national survey of hospital providers on their experiences with CMS quality measures. The types of questions we would be asking are similar to the topics we’ve discussed today. In an organization such as yours, who would you say is the most appropriate person to direct the survey to? Would more than one person need to provide the information to fully complete a survey (i.e., bringing in expertise from different departments that may be involved)?


Lessons Learned

  28. What have been the most important lessons learned to date from implementing the CMS hospital IQR and OQR measures?


  29. Have these lessons led to any changes in the way things are done at _____ [hospital name]?


  30. Do you have any experiences or concerns around CMS measurement programs that you would like to raise that we haven’t discussed?


Additional Questions if Time Allows

Re: Unintended Consequences:


  31. [If no to Q14 – have not encountered any unintended consequences:] Have you had concerns that some negative consequences might occur? [If yes:] What concerns have you had?


  32. Have other clinicians or administrators at _____ [hospital name] raised concerns about possible negative consequences of these quality measures? Would you describe these concerns?


  33. Has _____ [hospital name] modified any reporting procedures in response to unintended consequences?


Re: Barriers to Implementation:


  34. What actions have been taken to address or reduce the barriers you mentioned around reporting data or improving performance on quality measures?


Re: Innovations in Delivery of Care:


  35. How well do you think your staff understands the CMS measure program and how it works?


  36. Would you tell us a little about how changes to improve the delivery of care are initiated and undertaken?


  37. Does _____ [hospital name] have a quality performance improvement committee? If so, what role does the committee play with respect to CMS quality measures?


  38. [If no:] Has a specific individual been designated to work on quality issues? If so, what position or individual has been designated?


  39. Has _____ [hospital name] hired an outside consultant to help improve clinical care or patient assessments? If so, what prompted this? Please give a brief description of these change efforts.


  40. Who makes sure that changes are implemented?


  41. Do changes to improve the quality of care usually address the work of one type of provider, say, nurses or physicians? Or, are they usually multidisciplinary efforts?


  42. Are they typically rolled out unit by unit or across the whole facility all at one time?


  43. Do patients or families mention your quality scores? In what situations?


  44. [If yes on Q12—they have seen some QI efforts reflected in their hospital’s performance on quality measures over time:] How do you let others know about this improvement?


  45. Have you participated in any way in the development and selection of quality measures (e.g., through your professional association or through providing public comments)? If so, please describe.




Appendix B: Formative Interview Guide for Nursing Homes


Respondent Type

Organization Name:

Respondent Name:

Respondent Position:

Interviewer Name:

Interview Date:


INTRODUCTION AND PURPOSE OF THE INTERVIEW

Before we get started, I’d like to briefly review the purpose of this interview and the confidentiality provisions that were described in detail in the email we sent you.


  • As you know, the Centers for Medicare & Medicaid Services (CMS) uses a number of quality measures to assess the quality and efficiency of the care provided to Medicare beneficiaries. For example, CMS creates and reports quality measures in Nursing Home Compare. RAND has been asked by CMS and the Health Services Advisory Group (HSAG) to help assess how quality measures affect organizations and the care they provide.


  • We’ve come to you to help us better understand how nursing homes have experienced the CMS quality measures. Your insights will help us develop a survey that we will conduct in the future with a large group of nursing home providers across the country.


  • We would like to ask you about the impact of these measures on the delivery of nursing home care, any unintended consequences that may have resulted, and barriers your nursing home has encountered in quality reporting and making improvements on these measures, but first we’d like to review the confidentiality provisions for this interview.


CONSENT

  • All of your responses are confidential.

  • No one outside of the research project will have direct access to the information you provide. The evaluation team will only produce summary information from our collective set of interviews. You will not be identified by name or nursing home affiliation.

  • You do not have to participate in the interview, and you can stop at any time for any reason.

  • You should feel free to decline to discuss any topic that we raise.


Do you have any questions? (Yes/No)


Do you agree to participate in the interview? (Yes/No)


As we mentioned in our email, we would like to tape the interview if that is all right with you.


Do you agree to being tape recorded? (Yes/No)


If yes: Great. Let’s get started. I’ll start the recording.


If no: That’s fine. We will take notes—and not tape the discussion. Let’s get started.


Interview…


We would first like to ask you a couple of questions about your position and professional background.


Respondent Background


  1. We understand that you are the _________ [title/position] in ________ [nursing home name]. Is that correct?


  2. What is your educational background?


  3. How long have you been working at ________ [nursing home name]?


  4. How long have you been the ____ [position] here?


  5. [If not already volunteered:] Did you work in any other nursing homes before ________ [nursing home name]? How long have you been working in nursing homes?


CMS Quality Measures – General

As you know, CMS requires nursing homes to report Minimum Data Set (MDS) data that are then used to create a number of quality measures. Data on these measures are made available to the public through the Nursing Home Compare website and are used to develop Star Ratings. Examples include percent of residents with pressure ulcers that are new or worsened (short stay) and percent of residents experiencing one or more falls with major injury (long stay).


  1. What is your role with respect to reporting and improving performance on CMS quality measures here at _____ [nursing home name]?



We’ve sent you the full list of measures under discussion, and we’d like to ask you to think generally about these measures—and how they’ve affected the quality and efficiency of care at _____ [nursing home name]. Let’s start with innovations or changes in the way care is delivered.


Innovations in Delivery of Care

  2. In your experience, has the CMS measurement program for nursing home quality led to changes to improve the delivery of care at ______ [nursing home name]?


  3. [If no:] Why is that? [Possible prompts: Improvement has not been needed? Lack of resources?]


  4. [If yes:] Let’s talk a little more about the changes in the delivery of care. What kinds of changes has _____ [nursing home name] made to improve performance on the CMS quality measures? Would you give us a couple of examples?


  5. In working to improve in this area or areas, did you monitor a particular nursing home measure? If so, which measure or measures?


  6. There are multiple aspects of the CMS quality measurement program that might motivate or drive nursing homes to undertake efforts to improve the delivery of care. I’m going to mention some specific, possible drivers of improvement and ask you to discuss the importance of each as a driver of improvement in your experience. The possible drivers include (1) public reporting of quality scores, (2) receipt of feedback reports with quality data, (3) receipt of technical assistance related to quality improvement from a CMS-contracted Quality Improvement Organization (QIO), and (4) regulatory compliance. While I know that most nursing homes have not yet been subject to CMS Pay-for-Performance or Value-Based Purchasing programs, I’d also like to ask you to discuss how important you expect (5) the potential for financial incentives and (6) the threat of penalties to be as drivers of improvement, when these programs do start up.


How important is public reporting of quality scores as a driver of improvement?


How important is the receipt of feedback reports with quality data as such a driver?


How important is the receipt of technical assistance from a CMS-contracted Quality Improvement Organization? Is technical assistance from other sources important? Which sources?


How important is regulatory compliance as a driver of improvement? [If important:] In what ways? Could you give us an example?


How important do you expect the potential for financial incentives to be in driving future improvement efforts in your nursing home? [If important:] Please elaborate.


How important do you expect the threat of penalties to be in driving further improvement efforts?


  7. Which of these possible drivers—public reporting of quality scores, feedback reports, QIO technical assistance, regulatory compliance, financial incentives, or penalties—would you say is most important or potentially most important? Which is least important? [Possible prompt: corporate assistance]


  8. For the national survey we’re developing, we are considering a question that asks the respondent to rank the relative importance of each of these drivers in motivating improvement efforts (from most important to least important). Would you find this possible to do?


  9. Are there other drivers of improvement that are important in your nursing home? [Possible prompts: risk reduction? corporate pressure or policy? accreditation? working to stay competitive?]


  10. Has your nursing home initiated major system changes to policy and/or processes to expand staff ability to provide quality care and perform well on quality measures? Please give some examples. [Possible prompts: own internal incentive program, provider feedback reports, care coordination innovations, enhanced information technology, attempts to improve documentation of existing care.] [If training is mentioned, ask if focused on quality measures.]


  11. [If respondent had difficulty understanding what we meant by “major system changes to policy and/or processes”:] I just used the phrase “system changes to policy and/or processes.” What does that make you think of? Would you suggest we use a different term in the survey?


  12. Have individual staff made any changes in response to these measures? If yes, tell us what they’ve done.


  13. Thinking back over the different changes we’ve talked about, do you believe these have impacted your facility’s performance on the CMS quality measures? If so, in what ways? Please elaborate.


  14. Which efforts in particular have been associated with changes in performance over time?


Unintended Consequences

We’ll turn now to some questions on other consequences of CMS measurement programs.


  15. Have you or your organization seen any unintended consequences—either negative or positive—resulting from quality reporting? Please describe.


  16. If so, are they related to certain measures in particular? Which?


We’ve heard concerns voiced about possible unintended consequences of the CMS measurement programs. I’m going to mention five that have been raised, and ask if you’ve experienced them in any way. They include overtreatment of patients to ensure that a metric is met; improvements in areas other than those captured by the quality measures; lack of improvements in areas not measured; coding modifications in order to score better; and avoidance of sicker patients in order to achieve higher scores. I will go through each of these in turn.


  17. We’ve heard concerns that measurement programs may create a potential for overtreatment of patients—say, for example, if the pain measure leads to overuse of scheduled narcotics in some residents. Do you think this happens? With any specific measures, in particular? Do you have any examples?


  18. We’ve heard some reports that improved performance on some measures has at times spilled over to generate improvements in other clinical areas that are not part of what is measured or financially incentivized by Medicare or other payers—resulting in quality improvement across the board. Do you think this happens at _____ [nursing home name]? Would you give us an example? [Example, if needed: A focus on decreased restraint use might lead to improved performance on mobility and ADL measures.]


  19. On the other hand, nursing homes might focus all their improvement efforts on areas of care where performance is being measured and ignore or pay less attention to areas of care that are not measured. Do you think this happens? [If yes:] How does this happen? Does it happen with any specific measures in particular?


  20. [If not mentioned above:] Do you think people have modified their coding or reporting of the data to score better on quality measures? [If yes:] For any specific measures in particular?


  21. Have you heard of nursing homes avoiding sicker or more challenging patients when providing care in order to achieve higher scores on quality measures? [If yes:] Would you give us an example of the kind of scenario you’ve heard of? (You don’t need to mention any names.)


  22. [If some unintended consequences have been mentioned:] Why do you think these unintended consequences have occurred? [Possible prompts: poor measure design, large financial incentives, difficult patients, other.]


  23. Are they related to certain measures in particular? If so, which?


Barriers to Implementation

We’d now like to talk about two types of barriers that might arise—first, barriers around the reporting of data and, second, barriers to improving performance on quality measures.


  24. Have you encountered any major barriers to reporting ____ [nursing home name]’s performance on the Nursing Home Quality Initiative quality measures? Please describe. [Prompts: Inadequate IT capabilities, provider training, difficulty reporting MDS data, the measure specification, insufficient resources]


  25. With any measures or MDS elements in particular?


  26. Have you experienced any major barriers to improving ____ [nursing home name]’s performance on CMS quality measures? Please describe. [Prompts: Difficulty identifying appropriate improvement strategies, difficulty identifying the appropriate process measures that lead to the outcome measures reported, provider training, insufficient resources, inadequate IT capabilities, staff turnover, lack of sufficient support or time from physicians or other staff.]


  27. With any measures in particular?


Nursing Home Reporting Background

  28. Has ________ [nursing home name] taken part in a Pay for Performance (P4P) program or demonstration?


  29. [If yes:] Is it ongoing? Who administers (or administered) it? What measures does it focus on?


  30. Did ________ [nursing home name] participate in Medicare’s prototype Quality Assurance & Performance Improvement (QAPI) demonstration?


  31. Has _____ [nursing home name] participated in any other quality measure reporting programs? Please specify.


  32. [If so:] Have some of the various quality measure reporting programs had greater effect on the quality and efficiency of care at _____ [nursing home name] than others? Which ones? Why do you think this is the case?


  33. In preparing for the upcoming QAPI initiative, has your nursing home identified which improvement intervention it will undertake?


  34. Can you describe what changes you have made or are planning to make as a result of QAPI?


  35. What, if any, CMS nursing home measures are you planning to use or monitor in your QAPI program?


Identification of Survey Respondents

  36. As we mentioned at the beginning of the interview, we plan to conduct a large national survey of nursing home providers on their experiences with CMS quality measures. In an organization such as yours, who would you say is the most appropriate person to direct it to? [Prompt: nursing home administrator, director of nursing, medical director?] Would more than one person need to provide the information to fully complete a survey?


Lessons Learned

  37. What have been the most important lessons learned to date from participating in the CMS Nursing Home Quality Initiative or Nursing Home Compare measures program?


  38. Have these lessons led to any changes in the way things are done at _____ [nursing home name]?


  39. Do you have any experiences or concerns around CMS measurement programs that we haven't discussed that you would like to raise?


  40. Based on your experience to date using CMS nursing home measures, what changes to the measures or the reporting program would you recommend? Any changes you’d really like to see?




Additional Questions if Time Allows

Re: Barriers to Implementation:


  41. What actions have been taken to address or reduce the barriers you mentioned around reporting data or improving performance on quality measures?


Re: Unintended Consequences:


  42. [If no to Q15—have not encountered any negative or positive consequences:] Have you had concerns that some negative consequences might occur? [If yes:] What concerns have you had?


  43. Have other administrators or staff at _____ [nursing home name] raised concerns about possible negative consequences of these quality measures? Would you describe these concerns?


  44. Has _____ [nursing home name] modified any reporting procedures in response to unintended consequences?


Re: Innovations in Delivery of Care:


  45. How well do you think your staff understands the CMS measure program and how it works?


  46. Would you tell us a little about how these changes to improve the delivery of care are initiated and undertaken?


  47. Does _____ [nursing home name] have a quality performance improvement committee? If so, what role does the committee play with respect to CMS quality measures?


  48. [If no:] Has a specific individual been designated to work on quality issues? If so, what position or individual has been designated?


  49. Has _____ [nursing home name] hired an outside consultant to help improve clinical care or resident assessments? If so, what prompted this? Please give a brief description of these change efforts.


  50. Who makes sure that changes are implemented?


  51. Has _____ [name of nursing home] initiated any changes to improve care transitions? To reduce psychotropic med use? If so, please describe. What prompted this focus?


  52. Do changes to improve the quality of care usually address the work of one type of provider, say nurses or certified nursing assistants? Or, are they usually interdisciplinary efforts?


  53. Are they typically rolled out unit by unit or across the whole facility all at one time?


  54. In the past year, have improvement efforts at _____ [nursing home name] focused mainly on short-stay residents, long-term residents, or all residents?


  55. Do residents or families mention your quality scores? In what situations?


  56. [If yes on Q13—they have seen some QI efforts reflected in their nursing home’s performance on quality measures over time:] How do you let others know about this improvement?


  57. Have you participated in any way in the development and selection of quality measures (e.g., through your professional association or through providing public comments)? If so, please describe.




Appendix C: Cognitive Interview Guide for Hospitals/Nursing Homes


Respondent Type

Organization Name:

Respondent Name:

Respondent Position:

Interviewer Name:

Interview Date:


INTRODUCTION AND PURPOSE OF THE INTERVIEW

Thank you for agreeing to participate in this interview today.


Before we get started, I’d like to briefly review the purpose of this interview and confidentiality.


  • Every three years, the Medicare program is required by Congress to conduct an assessment of the impact of Medicare’s use of performance measures. To understand the impact of these programs on providers, the Medicare program is planning to conduct a national survey of providers. RAND has been tasked by CMS with designing a survey for hospitals, nursing homes, home health agencies, and ambulatory care settings that can be used to collect information on how CMS performance measures affect [INSERT PROVIDER TYPES] and the care they provide, issues or challenges associated with reporting of CMS measures, factors that drive investments in performance improvement, and barriers and facilitators to improvement on performance measures. The survey also asks about negative effects of the use of CMS measures on quality of care in your organization. We are conducting a small number of interviews with [ORGANIZATION TYPES] to assess the draft survey.


  • Recently we sent you a survey asking about your organization’s experiences with CMS quality measures. [IF COMPLETED THE SURVEY: Thank you for taking the time to fill out that survey]. Today I am going to be asking you questions about the survey to make sure that the questions on the survey are clear and capture your organization’s experience in reporting CMS measures. We are developing this survey to use with a large group of [ORGANIZATION TYPES] across the country in the future. Your responses to the survey and the feedback you provide will be used to refine and improve the survey.


  • The interview today should take about an hour. During the interview, we will be taking notes and, with your permission, would also like to record the interview.


  • To thank you for taking the time to participate in the interview, we will be sending you a [check/gift card] for $300.


  • We would like to ask you some specific follow-up questions throughout the survey, but first we’d like to review the confidentiality provisions for this interview.


CONSENT

  • All of your responses are confidential.

  • No one outside of the research project will have direct access to the information you provide. The evaluation team will only produce summary information from our collective set of interviews.

  • You will not be identified by name or organizational affiliation in the summary report produced from these interviews. We will also not identify by name the organizations that are represented in the interviews.

  • You do not have to participate in the interview, and you can stop at any time for any reason.

  • Feel free to decline to discuss any topic that I raise in the course of the interview.

  • If there is a particular question you don’t want to answer, just let me know and we’ll skip to the next one.

  • After the study is completed, we will destroy the interview notes and the recording of the interview.

  • If you have any questions or concerns about this project, please contact Beverly Weidmer, Senior Survey Director, at 310-393-0411, ext. 6788, or via email at Beverly_Weidmer@rand.org.

  • If you have any questions about your rights as a research subject, please contact the RAND Human Subjects Protection Committee at 310-393-0411, ext. 7173, and ask to speak to Jim Tebow.


Do you have any questions? (Yes/No)


Do you agree to participate in the interview? (Yes/No)


As we mentioned in our email, we would like to tape the interview if that is all right with you.


Do you agree to be tape recorded? (Yes/No)


If yes: Great. Let’s get started. I’ll start the recording.


If no: That’s fine. We will take notes and not tape the discussion. Let’s get started.



COGNITIVE INTERVIEW PROBES

Question 1

  • Tell me more about your response to question 1.

  • IF IMPROVED IN SOME BUT NOT ALL: Which measures did you improve on? Are there any measures where your performance declined in the last 12 months?


Question 2

  • Tell me why you answered [response].

  • Why or why not?


Question 3

  • Tell me why you answered [response].

  • What has made it difficult to improve in [X]?


Question 4

  • Did you have any difficulty answering this question? IF YES: Tell me about that.

  • Are there any items in this question that you don’t think apply to your nursing home? IF YES: Which ones?

  • Tell me more about [type of change] that has improved your performance.


Questions 5 and 6

  • Tell me why you answered [response].

  • PROBE ON ALL RESPONSES CHECKED IN QUESTION 6: Tell me more about that.


Question 7

  • Did you have any difficulty answering this question? IF YES: Tell me about that.

  • How did you decide how to rank order the four factors?

  • Are there any other factors that have influenced your decision to invest in trying to improve your nursing home’s performance on CMS measures?


Question 8

  • Did you have any difficulty answering this question? IF YES: Tell me about that.

  • Which of the factors listed in this question has been the most important in improving your performance on CMS performance measures?

  • Which has been the least important?

  • Is there any one factor from this list that has actually hindered or impeded your ability to improve your performance?


Questions 9-10

  • Tell me more about the barriers that have impeded your performance improvement.

  • Is there any one factor that has negatively affected your performance the most?


Question 11

  • Tell me more about this.

  • What other clinical areas have been affected by the CMS performance measures?

  • How?


Question 12

  • Tell me more about that. What undesired effects?


Question 13

  • Tell me more about your responses to this question (go through each sub item in the question).

  • Do you have any concerns about answering this question?

  • Do you have any concerns about answering questions on negative effects of CMS performance measures such as those mentioned in items b, d, and f?

  • If you got this survey in the mail, would you answer this type of question?

  • Do you think that other nursing homes are likely to report in the survey that some of these negative effects are happening within their organization?


Question 14

  • Probe on any “no” response to each sub item: Tell me more about that.


Questions 16-17

  • Probe on any sub-item: Tell me more about that.


Questions 18-21

  • Tell me more about this.

  • Tell me about your board’s engagement or interest in the CMS performance measures.


Questions 22-23

  • Did you have any difficulty answering these questions? IF YES: Tell me about that.

  • What were you thinking about as you answered this question?

  • How did you pick a number?


Question 30

  • Tell me about the ACOs this nursing home participates in.


Question 31

  • If yes: What other payment models?


Question 32

  • Tell me about the other quality reporting programs this nursing home participates in.


Questions 35-38

  • Tell me more about your EHR.

  • GO THROUGH SUB ITEMS IN Q. 37: Tell me more about this.

  • Question 38: Would you say your EHR helps or hinders reporting of quality measures?


SURVEY PROCESS QUESTIONS

1. Are there any other issues related to the implementation of the CMS Quality Measures that were not covered in this survey? If yes, briefly describe them below.


1 Yes

2 No If “No,” go to question 3.



2. Other issues:





3. How familiar are you with the CMS Performance Measures?


1 Very familiar

2 Familiar

3 Not very familiar

4 Not at all familiar


4. How familiar are you with the steps your organization has taken to implement the CMS Performance Measures?


1 Very familiar

2 Familiar

3 Not very familiar

4 Not at all familiar


5. How familiar are you with the impact (positive or negative) the CMS Performance Measures have had on the quality of care your organization delivers?


1 Very familiar

2 Familiar

3 Not very familiar

4 Not at all familiar


6. After completing the survey, do you feel that you are the most appropriate person to complete the survey?


1 Yes If “Yes,” go to question 8.

2 No


7. IF NO: Who should the survey be sent to instead? (You do not need to provide a name, but rather a job description or job title).





8. Were you able to complete the survey entirely on your own or did you have to consult others within your organization?


1 Completed the survey on my own If so, go to question 10.

2 Completed the survey with others within my organization


9. If Others:

Who did you have to consult? (Please provide the job title or job description of the people you consulted, as well as the department they work in).





10. Would you prefer to complete this survey by mail or over the Web?


1 Mail

2 Web

3 Either one is fine



11. Were any of the questions in the survey unclear or confusing?


1 Yes

2 No If “No,” go to question 13.



12. IF YES: Which ones?







13. Were any of the questions in the survey difficult to answer?


1 Yes

2 No


14. IF YES: Please tell us which ones and briefly describe why.






15. How long did it take you to complete the survey? (Your best estimate is fine.)



These are all the questions that I have for you. Thank you for completing the survey and for allowing me to talk to you about the survey. To thank you for your time, we will send you a check for $300. You should get the check within the next 2 weeks. If you have any other comments or any questions or concerns about this study, please contact Beverly Weidmer by phone at 310-393-0411, ext. 6788, or via email at Beverly_Weidmer@rand.org.


INTERVIEWER: VERIFY THE NAME AND ADDRESS OF THE PERSON WHO WILL RECEIVE THE CHECK.

Citations

1. Damberg, C.L., et al., Measuring Success in Health Care Value-Based Purchasing Programs: Summary and Recommendations, 2014, RAND Corporation: Santa Monica, CA.

2. Ivers, N., et al., Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev, 2012. 6: p. CD000259.

3. Jones, S.S., et al., Health information technology: an updated systematic review with a focus on meaningful use. Annals of Internal Medicine, 2014. 160(1): p. 48-54.

4. Totten, A.M., et al., Closing the quality gap: revisiting the state of the science (vol. 5: public reporting as a quality improvement strategy). Evid Rep Technol Assess (Full Rep), 2012(208.5): p. 1-645.

5. AHRQ, AHRQ Conference on Health Care Data Collection and Reporting: collecting and reporting data for performance measurement: moving toward alignment., in AHRQ Conference on Health Care Data Collection and Reporting: Report of Proceedings. 2007, U.S. Agency for HealthCare Research and Quality.

6. Craig, T.J., J.B. Perlin, and B.B. Fleming, Self-reported performance improvement strategies of highly successful Veterans Health Administration facilities. Am J Med Qual, 2007. 22(6): p. 438-44.

7. Stone, E.G., et al., Interventions that increase use of adult immunization and cancer screening services: a meta-analysis. Ann Intern Med, 2002. 136(9): p. 641-51.

8. Byrnes, J. and J. Fifer, The people structure of quality improvement. Healthc Financ Manage, 2010. 64(3): p. 64-70.

9. Castle, N.G. and J. Engberg, The influence of staffing characteristics on quality of care in nursing homes. Health Serv Res, 2007. 42(5): p. 1822-47.

10. Horn, S.D., et al., Beyond CMS quality measure adjustments: identifying key resident and nursing home facility factors associated with quality measures. J Am Med Dir Assoc, 2010. 11(7): p. 500-5.

11. Castle, N.G., J. Engberg, and A. Men, Nursing home staff turnover: impact on nursing home compare quality measures. Gerontologist, 2007. 47(5): p. 650-61.

12. Konrad, T.R., et al., Workplace Interventions, Turnover, and Quality of Care. Report, North Carolina Institute on Aging, Editor. 2009, University of North Carolina: Chapel Hill, NC.

13. Bravata, D.M., et al., in Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 5: Asthma Care). 2007: Rockville (MD).

14. Shojania, K.G., et al., in Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 2: Diabetes Care). 2004: Rockville (MD).

15. Walsh, J., et al., in Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 3: Hypertension Care). 2005: Rockville (MD).

16. Kawamoto, K., et al., Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ, 2005. 330(7494): p. 765.

17. Petersen, L.A., et al., Does pay-for-performance improve the quality of health care? Ann Intern Med, 2006. 145(4): p. 265-72.

18. Flodgren, G., et al., An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane Database Syst Rev, 2011(7): p. CD009255.

19. Fung, C.H., et al., Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med, 2008. 148(2): p. 111-23.

20. Casalino, L.P., et al., General internists' views on pay-for-performance and public reporting of quality scores: a national survey. Health Aff (Millwood), 2007. 26(2): p. 492-9.

21. Pesis-Katz, I., et al., Making difficult decisions: the role of quality of care in choosing a nursing home. Am J Public Health, 2013. 103(5): p. e31-7.

22. Grabowski, D.C. and R.J. Town, Does information matter? Competition, quality, and the impact of nursing home report cards. Health Serv Res, 2011. 46(6pt1): p. 1698-719.

23. Mukamel, D.B., et al., Publication of Quality Report Cards and Trends in Reported Quality Measures in Nursing Homes. Health Services Research, 2008. 43(4): p. 1244-1262.

24. Grimshaw, J.M., et al., Changing provider behavior: an overview of systematic reviews of interventions. Med Care, 2001. 39(8 Suppl 2): p. II2-45.

25. Unverzagt, S., et al., Strategies for guideline implementation in primary care focusing on patients with cardiovascular disease: a systematic review. Fam Pract, 2013.

26. Adams, D.A., R.R. Nelson, and P.A. Todd, Perceived usefulness, ease of use, and usage of information technology: a replication. MIS Quarterly, 1992: p. 227-247.

27. Rollow, W., et al., Assessment of the Medicare quality improvement organization program. Ann Intern Med, 2006. 145(5): p. 342-53.

28. Snyder, C. and G. Anderson, Do quality improvement organizations improve the quality of hospital care for Medicare beneficiaries? JAMA, 2005. 293(23): p. 2900-7.

29. Bradley, E.H., et al., From adversary to partner: have quality improvement organizations made the transition? Health Serv Res, 2005. 40(2): p. 459-76.

30. Addington, D., et al., Facilitators and barriers to implementing quality measurement in primary mental health care: Systematic review. Canadian Family Physician, 2010. 56(12): p. 1322-1331.

31. Sloane, P.D., et al., How eight primary care practices initiated and maintained quality monitoring and reporting. J Am Board Fam Med, 2011. 24(4): p. 360-9.

32. Wilkinson, E.K., et al., Reactions to the use of evidence-based performance indicators in primary care: a qualitative study. Qual Health Care, 2000. 9(3): p. 166-74.

33. McColl, A., et al., Clinical governance in primary care groups: the feasibility of deriving evidence-based performance indicators. Qual Health Care, 2000. 9(2): p. 90-7.

34. IOM, Clinical Data as the Basic Staple of Health Learning: Ideas for Action, in Clinical Data as the Basic Staple of Health Learning: Creating and Protecting a Public Good: Workshop Summary, Institute of Medicine Roundtable on Value & Science-Driven Health Care, Editor. 2010, National Academies Press: Washington, DC. p. 249-268.

35. Shaller, D., Implementing and using quality measures for children's health care: perspectives on the state of the practice. Pediatrics, 2004. 113(1 Pt 2): p. 217-27.

36. Curtright, J.W., S.C. Stolp-Smith, and E.S. Edell, Strategic performance management: development of a performance measurement system at the Mayo Clinic. J Healthc Manag, 2000. 45(1): p. 58-68.

37. Greiver, M., et al., Measuring data reliability for preventive services in electronic medical records. BMC Health Services Research, 2012. 12(1): p. 116.

38. Filipova, A.A., Electronic health records use and barriers and benefits to use in skilled nursing facilities. Comput Inform Nurs, 2013. 31(7): p. 305-18.

39. de Vos, M., et al., Using quality indicators to improve hospital care: a review of the literature. Int J Qual Health Care, 2009. 21(2): p. 119-29.

40. Halladay, J.R., et al., Cost to primary care practices of responding to payer requests for quality and performance data. Ann Fam Med, 2009. 7(6): p. 495-503.

41. de Vos, M.L., et al., Implementing quality indicators in intensive care units: exploring barriers to and facilitators of behaviour change. Implement Sci, 2010. 5: p. 52.

42. Blozik, E., et al., Simultaneous development of guidelines and quality indicators–how do guideline groups act? A worldwide survey. International Journal of Health Care Quality Assurance, 2012. 25(8): p. 712-729.

43. Hashjin, A.A., et al., Using quality measures for quality improvement: the perspective of hospital staff. PLoS One, 2014. 9(1): p. e86014.

44. Federman, A.D. and S. Keyhani, Physicians' participation in the Physicians' Quality Reporting Initiative and their perceptions of its impact on quality of care. Health Policy, 2011. 102(2-3): p. 229-34.




1 QAPI is a data-driven, proactive approach to improving the quality of life, care, and services in nursing homes. http://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/QAPI/qapidefinition.html
