HRSA Responses to Questions on
OAT Telehealth Outcome Measures Clearance Package
General question: will OAT be validating any of the self-reported data?
Yes, prior reporting submissions are compared to the current reporting submission to detect discrepancies or errors.
[OMB] This response sounds like self-reported data will be compared to self-reported data. This is not “validation” in the typical sense. Will self-reported data, in fact, be validated, i.e., will HRSA assess the extent to which self-reported data are accurate?
[HRSA] OAT staff validate data through reviews of grantee submissions, meetings with grantees, and site visits, when applicable. In addition, HRSA’s Office of Performance Review (OPR) independently reviews grantee organizations on a periodic basis. As our grantees are targeted for review, we ask OPR to review certain performance measures to independently assess performance.
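For illustration only, a minimal sketch of the cross-submission comparison described above, flagging large swings between reporting periods for manual follow-up. The measure names and the 50% threshold are hypothetical and are not OAT’s actual review procedure:

```python
def flag_discrepancies(prior: dict, current: dict, threshold: float = 0.5) -> list:
    """Return (measure, old, new) tuples whose value changed by more than `threshold`."""
    flags = []
    for key, old in prior.items():
        new = current.get(key, 0)
        if old and abs(new - old) / old > threshold:
            flags.append((key, old, new))
    return flags

# Hypothetical submissions from two consecutive reporting periods
prior = {"encounters_IN": 120, "encounters_PP": 80, "active_sites": 12}
current = {"encounters_IN": 125, "encounters_PP": 20, "active_sites": 12}

print(flag_discrepancies(prior, current))
# [('encounters_PP', 80, 20)]  -- a 75% drop, worth querying the grantee
```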
Page 2 of instructions: please define for the respondent what “active site” means.
Active sites are sites that are providing services during the current reporting period.
[OMB] Please explain this in the instructions/forms for the respondents.
[HRSA] The user handbook is currently being updated to include these additional instructions.
Page 3 (and elsewhere): in a number of places, these instructions say something to the effect of “To complete this form, you will need to have been <doing x>.” What are respondents supposed to do if they haven’t been doing x?
The following text will be added where appropriate:
If x does not apply, please do not complete the form.
Page 4: Please clarify for the respondent what the difference is between “interactive/real-time encounters” and “patient-present encounters.”
Page 4: Please clarify for respondent how it is possible to have an interactive encounter if the patient is not present.
Definitions - Encounter Types (a link available on each form provides these instructions):
• Interactive/Real-Time Encounters (IN): Encounters conducted in an interactive (real-time) video-conferencing format.
• Patient-Present Encounters (PP): Interactive encounters where the patient is present during the consultation.
• Patient-Not-Present Encounters (NP): Interactive encounters where the patient is not present during the consultation.
In telemedicine, interactive encounters can occur between a patient and a distant clinician, or between two health care providers discussing a patient, consulting about medication, etc. Both are interactive in that the encounter occurs in real time, but only one type involves the patient being present.
[OMB] This response sounds like all 3 types of encounters are “interactive.” If the key difference between IN, PP, and NP is that IN is conducted by video-conference, wouldn’t it be more clear to call this something like “video-conference encounters”?
[HRSA] The terminology used throughout the user handbook is recognized throughout the field of telemedicine. All three encounter types are interactive, with the latter two definitions being refinements of interactive encounters. The key difference is whether the patient is present or not. All occur in real time. Changing the definitions could create confusion.
Page 13: why are the outcomes listed here the only outcomes examined for the chronic diseases except diabetes? For example, why not collect cholesterol level data for people with CHF?
We intend to expand the list of clinical outcome measures to other conditions, such as CHF, but we are starting with diabetes and HbA1c, based on discussions with the grantees and our evaluation advisors. The A1c measure is deemed the gold standard for diabetes, and there is a consensus that any program that claims to provide diabetes management services should be able to measure or obtain A1c values for its patients/clients. Thus, it is an ideal performance measure to begin with.
Page 15: why is there a whole section for just dermatology?
Dermatology is one area where telemedicine is very promising, but the data have not been well documented, especially for store-and-forward applications. We hope to gain greater insight into this very promising telemedicine service through consistent and reliable information from a variety of programs.
Page 16: in addition to collecting data about A1c ≤ 7.0%, how about collecting information about % improvement (e.g., a 1% drop in A1c)? Achieving A1c ≤ 7.0% is a very high standard.
This is a high standard, but to date it is the clinical “gold standard” of diabetes control. Based on the OMB comments, however, we are proposing to revise the measure and classify patients into three categories: in good control, at risk, and out of control. Thus, we will begin to focus on the following three measures relating to A1c: (1) patients with lab values of 7% or less (good control), (2) patients with lab values above 7% but below 9% (at risk), and (3) patients with values of 9% or more (out of control). Based on these data, we will determine whether it is feasible to track patients reliably enough to determine the % improvement figure. At this point, our concern is to verify that we can obtain reliable point-in-time data for this measure.
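To make the proposed banding concrete, here is a minimal sketch of the three-category classification. The function name and sample lab values are hypothetical and are not part of the reporting forms:

```python
from collections import Counter

def a1c_band(a1c_percent: float) -> str:
    """Classify a point-in-time A1c lab value into the three proposed bands."""
    if a1c_percent <= 7.0:
        return "good control"    # (1) 7% or less
    elif a1c_percent < 9.0:
        return "at risk"         # (2) above 7% but below 9%
    else:
        return "out of control"  # (3) 9% or more

# Hypothetical lab values for illustration only
lab_values = [6.4, 7.8, 9.2, 6.9, 8.5]
print(Counter(a1c_band(v) for v in lab_values))
# Counter({'good control': 2, 'at risk': 2, 'out of control': 1})
```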
[OMB] How similar are the communities who use this system? (e.g. are they all/mostly rural, are they all/mostly underserved areas, are they all/mostly ethnic communities)
If the communities are not similar, how will HRSA ensure that the public reporting of these outcome results will not lead to cream-skimming? Unless the results are risk-adjusted, these outcomes will measure not only the quality of care provided to patients but will also be “contaminated” by other health determinants like income, race, and education level.
[HRSA] Under the Telehealth Network Grant Program (TNGP), the communities served are all rural. TNGPs identify sites in health professional shortage areas, and the vast majority are underserved.
[OMB] What about the ethnic mix of areas served by TNGP grants? Are they similar?
To the extent that there is variation across TNGP grant areas (i.e., some areas are not “underserved,” some rural areas will be mostly White while others will have a large Hispanic migrant worker community, etc.), how will HRSA use this information in interpreting outcome data? Will there be some stratification of outcome data by SES/race/other determinants of risk?
Also, please clarify how HRSA will use the information about clinical outcomes. Will HRSA publicly report these results on a grantee by grantee basis in a way that is similar to provider profiling (e.g. 10% of diabetics at grantee X had A1c < 7.0%, while 90% of diabetics at grantee Y had A1c < 7.0%)?
[HRSA] We will not publish data by individual grantee, but will provide each grantee with a comparison of its profile to performance across all grantees. Findings are reported across all grantees in the yearly OAT GPRA Report, which will be available to the public on the HRSA Website. We recognize that different populations may have different profiles, but we are looking to provide each grantee with a measure of its performance against the standard over time. We are looking for improvement over time, while recognizing that we will be dealing with different populations and population changes. However, we do not have a “Black” standard or an “Asian” standard; we have a standard that all should aim to achieve. This is not a research study but a standard program performance measurement tool. If we note serious anomalies in the data, we will go back to the grantees and seek clarification, which may or may not require the grantee to examine changes in racial or other patient population characteristics that might explain the anomalies.
[OMB] Will HRSA be financially rewarding or penalizing grantees that report better results (e.g., more grant funding for grantee Y than grantee X)?
[HRSA] No, grantees will not be penalized for the results. OAT wants to maintain the trust of each grantee and lessen the possibility of receiving false data from grantees who might falsify data or cherry-pick patients to avoid a financial penalty. Instead, OAT analyzes the data to evaluate program strengths and weaknesses, allowing OAT to structure personalized technical assistance for the grantee to improve outcomes and services to the communities.
[OMB] What will happen to grantees who report that the majority of their patients have A1c > 9.0%? Is there a threshold at which such penalties will be incurred? What is that threshold: is it > 9.0% or > 7.0%?
[HRSA] Again, no penalties are placed upon the grantee. Instead, OAT discusses the findings with the program to determine why control in its population has not improved. We are seeking to reduce the number of patients above 9% and increase those with an A1c below 7%. We do not review individual patient information; instead, we look at the program’s service. Specifically, we need to account for factors that adversely affect the outcomes. There are many possible reasons why improvements in this performance measure are not achieved. OAT reviews the possibilities with the program and creates action plans based on various scenarios.
Page 17: are patients who aren’t tested for A1c at all counted among those who have A1c > 7.0%?
No, this measure relates to patients who are in a diabetes control program and had their A1c measured.
[OMB] Doesn’t this create an incentive for providers NOT to test patients who they know are in poor control, so that they can report only on patients with good A1c results? It would seem to make sense to collect process measures for A1c (e.g., frequency of testing) alongside the outcome measure (% A1c) for this reason. The ADA currently recommends that type 2 diabetics get tested at least twice a year and type 1 diabetics four times a year (in both cases, testing can be less frequent if the patient’s A1c is sufficiently low).
[HRSA] If a program is funded to provide a diabetes service under the grant, it is expected to test all patients in that service. A reasonable validity check is to collect data on the frequency of testing, and we are exploring doing so with our contractor.
[OMB] OMB would strongly recommend collecting process data (e.g. frequency of testing) alongside outcome data (e.g. A1c test result).
[HRSA] We concur with this recommendation and will make adjustments in the data collection.
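As a sketch of how process data (testing frequency) might be paired with outcome data (the A1c result), assuming a hypothetical record layout that is not the actual OAT reporting format:

```python
from collections import defaultdict

# Hypothetical record layout: (patient_id, diabetes_type, test_date, a1c_percent).
# Neither the field names nor the values come from the OAT reporting forms.
# Records are assumed to be listed in date order.
tests = [
    ("p1", 2, "2007-03-01", 6.8), ("p1", 2, "2007-09-15", 7.2),
    ("p2", 1, "2007-06-10", 9.4),   # type 1 patient tested only once
    ("p3", 2, "2007-02-20", 8.1),   # type 2 patient tested only once
]

ADA_MIN_TESTS_PER_YEAR = {1: 4, 2: 2}  # ADA minimums cited in the OMB comment

per_patient = defaultdict(list)
for pid, dtype, _date, value in tests:
    per_patient[(pid, dtype)].append(value)

for (pid, dtype), values in per_patient.items():
    meets_process = len(values) >= ADA_MIN_TESTS_PER_YEAR[dtype]  # process measure
    latest_a1c = values[-1]                                       # outcome measure
    print(pid, "tests:", len(values), "meets ADA frequency:", meets_process,
          "latest A1c:", latest_a1c)
```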