Evaluation of Venous Thromboembolism Prevention Practices in U.S. Hospitals
OMB/agency number assigned (if applicable)
Supporting Statement B-New
Nimia L. Reyes, MD, MPH
Medical Officer
Centers for Disease Control and Prevention (CDC)
National Center on Birth Defects and Developmental Disabilities (NCBDDD)
Division of Blood Disorders
Phone: (404) 498-6733
Fax: (404) 498-6798
Email: NReyes@cdc.gov
September 17, 2020
Table of Contents
B.1. Respondent Universe and Sampling Methods
B.2. Procedures for the Collection of Information
B.3. Methods to Maximize Response Rates and Deal with Nonresponse
B.4. Tests of Procedures or Methods to be Undertaken
B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
B. Collections of Information Employing Statistical Methods
B.1. Respondent Universe and Sampling Methods

The respondent universe (the universe of eligible hospitals) comprises all adult general medical and surgical hospitals in the United States and the District of Columbia, excluding hospitals located in U.S. territories and overseas (n=4605). A representative sample of eligible hospitals will be identified using the American Hospital Association (AHA) database as the sampling frame.
Sample size estimates: Using the most conservative approach to estimating the sample size (an anticipated proportion of 0.50, a 95% confidence level, and an absolute precision of 5 percentage points), the estimated desired total number of respondents is 384.
Based on an anticipated 30% response rate, one would need to invite a total of 1280 hospitals (384 divided by 0.30) in order to achieve the desired sample size of 384 respondents. However, if the anticipated response rate is lower (e.g., 25% due to continued impact of COVID-19 on hospital participation rate), the total number of invited hospitals increases to 1536. If the anticipated response rate is higher (e.g., 50%), the total number of invited hospitals decreases to 768.
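The arithmetic above can be reproduced with a short sketch (illustrative Python; the z value of 1.96 and the rounding convention are assumptions, since the text does not state exactly how the figure of 384 was computed):

```python
import math

Z95 = 1.96  # approximate z-score for a 95% confidence level (assumed)

def sample_size(p=0.50, d=0.05, z=Z95):
    # n = z^2 * p * (1 - p) / d^2, the normal-approximation formula
    # for estimating a proportion with absolute precision d
    return z * z * p * (1 - p) / (d * d)

def invitations(n_target, response_rate):
    # Inflate the respondent target by the anticipated response rate
    return math.ceil(n_target / response_rate)

print(round(sample_size()))    # 384 desired respondents
print(invitations(384, 0.30))  # 1280 invitations at a 30% response rate
print(invitations(384, 0.25))  # 1536 at 25%
print(invitations(384, 0.50))  # 768 at 50%
```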
In order to achieve the same level of precision in each size category (an equal number of respondents per stratum, assuming a 30% response rate), the total number of invited hospitals (1280) is divided by 3. This yields 427 hospitals per stratum, which was rounded up to 430 for sample selection, as shown in Table 1.
Method used to select the stratified random sample: The respondent universe (all adult general medical and surgical hospitals) and all relevant characteristics (e.g., location, ownership, teaching status) were pulled from the AHA database. A stratified sampling method was used to obtain the sample, to ensure that it best represents the respondent universe and that all subgroups of interest are adequately accounted for.
The stratified random sample was pulled using the following process:
In Excel, a random number (using the Rand function) was assigned to each organization in the entire respondent universe.
The respondent universe was then divided into the three strata (small, medium, and large).
For each stratum, the records were sorted by the assigned random number in ascending order.
The first 430 hospitals from each stratum were included in the sample. A description of the hospitals included in the final sample is in Table 1 below.
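The selection steps above, carried out in Excel, can be mirrored in an illustrative Python sketch (the record layout and the "size" field are hypothetical stand-ins for the AHA data):

```python
import random

def stratified_sample(hospitals, stratum_of, n_per_stratum, seed=None):
    # Mirror the Excel process: assign a random number to every record,
    # sort each stratum by that number, and keep the first n records.
    rng = random.Random(seed)
    strata = {}
    for h in hospitals:
        strata.setdefault(stratum_of(h), []).append((rng.random(), h))
    sample = []
    for members in strata.values():
        members.sort(key=lambda pair: pair[0])  # ascending random order
        sample.extend(h for _, h in members[:n_per_stratum])
    return sample

# Hypothetical universe matching the stratum counts in Table 1
universe = [{"id": i, "size": s} for i, s in enumerate(
    ["small"] * 2331 + ["medium"] * 1802 + ["large"] * 472)]
picked = stratified_sample(universe, lambda h: h["size"], 430, seed=1)
print(len(picked))  # 1290 hospitals, 430 per stratum
```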
Table 1. Distribution of Adult General Medical and Surgical Hospitals in AHA Database Compared to Sample

| | Adult General Medical/Surgical Hospitals (n=4605) | | | Sample (n=1290) | | |
|---|---|---|---|---|---|---|
| Hospital Bed Size (AHA bed count) | Small | Medium | Large | Small | Medium | Large |
| Respondent Universe, n (%) | 2331 (50.6) | 1802 (39.1) | 472 (10.2) | 430 (33.3) | 430 (33.3) | 430 (33.3) |
| Teaching Status, n (%) | | | | | | |
| Teaching | 401 (17.2) | 1099 (61.0) | 451 (95.6) | 69 (16.0) | 271 (63.0) | 410 (95.3) |
| Non-teaching | 1930 (82.8) | 703 (39.0) | 21 (4.4) | 361 (84.0) | 159 (37.0) | 20 (4.7) |
| Location*, n (%) | | | | | | |
| Rural | 1527 (65.5) | 310 (17.2) | 6 (1.3) | 273 (63.5) | 64 (15.0) | 6 (1.2) |
| Urban | 783 (33.6) | 1488 (82.6) | 466 (98.7) | 151 (35.1) | 365 (85.0) | 424 (98.6) |
| Ownership, n (%) | | | | | | |
| Government | 822 (35.3) | 290 (16.1) | 99 (21.0) | 140 (32.6) | 70 (16.3) | 87 (20.2) |
| Non-government | 1226 (52.6) | 1161 (64.4) | 332 (70.3) | 231 (53.7) | 274 (63.7) | 307 (71.4) |
| Investor | 283 (12.1) | 351 (19.5) | 41 (9.7) | 59 (13.7) | 86 (20.0) | 36 (8.4) |

*Nine hospitals in the population have no location identified; six small hospitals and one medium hospital in the sample have no location identified.
B.2. Procedures for the Collection of Information

The participants in the project will complete a questionnaire administered electronically through the Qualtrics XM platform (Provo, UT; https://www.qualtrics.com). There will be no face-to-face interaction. An implementation email (Attachment 5) will be sent to each eligible participant. The email includes a hyperlink to the Project Information Sheet (Attachment 5), which describes in more detail what the project entails and why it is important, as well as a hyperlink to a PDF of the questionnaire. The information sheet includes the elements needed for online survey consent.
How respondents are chosen: We used a stratified sampling method to ensure that our sample best represents the respondent universe and that all subgroups of interest are adequately accounted for. Through pilot testing, we identified the target respondent as the Director of Quality and Safety (or equivalent title) at each hospital.
B.3. Methods to Maximize Response Rates and Deal with Nonresponse

Several methods will be used to maximize response rates. The study team will reach out directly to the target respondent using accurate contact information. Because titles for the target respondent vary across hospitals, we will utilize a vendor with whom we already have established relationships to obtain the name and email address of the person serving in the role of Director of Quality and Safety. This vendor is Definitive Healthcare (Framingham, MA; https://www.definitivehc.com), which provides detailed, up-to-date information on healthcare providers at several leadership levels, including department heads. The representative sample from the AHA database will be matched with Definitive Healthcare's database, which includes email addresses for the target respondent.
With regard to tracking respondents, the Qualtrics platform can report the number of target respondents who have received, opened, started, and completed the questionnaire. We will use this information to monitor online questionnaire uptake. We will send a reminder two weeks after the initial Qualtrics link is sent out and a second reminder two weeks after that. The team will continually monitor uptake to inform decisions on the need for, and nature of, subsequent reminders. For example, follow-up with respondents who have started but not completed the questionnaire may be prioritized in reminders sent later in the implementation process.
To enhance the perception of authenticity and boost the response rate, we will include the email and telephone contact information of the research project director in all communications to targeted respondents. This will allow respondents to verify authenticity, if in doubt, by directly contacting the research team. We will also announce the project through our Department of Communications to alert hospitals that they may be invited to participate.
The possibility of response bias will be thoroughly examined during data analysis. After the survey implementation period, preliminary analyses will include basic aggregate descriptive statistics for each question (frequencies, means, etc., as appropriate). The team will then select key questionnaire items for further examination. Using matched demographic data from the AHA dataset obtained during the sample selection process, we will stratify the data to compare frequencies and means across groupings of hospital characteristics such as bed size category, teaching status, and rural/urban location. As appropriate, multivariable regression analyses will be performed to identify the factors that most influence the key items.
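As a minimal illustration of the stratified descriptive step (field names such as "q1" and "size" are hypothetical; the production analyses will be run in SAS and R):

```python
from collections import Counter, defaultdict

def item_frequencies(responses, item, stratum_key=None):
    # Tabulate answer frequencies for one questionnaire item,
    # overall or stratified by a hospital characteristic.
    if stratum_key is None:
        return Counter(r[item] for r in responses)
    by_stratum = defaultdict(Counter)
    for r in responses:
        by_stratum[r[stratum_key]][r[item]] += 1
    return dict(by_stratum)

# Hypothetical responses from hospitals in two bed-size strata
data = [{"size": "small", "q1": "yes"},
        {"size": "small", "q1": "no"},
        {"size": "large", "q1": "yes"}]
print(item_frequencies(data, "q1"))          # overall frequencies
print(item_frequencies(data, "q1", "size"))  # frequencies by bed size
```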
All analyses will be conducted using SAS statistical software, version 9.4 (SAS Institute, Cary, North Carolina) and R (R Core Team, R Foundation for Statistical Computing, Vienna, Austria; http://www.R-project.org/). The number of missing responses will be reported in the text whenever 10 or more responses are missing. A 2-tailed P value of less than 0.05 will be used to indicate statistical significance.
To examine the possibility of response bias, we will compare respondents and non-respondents from the original sample on demographic characteristics. If response rates differ by hospital size or other important demographic characteristics, statistical adjustments will be made to enhance the external validity of the findings. Sampling weights will be used to adjust the results for nonresponse.
To determine the weights, we will use logistic regression to estimate the probability that a sampled hospital completed the survey as a function of bed size, location (urban or rural), and other key hospital demographic variables. The estimated response probabilities from this regression will then be grouped into 12 weighting adjustment classes so that the number of responses within each class is at least 20 and the units within each class are as similar as possible based on the estimated probabilities. The inverse of the average predicted probability of response within each class will then be used as the weight. The means and 95% confidence intervals (CIs) for each item, both overall and stratified by the demographic characteristics, will then be calculated using these sampling weights. The association of the key items with each demographic characteristic will be determined using weighted chi-square tests (the PROC SURVEYFREQ and PROC SURVEYMEANS procedures will be used for these weighted analyses).
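The weighting-class logic can be sketched as follows (an illustrative Python version; the propensities would come from the logistic regression described above, the production analysis will use SAS survey procedures, and the greedy class-merging rule shown is just one simple way to enforce the minimum of 20 responses per class):

```python
def nonresponse_weights(p_hat, responded, n_classes=12, min_responses=20):
    # Order sampled units by estimated response propensity, cut them into
    # roughly equal classes each holding at least `min_responses` respondents,
    # and weight each respondent by the inverse of its class's mean propensity.
    order = sorted(range(len(p_hat)), key=lambda i: p_hat[i])
    target = max(1, len(order) // n_classes)
    classes, current = [], []
    for i in order:
        current.append(i)
        if (len(current) >= target and
                sum(responded[j] for j in current) >= min_responses):
            classes.append(current)
            current = []
    if current:  # fold any leftover units into the last class
        (classes[-1].extend(current) if classes else classes.append(current))
    weights = {}
    for cls in classes:
        mean_p = sum(p_hat[i] for i in cls) / len(cls)
        for i in cls:
            if responded[i]:
                weights[i] = 1.0 / mean_p
    return weights

# Hypothetical demo: uniform 0.5 propensity, every other hospital responded
p = [0.5] * 480
r = [i % 2 == 0 for i in range(480)]
w = nonresponse_weights(p, r)
print(len(w), sum(w.values()) / len(w))  # 240 respondents, mean weight 2.0
```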
B.4. Tests of Procedures or Methods to be Undertaken

We conducted a formal pilot test during April through July 2020. The pilot sites were not randomly selected because recruitment began during the early months of the COVID-19 outbreak. The recruitment process ran from April 28, 2020 to June 3, 2020; 9 hospitals expressed interest and agreed to pilot test an electronic data collection instrument for this study (Attachment 6). The recruitment plan for pilot testing included the following steps:
We created a list of candidates comprising people at eligible hospitals whom the research team knew from previous projects.
We wrote an article for the Joint Commission Perspectives May edition (Attachment 7), released on April 28, 2020, informing hospitals about the project and requesting pilot site volunteers. This communication resulted in two hospitals expressing interest.
We also put a notification in the JC Online communication (Attachment 7) on May 20, 2020, informing hospitals about the project and requesting pilot site volunteers. This communication resulted in five hospitals expressing interest.
For those that expressed interest, we sent an email that included the project information sheet (Attachment 5).
When a site agreed to participate, it was sent another email with the link to the questionnaire, a PDF version of the questionnaire, and the questions on which we wanted feedback in order to improve the questionnaire.
The questions we sought feedback on included: the time required to complete the questionnaire, whether the questions and response categories were appropriate, ease of completion with the electronic platform, and opportunities for improvement.
If needed, follow-up emails were sent to the pilot sites as a reminder to submit their responses via Qualtrics and to provide feedback on the structure and content of the questionnaire. Three sites requested phone calls to discuss their feedback.
Pilot testing of the draft questionnaire in Qualtrics was completed by 9 sites of various sizes and locations on July 10, 2020. The pilot sites completed the online questionnaire and provided feedback on the following: the title or role of the person who should be the primary respondent; other staff needed to complete the questionnaire; the estimated time required to complete the questionnaire; unclear questions and missing or overlapping response categories; whether an "Unknown" response category should be added to additional questions; any unnecessary questions; whether the flow of the questionnaire is easy to navigate; and general suggestions for improvement.
We also learned about the accuracy and reliability of the questions during pilot testing. We held three calls with pilot sites to follow up on unclear responses and gather feedback, and revised questions accordingly. Improvements were made to the questionnaire based on the feedback provided and a review of the responses to the draft questionnaire.
Based on the pilot test of the electronic questionnaire (N=9), the average time to complete the questionnaire was 61 minutes (min=20, max=150, median=60, range=130, standard deviation=37). The maximum time of 150 minutes was related to the process that participant used to complete the questionnaire: the person who received it was not involved in day-to-day VTE practices and completed the paper version first, which involved conference calls, research, and talking with other team members to help answer questions. The participant who took the minimum amount of time (20 minutes) was involved in daily VTE activities and needed very little input from other staff to complete the questionnaire.
We also examined the responses to other questions and found good variability across the pilot sites. Additional details and aggregate findings from pilot testing are available upon request.
B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Individual consulted on statistical aspects of the design:
Stephen Schmaltz, PhD
Associate Director, Senior Biostatistician
Division of Healthcare Quality Evaluation
The Joint Commission
One Renaissance Blvd
Oakbrook Terrace IL 60181
Phone: 630-792-5243
Individuals from The Department of Research at the Joint Commission who will be involved with collecting the data:
Barbara I Braun, PhD
Associate Director
Division of Healthcare Quality Evaluation
The Joint Commission
One Renaissance Blvd
Oakbrook Terrace IL 60181
Phone: 630-792-5928
Michele Bozikis, MPH
Project Manager
Division of Healthcare Quality Evaluation
The Joint Commission
One Renaissance Blvd
Oakbrook Terrace IL 60181
Phone: 630-268-2660
Salome Chitavi, PhD
Project Director
Division of Healthcare Quality Evaluation
The Joint Commission
One Renaissance Blvd
Oakbrook Terrace IL 60181
Phone: 630-792-5977
SChitavi@jointcommission.org
Karen Kolbusz, RN, BSN, MBA
Associate Project Director
Division of Healthcare Quality Evaluation
The Joint Commission
One Renaissance Boulevard
Oakbrook Terrace, IL 60181
Phone: 630-792-5931
Individuals from The Joint Commission and CDC who will be involved with analyzing the data:
Barbara I Braun, PhD
Associate Director
Division of Healthcare Quality Evaluation
The Joint Commission
One Renaissance Blvd
Oakbrook Terrace IL 60181
Phone: 630-792-5928
Salome Chitavi, PhD
Project Director
Division of Healthcare Quality Evaluation
The Joint Commission
One Renaissance Blvd
Oakbrook Terrace IL 60181
Phone: 630-792-5977
SChitavi@jointcommission.org
Stephen Schmaltz, PhD
Associate Director, Senior Biostatistician
Division of Healthcare Quality Evaluation
The Joint Commission
One Renaissance Blvd
Oakbrook Terrace IL 60181
Phone: 630-792-5243
Cheedah Phoutharath, MPH
Clinical Data Analyst
Division of Healthcare Quality Evaluation
The Joint Commission
One Renaissance Blvd
Oakbrook Terrace IL 60181
Phone: 630-792-5954
CPhoutharath@jointcommission.org
Nimia L. Reyes, MD, MPH (CDC Project Officer)
Medical Officer
Centers for Disease Control and Prevention (CDC)
National Center on Birth Defects and Developmental Disabilities
Division of Blood Disorders
1600 Clifton Road (MS S106-3)
Atlanta, GA 30333
Phone: 404-498-6733

Karon Abe, PhD (CDC Co-Principal Investigator)
Centers for Disease Control and Prevention (CDC)
National Center on Birth Defects and Developmental Disabilities
Division of Blood Disorders
1600 Clifton Road
Atlanta, GA 30333
Phone: 404-498-2498
Reference:
American Hospital Association. (2019). AHA Hospital Statistics: A comprehensive reference for analysis and comparison of hospital trends (2019 ed.). Chicago, IL: American Hospital Association.