Measure & Instrument Development and Support (MIDS) Contractor:
Impact Assessment of CMS
Quality and Efficiency Measures
Supporting Statement B:
OMB/PRA Submission Materials for
Nursing Home National Provider Survey
Contract Number: HHSM-500-2013-13007I
Task Order: HHSM-500-T0002
Deliverable Number: 35
Submitted: October 1, 2014
Revised: November 10, 2015
Noni Bodkin, Contracting Officer’s Representative (COR)
HHS/CMS/OA/CCSQ/QMVIG
7500 Security Boulevard, Mailstop S3-02-01
Baltimore, MD 21244-1850
TABLE OF CONTENTS
1. Respondent Universe and Sampling Methods
2. Procedures for Collection of Information
3. Methods to Maximize Response Rates and Deal with Non-Response
4. Tests of Procedures or Methods to Be Undertaken
5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Data Collection for the Nursing Home National Provider Survey

1. Respondent Universe and Sampling Methods
The sample frame for both the semi-structured interviews and the standardized survey will be constructed from the universe of nursing homes submitting data to the Nursing Home Quality Initiative (NHQI) program in 2015. The sampling approach will support the following analytic objectives:
To make national estimates of the prevalence of the actions that nursing homes report taking in response to the CMS measures (e.g., hiring quality improvement staff or implementing clinical decision support tools within their health information technology systems);
To make subgroup estimates (i.e., by quality performance and by nursing home size) of the prevalence of the actions that nursing homes report taking; and
To examine the correlates of quality performance (i.e., the association between the actions that nursing homes report taking and quality performance).
Sampling Frame and Distribution of Nursing Homes by Size and Performance. The sample frame will consist of approximately 15,000 nursing homes. We will randomly draw a sample of 2,045 nursing homes from this universe, with the goal of achieving 900 responses (assuming an estimated 44% response rate). Budget constraints preclude releasing additional sample beyond the original 2,045 nursing homes should the response rate fall below 44%. A review of prior surveys of nursing home leaders indicates that an expected 44% response rate is a reasonable, conservative assumption, and the data collection strategy relies on multiple modes and outreach strategies to ensure we achieve it.
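To make the release arithmetic concrete, the sketch below reproduces the sample-size calculation from the figures above; the variable names are illustrative only.

```python
import math

# Target completes and the assumed response rate from the text.
target_completes = 900
response_rate = 0.44

# Sample to release so the expected completes reach the target:
# 900 / 0.44 ~= 2,045 nursing homes (the design rounds to 2,045).
print(math.ceil(target_completes / response_rate))    # 2046

# Per stratum: 227 released x 0.44 ~= 100 expected completes.
print(round(227 * response_rate))                     # 100

# If the realized rate fell to 25%, no additional sample is released,
# yielding roughly 2,045 x 0.25 ~= 511 completes.
print(round(2045 * 0.25))                             # 511
```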
Our sampling approach relies on stratification of the nursing home population using the nursing home characteristics that are of the greatest importance to the proposed analyses. Stratification serves three purposes:
To facilitate analyses that examine nursing homes within the resulting strata and/or compare providers across the strata.
To ensure that there is a sufficient number of nursing homes within the various strata so that the aforementioned analyses can be performed reliably.
To improve power in analyses of the correlates of quality performance by oversampling high and low performers, thereby increasing the variance in quality.
The random sample of nursing homes will be stratified by nursing home quality performance rating on the Nursing Home Quality Initiative composite quality score (categorized as high performance: five stars; medium performance: three or four stars; and poor performance: one or two stars) and bed size (categorized as small: < 75 beds; medium: 75–149 beds; and large: > 149 beds). Stratifying by quality performance is needed to help the Centers for Medicare & Medicaid Services (CMS) understand what differentiates nursing homes that are able to achieve high performance from those that achieve low performance. Stratifying by nursing home size will help CMS understand how responses differ between facilities with potentially different levels of resources. We will categorize nursing homes into nine sample strata that result from the interaction of these two characteristics. Table 1 shows the number of nursing homes within the universe that fall into each of the nine strata.
Table 1: Universe of Nursing Homes by Strata for Standardized Survey
                            | Small (1–74 beds)   | Medium (75–149 beds) | Large (> 149 beds)
5-star quality rating*      | 1,656 nursing homes | 2,165 nursing homes  | 706 nursing homes
3- or 4-star quality rating | 1,761 nursing homes | 3,231 nursing homes  | 1,130 nursing homes
1- or 2-star quality rating | 1,426 nursing homes | 2,616 nursing homes  | 787 nursing homes
*Based on CMS Nursing Home Compare quality measure star rating
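For concreteness, the following sketch shows how the nine performance-by-size strata described above could be assigned. It assumes each record carries a composite star rating and a bed count; the function and field names are illustrative, not project code.

```python
def performance_stratum(star_rating: int) -> str:
    """Map the NHQI composite star rating to a performance stratum."""
    if star_rating == 5:
        return "high"          # five stars
    if star_rating in (3, 4):
        return "medium"        # three or four stars
    return "low"               # one or two stars

def size_stratum(beds: int) -> str:
    """Map bed count to a size stratum."""
    if beds < 75:
        return "small"         # 1-74 beds
    if beds <= 149:
        return "medium"        # 75-149 beds
    return "large"             # > 149 beds

def sample_stratum(star_rating: int, beds: int) -> tuple:
    """One of the nine performance-by-size sampling strata."""
    return (performance_stratum(star_rating), size_stratum(beds))

print(sample_stratum(5, 60))   # ('high', 'small')
```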
Sampling Design for Standardized Survey. We propose equal sampling across the size and quality strata, with the aim of achieving 900 completed survey responses. We will select 227 nursing homes from each of the nine performance-by-size strata, with the goal of obtaining 100 completes per stratum (assuming a 44% response rate). We selected this sampling design to satisfy the following objectives:
Estimates derived using all respondents, or only respondents within specific strata, will have adequate precision.
Analyses that compare nursing homes across strata will have sufficient power.
Table 2 shows the distribution of the sampled (and responding) nursing homes across the nine strata.
Table 2: Sample Allocation (n = 2,045 total) by Strata for Standardized Survey
                            | Small (1–74 beds)                  | Medium (75–149 beds)               | Large (> 149 beds)
5-star quality rating*      | 227 mailed; 100 complete responses | 227 mailed; 100 complete responses | 227 mailed; 100 complete responses
3- or 4-star quality rating | 227 mailed; 100 complete responses | 227 mailed; 100 complete responses | 227 mailed; 100 complete responses
1- or 2-star quality rating | 227 mailed; 100 complete responses | 227 mailed; 100 complete responses | 227 mailed; 100 complete responses
*Based on CMS Nursing Home Compare quality measure star rating
As noted previously, the sample will be weighted to account for differential sampling probabilities. Using the design weights (and the assumed 44% response rate), we approximate that the effective sample size for national estimates will be 743 (with a design effect of 1.21). We conservatively estimate the precision of our national estimates, and of estimates by nursing home quality and size strata, for a survey item with a prevalence of 50%; because the variance of a proportion is greatest at 50%, the standard error estimates below are upper bounds. A national estimate would be obtained with a standard error of 1.8 percentage points or less.
Note that these calculations do not incorporate adjustments that will have to be made in the event that response rates differ across strata. We will have reasonable precision for estimates within one-way strata. For example, an estimate of an item that is 50% prevalent across all high-performing nursing homes will have a standard error of 3.11 percentage points (using an effective sample size of 259); an analogous estimate for large nursing homes will have a standard error of 2.95 percentage points (using an effective sample size of 287). Lastly, we will have reasonable precision for the strata defined using both size and performance. For example, an estimate of an item that is 50% prevalent across all low-performing, large nursing homes will have a standard error of 5.0 percentage points (from an effective sample size of 100).
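The design effect and standard errors quoted above can be approximated from the Table 1 universe counts using Kish's formula for unequal weighting. The sketch below, which assumes 100 completes per stratum weighted by each stratum's population share, reproduces the figures.

```python
import math

# Universe counts from Table 1, keyed by (performance, size) stratum.
N = {("high", "small"): 1656, ("high", "medium"): 2165, ("high", "large"): 706,
     ("med",  "small"): 1761, ("med",  "medium"): 3231, ("med",  "large"): 1130,
     ("low",  "small"): 1426, ("low",  "medium"): 2616, ("low",  "large"): 787}

n_per_stratum = 100                                   # completes per stratum
n = n_per_stratum * len(N)                            # 900 respondents
weight = {s: N[s] / n_per_stratum for s in N}         # design weight per respondent

# Kish design effect for unequal weights: deff = n * sum(w_i^2) / (sum(w_i))^2,
# where the sums run over all respondents.
sum_w  = sum(n_per_stratum * w for w in weight.values())
sum_w2 = sum(n_per_stratum * w ** 2 for w in weight.values())
deff  = n * sum_w2 / sum_w ** 2
n_eff = n / deff
print(round(deff, 2), round(n_eff))                   # 1.21 743

# Upper-bound standard error for a 50%-prevalence item.
print(round(100 * math.sqrt(0.25 / n_eff), 1))        # 1.8 percentage points

# Within a one-way stratum, e.g., high performers (effective n = 259):
print(round(100 * math.sqrt(0.25 / 259), 2))          # 3.11 percentage points
```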
We will compare subgroups using Cohen's d, the ratio of the difference in means between the two groups being compared to the standard deviation of the outcome variable. Values of Cohen's d near 0.2 are considered small, 0.5 medium, and 0.8 large (Cohen, 1988). We will have 80% power to detect small differences (Cohen's d = 0.25) between low, medium, and high performers and between small, medium, and large providers using an α = 0.05 two-sided test. In addition, with this sample design, we will have 80% power to detect medium differences (Cohen's d = 0.40) when comparing the more refined strata that are defined using both size and performance. For example, we might compare responses from high-performance, small providers with those from high-performance, large providers.
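The following is a minimal sketch of these power and minimum-detectable-effect calculations, using the standard normal approximation for a two-sample comparison on the Cohen's d scale; the per-group effective sample sizes are taken from the precision discussion above.

```python
import math
from scipy.stats import norm

def power_two_sample(d: float, n_per_group: float, alpha: float = 0.05) -> float:
    """Power to detect standardized effect d with n_per_group per arm
    (two-sided test, normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(d * math.sqrt(n_per_group / 2) - z_alpha)

def mdes_two_sample(n_per_group: float, power: float = 0.80,
                    alpha: float = 0.05) -> float:
    """Minimum detectable effect size at the given power and alpha."""
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * math.sqrt(2 / n_per_group)

# High vs. low performers: roughly 259 effective respondents per group.
print(round(mdes_two_sample(259), 2))                 # ~0.25
print(round(power_two_sample(0.25, 259), 2))          # ~0.81

# Two performance-by-size cells: roughly 100 effective respondents per group.
print(round(mdes_two_sample(100), 2))                 # ~0.40
```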
Sensitivity of results to the response rate assumption: We performed additional power calculations to assess how a lower response rate on the standardized survey might affect our ability to examine differences between subgroups. In the computations presented below, the minimum detectable effect size (MDES) when comparing high- vs. low-performing nursing homes (or small vs. large) is 0.23 with a 44% response rate and 0.305 with a 25% response rate (assuming 80% power to detect a difference). As the calculations below indicate, a lower response rate modestly increases the smallest subgroup differences we are able to detect.
We illustrate these power calculations using a hypothetical survey question: "Has your nursing home implemented electronic tools to support frontline clinical staff, such as clinical decision support, condition-specific electronic alerts, or automated prompts?" If 90% of high-performing nursing homes have electronic tools, then with a 0.25 minimum detectable effect size we would be able to detect an 8.7 percentage point difference between high- and low-performing nursing homes (i.e., 90% for high-performing vs. 81.3% for low-performing nursing homes). We would not have sufficient power to detect smaller differences (e.g., the 5 percentage point difference that would result if 85% of low-performing nursing homes were using electronic tools). If the response rate were 25% (leading to an MDES of 0.305), we would be able to detect a difference as small as 10.9 percentage points (i.e., 90% vs. 79.1%).
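The percentage-point differences quoted above are consistent with effect sizes expressed on the Cohen's h (arcsine-transformed proportion) scale; the sketch below back-transforms an MDES into the smallest detectable comparison proportion under that assumption.

```python
import math

def detectable_p2(p1: float, h: float) -> float:
    """Smallest proportion below p1 distinguishable from it, given a
    minimum detectable effect size h on the Cohen's h scale."""
    phi1 = 2 * math.asin(math.sqrt(p1))
    return math.sin((phi1 - h) / 2) ** 2

# 90% prevalence among high performers, MDES of 0.25 (44% response rate):
print(round(detectable_p2(0.90, 0.25), 3))            # 0.813 -> 8.7-point gap
# MDES of 0.305 (25% response rate):
print(round(detectable_p2(0.90, 0.305), 3))           # 0.791 -> 10.9-point gap
```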
Power calculations for comparison of high-performing nursing homes to low-performing nursing homes:
Response Rate | Power1+ | Power2# | MDES*
0.25          | 0.558   | 0.921   | 0.332
0.35          | 0.704   | 0.979   | 0.281
0.44          | 0.800   | 0.994   | 0.250
+ Power1 – The power to detect an effect size of 0.23
# Power2 – The power to detect an effect size of 0.4
* Assumes 80% power
If we were to compare high-performing large nursing homes to low-performing small nursing homes, the power calculations would be as follows:
Response Rate | Power1+ | Power2# | MDES*
0.25          | 0.260   | 0.560   | 0.530
0.35          | 0.346   | 0.707   | 0.447
0.44          | 0.420   | 0.803   | 0.398
Consideration of alternative sampling strategies: We considered other sampling strategies as alternatives to the one selected above. In particular, we considered drawing a simple random sample that would yield 900 respondents from the entire population (equivalent to sampling from each stratum at a rate proportional to the stratum's size). This strategy was deemed not to yield sufficient sample sizes within the various strata. Specifically, it would involve a (multiplicative) 37.2% increase in the standard error of estimates calculated across the subpopulation of large nursing homes relative to our preferred strategy. Although proportional sampling would yield standard errors for national estimates that are 9.1% smaller than those yielded by our preferred strategy, we prefer the gain in precision for the within-stratum estimates (note that standard errors for national estimates are already quite small).
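The 37.2% and 9.1% figures can be approximated from the Table 1 counts by comparing effective sample sizes under the two allocations. A sketch, assuming proportional allocation is self-weighting within subgroups:

```python
import math

# Universe counts for the large-size column of Table 1 and the overall total.
N_large = 706 + 1130 + 787           # 2,623 large nursing homes
N_total = 15478                      # all nine strata combined

# Proportional allocation: completes land in proportion to population,
# so estimates for large homes are self-weighting.
n_prop_large = 900 * N_large / N_total                # ~152.5 completes

# Equal allocation: 300 completes among large homes, but weighting across
# the three performance strata reduces the effective size to ~287 (see text).
n_eff_equal_large = 287

# SE is proportional to 1 / sqrt(n_eff), so the relative SE increase for
# large-home estimates under proportional allocation is:
print(round(math.sqrt(n_eff_equal_large / n_prop_large) - 1, 3))   # ~0.372

# National estimates: proportional allocation is self-weighting (n_eff = 900)
# vs. n_eff = 743 under equal allocation, a ~9.1% smaller SE.
print(round(1 - math.sqrt(743 / 900), 3))                          # ~0.091
```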
Sampling Design for Semi-Structured Interview. The semi-structured interview will employ purposive sampling to interview 40 nursing home quality leaders across the nine sample strata. The nursing homes completing the semi-structured interview will not be the same as those completing the standardized survey; the two samples are distinct. Sampling 40 nursing homes across nine strata will result in four or five interviews per stratum, as outlined in Table 3. Because these data are qualitative, the goal is not to generalize to the larger population, but rather to conduct enough interviews per stratum to complement the quantitative data collected in the standardized survey and to provide qualitative detail that can help explain what we observe in the quantitative results. We will release sufficient sample for recruitment and scheduling to achieve the target number of completed interviews.
Table 3: Sample Allocation (n=40 total) by Strata for Semi-Structured Interview
                            | Small (1–74 beds)      | Medium (75–149 beds)   | Large (> 149 beds)
5-star quality rating*      | 4 completed interviews | 5 completed interviews | 5 completed interviews
3- or 4-star quality rating | 5 completed interviews | 5 completed interviews | 4 completed interviews
1- or 2-star quality rating | 4 completed interviews | 4 completed interviews | 4 completed interviews
* Based on CMS Nursing Home Compare quality measure star rating
Questionnaire Content and Design Process. The content of the survey was driven by the five research questions of interest to CMS:
Are there unintended consequences associated with implementation of CMS quality measures?
Are there barriers to providers in implementing CMS quality measures?
Is the collection and reporting of performance measure results associated with changes in provider behavior (i.e., what specific changes are providers making in response)?
What factors are associated with changes in performance over time?
What characteristics differentiate high- and low-performing providers?
Attachment I, “Development of Two National Provider Surveys,” details the process used to develop and test the survey instruments. This included an environmental scan of the literature related to the five research questions, formative interviews with nursing homes, drafting and testing of the survey instruments with nursing homes, and input from the Technical Expert Panel and the Federal Advisory Steering Committee (FASC), composed of representatives from various federal agencies (e.g., AHRQ, CDC, HRSA, ASPE). The formative interviews with nursing homes assessed whether the survey domains were of importance to nursing homes and identified issues or topics not surfaced by the environmental scan. The formative interview work was also useful in defining the structure of the survey and in identifying topics more conducive to standardized questions vs. questions that are open-ended in nature.
The goals of the formative interview work were to explore:
How the CMS performance measures are changing the way in which nursing homes are delivering care
Factors that drive nursing home investments in performance improvement
Issues nursing homes face related to reporting the CMS measures
Potential undesired effects associated with the measures, and
Challenges nursing homes face related to improvement on the CMS measures.
By exploring these topics, we were able to develop survey questions that addressed the research questions. Attachment III crosswalks the survey questions from the semi-structured interview protocol and the structured survey to the research questions listed above. Attachment III also displays how the goals of the formative work map to the research questions.
The survey development team considered including a “don’t know” option for all questions; however, the final surveys include a “don’t know” response only for those items where the survey development team thought it necessary. This choice reflects our concern about increasing item “missingness,” as respondents often default to the “don’t know” option rather than finding the answer within their organization. In our limited testing of the instrument, respondents generally did not state that they did not know the answers to questions.
There is also potential concern about positive response bias when fielding surveys. However, in our formative and cognitive testing work, respondents demonstrated variation in how they responded and were willing to report negative things, such as upcoding of data. During interviews they expressed frustration with the measurement programs and having to collect and report the data and described challenges with being able to improve their performance as well as undesired behaviors. As such, we do not believe the surveys as designed will lead to positive response bias among respondents.
Plan for Tabulating the Results. The analysis plan will include (1) development of sampling weights, (2) response rate/nonresponse analyses, (3) psychometric evaluation of survey items, (4) development of national and subgroup estimates (such as by level of performance and size of nursing home where possible), and (5) analyses of the association between nursing home performance (high/low) and nursing home responses and characteristics. All aspects of these analyses will be described in a final project report to CMS.
Weighting. Three types of weights will be considered to allow our analysis of survey responses to appropriately reflect the target populations of interest: sampling weights, nonresponse weights, and post-stratification weights. Sampling weights reflect the probability that each nursing home is selected for the survey; nonresponse weights reflect the probability that a sampled nursing home responds to the survey; post-stratification weights make the respondent sample’s characteristics similar to those of the population. Sampling weights are readily calculated as the ratio of eligible to sampled nursing homes in particular strata (given the proposed stratified sampling design). Complex nursing home-level nonresponse or post-stratification weights may be developed using logistic regression and raking/log linear models, respectively, in consultation with CMS.
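As a sketch of the first two weighting steps, the code below computes design weights from the stratified design and a logistic-regression nonresponse adjustment. The covariates and response indicator are placeholders, not project data, and the final weighting models will be developed in consultation with CMS.

```python
import numpy as np
import statsmodels.api as sm

# Stratified design: universe counts (Table 1) and 227 sampled per stratum.
N_h = np.array([1656, 2165, 706, 1761, 3231, 1130, 1426, 2616, 787])
n_h = np.full(9, 227)

n_sampled = int(n_h.sum())                   # homes released
stratum = np.repeat(np.arange(9), n_h)       # stratum index per sampled home
design_weight = (N_h / n_h)[stratum]         # inverse selection probability

# Nonresponse adjustment: logistic model of response status on known
# characteristics (size, ownership, region, ...); placeholder data here.
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(n_sampled, 3)))
responded = rng.binomial(1, 0.44, size=n_sampled)

propensity = sm.Logit(responded, X).fit(disp=0).predict(X)
# Final weight for respondents: design weight / response propensity.
final_weight = design_weight[responded == 1] / propensity[responded == 1]
```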
Response rate/nonresponse analyses. We will examine response rates overall and within particular strata, including by performance on CMS quality and efficiency measures and by nursing home size (number of beds). Logistic regression analyses will be used to examine the associations between known nursing home characteristics and probability of nonresponse. Nursing home characteristics to be included in this analysis are size (e.g., number of beds), for-profit/nonprofit status, urban/rural, region, and socioeconomic characteristics of patient population.
Psychometric evaluation of survey items. We will evaluate missing data, item distribution (including ceiling and floor effects), internal consistency, and reliability. We will compute these statistics overall and by strata.
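A standard statistic for the internal consistency evaluation is Cronbach's alpha; a minimal sketch with placeholder data follows (the survey's actual scales will be determined during the psychometric evaluation).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Placeholder data: five items driven by one shared trait plus noise.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
scores = trait + rng.normal(scale=0.8, size=(200, 5))
print(round(cronbach_alpha(scores), 2))      # high (~0.9) for correlated items
```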
Subgroup estimates. We will produce national and subgroup estimates with appropriate adjustment to account for sampling design and nonresponse. The types of subgroups that are of interest include performance strata (low, medium, high), nursing home size (e.g., number of beds), socioeconomic status of patients, and urban/rural. The final list will be determined in consultation with CMS.
Relationship between survey response patterns and nursing home characteristics. We will provide descriptive analyses of survey findings overall and stratified by nursing home characteristics. The descriptive statistics will include the mean and median response, variation in responses, and skewness of responses by item. We will use linear and logistic regressions to examine the association between survey responses and nursing home characteristics, including nursing home performance, size, and region. We aim to develop two main analyses. First, we will use univariate analyses to examine associations between performance and nursing home characteristics, including characteristics obtained from the survey and characteristics obtained from administrative data sources such as practice size and location/region. Second, multivariate regression analyses will be used to examine associations between performance and unintended consequences, barriers to reporting and improvement (e.g., difficulties with reporting data or EHR use), drivers of improvement, and changes made to improve care delivery, adjusting for potential confounding factors identified in the initial univariate analyses. Results from these analyses will allow us to determine the fraction of variation in performance that can be explained by information obtained from the survey. In addition, it may be appropriate to treat survey responses as the response variable for certain analyses. For example, self-reported overtreatment may be stimulated in environments where high performance is encouraged, making it useful to examine whether high performance is associated with higher rates of unintended consequences. Therefore, in consultation with CMS, we will consider additional analyses that treat survey responses as the response variable and performance as an independent variable.
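As an illustration of the second analysis step, the sketch below fits a logistic regression of a high/low performance indicator on survey-reported actions. All variables are placeholders; the production analysis would incorporate survey weights and design-adjusted variance estimates.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder data: three survey-reported actions and a high/low
# performance indicator for 900 responding nursing homes.
rng = np.random.default_rng(0)
actions = rng.binomial(1, 0.5, size=(900, 3))
logit = -0.5 + actions @ np.array([0.6, 0.3, 0.0])   # illustrative effects
high_perf = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(actions)
model = sm.Logit(high_perf, X).fit(disp=0)
print(model.params)        # log-odds of high performance per reported action
```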
2. Procedures for Collection of Information
The first step in fielding both the semi-structured interviews and the standardized survey will be to identify the most appropriate respondent for these data collection activities, whom we refer to as the quality leader for the organization—that is, the individual within the organization who is most familiar with the CMS performance measures and the actions and quality improvement activities the organization has undertaken to improve performance in response to these measures. Once we have drawn the sample, we will contact each nursing home to identify the quality leader.
We understand the potential concern about ensuring the individuals identified at nursing homes are equivalent. To ensure that survey and interview respondents are comparable across facilities, we will call each sampled nursing home to identify the correct respondent—the person who is knowledgeable about CMS quality measures and the actions the nursing home has taken to respond to these measures—to whom we will address the survey. Although this individual often carries the title of chief nursing officer, we purposefully did not identify the nursing home leader using this specific title because the exact title may vary between facilities. We used this strategy during formative interviewing and cognitive testing, and we were able to identify a quality leader within each organization. During the interviews, these individuals demonstrated that they possessed the knowledge necessary to address the questions on the survey. The types of responses we obtained in survey development were comparable across nursing homes, and the individuals did not demonstrate problems providing answers to the questions (see Attachment I, “Development of Two National Provider Surveys,” which summarizes findings from the formative interview work).
Semi-structured Interview. RAND staff will search the Internet and contact nursing homes to compile the name, job title, mailing address, email address, and telephone extension of the nursing home quality leader. Using this information, RAND will send the nursing home quality leader an email letter that describes the study and interview and invites the nursing home leader or a designee to take part. RAND data collection staff will follow up by phone 3 to 5 days after the invitation letter is emailed to confirm interest in and availability for the interview. To minimize non-response bias, we will make up to 10 attempts, both by phone and via email, to contact the quality leaders and encourage them to participate. We will schedule the interview for a date and time that is convenient for the quality leader and, as necessary, will work with each nursing home leader’s administrative assistant to schedule interview appointments; in previous survey work, we have found this protocol to be effective at reducing non-response.
Standardized Survey. Vendor data collection staff will contact each sampled nursing home to confirm the name, job title, mailing address, email address, and telephone extension of the nursing home quality leader. This will allow us to personalize survey invitations. To promote the likelihood of survey participation, we plan a multi-mode data collection for the nursing home quality leader survey. We will employ Web, mail, and telephone as data collection or prompting modes. To allow adequate time for each mode and for USPS delivery of mail survey returns, we have planned for a field period of 9–12 weeks.
As recommended as best practice by Dillman, we propose to contact non-responders using varying modes, including modes different from the data collection modes.1
Weeks 1–3 – Initial and follow-up email invitations to complete the survey by Web
All nursing home leaders will receive a maximum of two invitations to participate in the survey via the Web. These invitations will be sent via email 1 week apart and will contain sufficient information for informed consent as well as a nursing home-specific personal identification number (PIN) code that allows access to the Web survey for that nursing home. If no email address is available, the invitations will be sent via first class mail.
Week 4 – Mail survey is sent to all non-responding quality leaders. To reduce non-response rates, 4 weeks after the initial invitation to the Web survey, non-responding nursing home leaders will receive a paper version of the survey via first class mail.
Week 7 – Commence phone calls to non-responding quality leaders to prompt return of the mail survey or completion of the Web survey. Seven weeks after the initial invitation to the Web survey, non-responding nursing home leaders will be contacted by telephone to prompt completion of the survey via Web or return of the mailed survey via fax. Note that, to minimize data collection costs related to engaging large numbers of nursing home leaders by telephone, we will initially contact non-responders by email or by mail and reserve the more expensive phone outreach for later in the data collection period, when the number of non-responders is smaller. We anticipate closing the field after 12 weeks of data collection.
Throughout data collection, we will track response and cooperation within each sample stratum and employ additional outreach efforts to achieve sufficient response in each stratum. We anticipate that the procedures outlined above will result in a response rate of 40% to 60%, consistent with our goal of 900 completed surveys.
3. Methods to Maximize Response Rates and Deal with Non-Response
Semi-Structured Interview. We will maximize response to the semi-structured interview by conducting the interview on a day and at a time within the field period that is most convenient for the nursing home quality leader. In addition, 3 to 5 days after the invitation letter is emailed, RAND data collection staff will follow up by phone to confirm interest in and availability for the interview. For those nursing home leaders who are willing to participate, we will make up to 10 attempts, both by phone and via email, to schedule an appointment for the interview in order to minimize non-response bias. We will also work with each nursing home leader’s administrative assistant to get the interview scheduled; in previous survey work, we have found this protocol to be effective at reducing non-response. The nursing home quality leader may designate another individual within the organization to participate in the interview, which may further maximize participation. Those who refuse to participate or who fail to respond to the invitations altogether will be replaced with a quality leader from another nursing home with the same characteristics. During the formative development work, we generally found nursing homes willing to participate in the interviews, as they wanted to share their experiences with the CMS measures and what they are doing to improve their performance on these measures.
Standardized Survey. Published surveys of nursing leaders (administrators and directors of nursing) conducted in the past 10 years report response rates of 48% to 57% (Castle, 2006; Mukamel et al., 2007). In addition, surveys of organizations and/or individuals in leadership roles have experienced an overall decline in response rates similar to that seen in surveys of general populations (Cycyota and Harrison, 2006; Baruch and Holtom, 2008). We used these studies and the survey development team’s previous experience conducting interviews and surveys with nursing homes to arrive at our estimate of a 44% response rate.
As described in Section 2 above, we plan to maximize response rates for the standardized survey through:
Careful identification of the appropriate respondent,
Use of personalization,
Multiple attempts,
Multiple modes of survey administration, and
Alternative modes for non-response contacts.
We anticipate that the data collection procedures for the structured survey will result in a response rate of 44%, yielding 900 completed surveys. We will track both facility characteristics and titles of nursing home leaders among non-responding nursing homes to better adjust for non-response in analyses of results, to examine possible response bias, and to describe the characteristics of non-responders.
4. Tests of Procedures or Methods to Be Undertaken
The data collection protocol, draft semi-structured interview guide, and draft standardized survey were developed and tested with a small number of providers (please refer to Attachment I of the clearance package). Findings from the formative interviews and cognitive testing helped to determine the structure of the semi-structured interview protocol and the standardized survey, as well as the approach needed to identify the appropriate respondent(s) to the survey within the provider organization.
Formative interviews were used to guide the development of the structured survey and semi-structured interview protocol. Nine nursing homes participated in the formative interviews, which were conducted by telephone. The formative interviews with nursing homes were designed to:
Assess whether providers could understand the information we sought to collect to address the five research questions,
Assess whether providers would provide biased (i.e., only favorable) responses with regard to CMS programs or the actions taken in response to performance measurement,
Explore the language potential respondents might use to describe the topics, and
Identify potential response options or areas to probe.
Nursing homes included in the formative interviews were purposively sampled to represent variation in the size of the provider entity, the region of the country and location (urban vs. rural) of the provider, and performance on CMS measures. Additionally, nursing home providers were sampled on the basis of whether they had a relationship with a hospital (i.e., hospital-based nursing homes). The individuals interviewed were senior leaders who were responsible for the overall quality and safety of clinical care within the nursing home. Interviewees were asked to provide feedback on lessons learned related to the use of the performance measures and on any other concerns not covered in the semi-structured interview guide.
The draft standardized survey was tested with nine nursing homes via cognitive interviews conducted by telephone. A range of nursing homes (size, quality performance, and region) were selected for the cognitive interviews to capture variation in the expected range of responses. The cognitive interviews were designed to assess respondents’ understanding of the draft survey items and key concepts and to identify problematic terms, items, or response options. During this time, the draft instruments were also reviewed by the RAND and Health Services Advisory Group (HSAG) project teams, a technical expert panel convened by HSAG, and the Federal Advisory Steering Committee. The draft survey was revised based on findings from the cognitive interviews and feedback received from the various reviewers to produce the final version of the nursing home survey to be used in 2016.
5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
The survey, sampling approach, and data collection procedures were designed by the RAND Corporation under contract to HSAG, under the leadership of:

Cheryl Damberg, PhD
RAND Corporation
1776 Main Street
Santa Monica, CA 90407

Kanaka Shetty, MD
RAND Corporation
1776 Main Street
Santa Monica, CA 90407
Key input to the statistical aspects of the design was received from the following individuals:
Cheryl Damberg, RAND Project Director
Kanaka Shetty, RAND Co-Project Director
Layla Parast, Statistician
Michael Robbins, Statistician
Marc Elliott, Senior Statistician
The semi-structured interview data will be collected by RAND; the standardized survey data will be collected by a survey vendor.
References
Baruch Y, Holtom BC. Survey response rate levels and trends in organizational research. Human Relations. 2008;61:1139–1160.
Castle N. Measuring staff turnover in nursing homes. The Gerontologist. 2006;46(2):210–219.
Cohen J. Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Hillsdale, NJ: Lawrence Erlbaum; 1988.
Cycyota CS, Harrison DA. What (not) to expect when surveying executives: a meta-analysis of top manager response rates and techniques over time. Organizational Research Methods. 2006;9:133–160.
Mukamel DB, Spector WD, Zinn JS, Huang L, Weimer DL, Dozier A. Nursing homes’ response to the Nursing Home Compare report card. Journals of Gerontology Series B: Psychological Sciences and Social Sciences. 2007;62(4):S218–S225.
1 Dillman DA, Smyth JD, Christian LM. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: John Wiley & Sons; 2009.