Rural Community Wealth and Health Care Provision
Pilot Phase Report: Health Care Provider Survey
USDA Economic Research Service and
Iowa State University Survey & Behavioral Research Services
February 2015
The following report is in reference to:
ICR Reference #201310-0536-001
OMB Control #0536-0072
OMB Action Date 01/23/2014
The Survey on Rural Community Wealth and Health Care Provision (SRCWHCP) is under the direction of John Pender, Senior Economist with the USDA Economic Research Service (ERS) and principal investigator (PI) for the project. Data collection is conducted through a cooperative agreement with Iowa State University’s Center for Survey Statistics & Methodology (CSSM) and Survey & Behavioral Research Services (SBRS).
The primary purpose of the study is to provide information about how rural small towns can attract and retain health care providers, considering the broad range of assets and amenities that may attract providers. The secondary purpose is to provide information on how improving health care may affect economic development prospects of rural small towns. ERS seeks to address these purposes by obtaining input from community leaders (key informants) and primary health care providers in 150 sampled rural small towns and by conducting secondary analysis of existing health and economic indicators.
A pilot study in 12 of the 150 communities is being conducted and must be reviewed and approved by OMB prior to implementation of the study in the remaining 138 communities, per the Notice of Office of Management and Budget Action. The first pilot study component consisted of semi-structured telephone interviews with Key Informants. The Key Informant component was completed in the 12 pilot communities and approved by OMB, and it is currently underway in the remaining 138 communities. The second pilot study component consists of mail/web surveys with Health Care Providers. This component has now been completed in the 12 pilot communities, and the results are described in this report.
Pilot Health Care Provider Survey Procedure
A list of primary health care providers for each community was developed by SBRS using National Provider Identifier listings and online sources. The list included three categories of providers: (1) primary health care physicians as defined by Medicare, (2) dentists, and (3) physician assistants, nurse practitioners, and certified nurse midwives (PA/NP/MW). The accuracy of each list was subsequently verified by the key informants interviewed in the first pilot study component. The provider sampling frame development process was described in the “Pilot Phase Report: Key Informant Semi-Structured Interviews” and approved by OMB.
The stated intent of the Pilot Health Care Provider Survey was to obtain a maximum of 8 completed mail/web provider surveys from each community. Because the ICR stipulates a survey response rate goal of 80%, SBRS sampled a maximum of 10 providers from the frame of each pilot community (10 sampled × 80% response = 8 expected completed surveys). In communities with 10 or fewer providers listed, all were included. In communities with more providers, 10 were chosen proportionately from the three categories of providers using a stratified random sample (see the sketch following Table 1). Sampling was required in only four of the 12 pilot communities. Among the other eight, one community had no providers at all, three had 4 providers each in the frame, one had 6, two had 8, and one had 10; all listed providers in these communities were included. The remaining four communities had larger frames (16 to 36 providers), so 10 were sampled from each, proportionately by the three provider categories. The final provider sample for the pilot included 84 providers. The size of the provider frame and sample for each pilot community appears in Table 1, along with survey outcomes.
Table 1. Health Care Provider Frame and Sample by Pilot Town
| Pilot Town | Region¹ | Hospital | Providers in Frame | Providers Sampled | Not Eligible | Refused | No Response | Complete |
|---|---|---|---|---|---|---|---|---|
| 1007 | LMD | No | 4 | 4 | 0 | 0 | 4 | 0 |
| 1008 | LMD | Yes | 10 | 10 | 0 | 0 | 8 | 2 |
| 1015 | LMD | No | 8 | 8 | 0 | 0 | 6 | 2 |
| 1039 | LMD | Yes | 16 | 10 | 0 | 0 | 9 | 1 |
| 2041 | UMW | No | 4 | 4 | 0 | 0 | 3 | 1 |
| 2067 | UMW | No | 4 | 4 | 2 | 0 | 1 | 1 |
| 2088 | UMW | Yes | 33 | 10 | 1 | 0 | 7 | 2 |
| 2094 | UMW | Yes | 27 | 10 | 0 | 0 | 6 | 4 |
| 3110 | SGP | No | 8 | 8 | 2 | 1 | 4 | 1 |
| 3113 | SGP | Yes | 6 | 6 | 0 | 0 | 2 | 4 |
| 3125 | SGP | Yes | 36 | 10 | 0 | 0 | 8 | 2 |
| 3130 | SGP | No | 0 | 0 | 0 | 0 | 0 | 0 |
| TOTALS | | | 156 | 84 | 5 | 1 | 58 | 20 |
¹ Regions include UMW (Upper Midwest – IA, MN, WI), SGP (Southern Great Plains – KS, OK, TX), and LMD (Lower Mississippi Delta – AR, LA, MS).
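As an illustration of the proportionate stratified selection described above, the following is a minimal Python sketch. The function, the data structure, and the example category split for town 2088 are hypothetical constructions for illustration, not the actual SBRS sampling code.

```python
import math
import random

def sample_providers(frame, cap=10, seed=42):
    """Sample up to `cap` providers from a community frame,
    allocating slots proportionately across provider categories.

    `frame` maps category name -> list of provider IDs (hypothetical
    structure). Returns the selected provider IDs.
    """
    rng = random.Random(seed)
    total = sum(len(v) for v in frame.values())
    if total <= cap:
        # Census: include everyone when the frame is at or below the cap.
        return [p for providers in frame.values() for p in providers]

    # Proportional allocation, distributing rounding remainders to the
    # categories with the largest fractional shares.
    raw = {c: cap * len(v) / total for c, v in frame.items()}
    alloc = {c: math.floor(x) for c, x in raw.items()}
    leftover = cap - sum(alloc.values())
    for c in sorted(raw, key=lambda c: raw[c] - alloc[c], reverse=True)[:leftover]:
        alloc[c] += 1

    sample = []
    for c, providers in frame.items():
        sample.extend(rng.sample(providers, alloc[c]))
    return sample

# Example frame resembling pilot town 2088 (33 providers, 10 sampled);
# the 15/8/10 split across categories is assumed for illustration.
frame = {
    "physician": [f"MD{i}" for i in range(15)],
    "dentist":   [f"DDS{i}" for i in range(8)],
    "pa_np_mw":  [f"PA{i}" for i in range(10)],
}
print(sample_providers(frame, cap=10))  # 5 physicians, 2 dentists, 3 PA/NP/MW
```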
The Health Care Provider Survey was primarily a paper survey distributed by mail, since mailing addresses were available for the sampled providers but email addresses were not. However, the survey was also made available online for those who preferred a web format.
In keeping with the protocol described in the ICR, SBRS staff mailed a survey packet, including a 2-sided cover letter, project brochure, survey, and postage-paid return envelope, to the sampled providers in each of the 12 pilot towns. The cover letter gave instructions for accessing the survey online. The approved protocol included sending a second packet and also making a follow-up phone call to non-responders. To identify the most effective sequencing of those two follow-up procedures, SBRS divided the sample in half: half of the non-responders received a second survey packet and subsequently a follow-up phone call, while the other half received a follow-up phone call and then the second survey packet (a sketch of this random split appears below).
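The random halving of non-responders into the two follow-up sequences can be sketched as follows; the provider identifiers and seed are placeholders.

```python
import random

def assign_followup_arms(nonresponders, seed=7):
    """Randomly split non-responders between the two follow-up
    sequences tested in the pilot."""
    rng = random.Random(seed)
    shuffled = list(nonresponders)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "packet_then_call": shuffled[:half],   # second packet first
        "call_then_packet": shuffled[half:],   # phone call first
    }

arms = assign_followup_arms([f"provider_{i:02d}" for i in range(20)])
print({arm: len(ids) for arm, ids in arms.items()})
```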
SBRS staff monitored the progress of both mail and web survey data collection and completed follow-up phone calls as required under the split-sample procedure. Completed paper surveys were received, checked, and key-entered. An Excel data file was compiled with all data from both mail and web surveys.
Provider Pilot Study Outcomes
The Health Care Provider Survey produced mixed results. The follow-up contact protocol experiment indicated that it was operationally more helpful to follow the first survey packet with a phone call and then, if appropriate, a second survey packet. The follow-up calls identified three incorrect addresses requiring a re-mailed survey, as well as two ineligible providers who should not have been included in the sample. Identifying these issues prior to the second survey mailing increased operational efficiency, although only slightly.
The greatest difficulty encountered with the follow-up phone call process was the challenge presented by office staff “gatekeepers.” Providers were typically with patients and unavailable to talk on the phone. Because SBRS staff were unable to speak with the providers, the follow-up calls were not a particularly effective means of encouraging survey participation. Of the three surveys re-sent to new addresses, for example, none were completed and returned.
Responses in the Pilot Study
Survey response was lower than anticipated. Responses by pilot town appear in Table 1, along with region, hospital status, and size of the provider frame. Twenty surveys were completed, 19 on paper and one online. One of the pilot towns had no health care providers at all, and no provider surveys were completed in one town that had 4 providers in the sample. Among the remaining 10 pilot towns, one survey was completed in each of four towns, two surveys in each of four other towns, and four surveys in each of the two remaining towns.
Five providers were classified as ineligible. Two of them returned a blank survey indicating that they were retired. Follow-up phone calls identified two other providers who had retired or moved away, as well as one who was unknown to staff at the hospital and clinic; all three were also classified as ineligible. There was one refusal, and there was no response from the remaining 58 sampled providers.
After removing the 5 ineligible cases from the sample, the final response was 20 surveys completed out of 79, or an overall response rate of 25.3%. This is well below the response rate goal of 80% described in the ICR. Proposed changes designed to increase the response rate and to increase the total number of completed surveys are described in the Proposed Changes section below.
SBRS staff also reviewed survey response by provider type, as shown in Table 2. The sample of 84 providers included 38 physicians, 20 dentists, and 26 in the third category (PA/NP/MW). Response numbers are quite small, especially when reviewed by provider category, but the response rate for physicians was 21.6% (8 out of 37 eligible), for dentists 38.9% (7 out of 18 eligible), and for the PA/NP/MW category 20.8% (5 out of 24 eligible).
Table 2. Health Care Provider Frame, Sample, and Survey Outcomes by Provider Category
| Provider Category | Providers in Frame | Providers Sampled | Ineligible | Refused | No Response | Complete |
|---|---|---|---|---|---|---|
| Physicians | 72 | 38 | 1 | 0 | 29 | 8 |
| Dentists | 36 | 20 | 2 | 1 | 10 | 7 |
| PA/NP/MW¹ | 48 | 26 | 2 | 0 | 19 | 5 |
| TOTALS | 156 | 84 | 5 | 1 | 58 | 20 |
¹ The PA/NP/MW category includes Physician Assistants, Nurse Practitioners, and Certified Nurse Midwives.
SBRS staff examined the data from the 20 completed surveys for completeness. Item non-response was minimal, with less than 1.2% of the data missing. Two open-ended items were each skipped by two respondents. Three 6-part question series had missing responses, two of them skipped by one respondent each and the third by two. The remaining item non-response consisted of four single-response questions, each omitted by one respondent. There was no apparent pattern or identifiable reason for the missing responses. The survey questions with item non-response are listed in Table 3, followed by a sketch of this type of completeness check.
Table 3. Provider Survey Item Non-response
| Item # | Question Text | # Responses Missing |
|---|---|---|
| Q7 | Did you spend any part of your residency, an internship, or externship in a rural area or a small town (<20,000 pop.)? (1 = Yes, 2 = No) | 1 |
| Q17 | Do you have adequate professional coverage for your practice while you are on vacation? (1 = Yes, 2 = No) | 1 |
| Q20a-f | What did the recruitment entail? (Select all that apply) a. Information provided by community (e.g., brochures, lists of services, etc.); b. Site visit for myself arranged by community; c. Site visit for my spouse/children arranged by community; d. Site visit for myself arranged by employer; e. Site visit for my spouse/children arranged by employer; f. Other (please describe) | 1 |
| Q21s | How important to you were each of the following factors in your decision to practice in this community? s. Quality of the medical community (scale of 1 = Not Important to 5 = Very Important, with 3 = Neutral) | 1 |
| Q32a-f | In your opinion, are the changes in the availability of health care in this community due to any of the following reasons? (1 = Yes, 2 = No, 3 = Don’t Know) a. Changes in health care facilities or equipment; b. Changes in health care professionals; c. Changes in health facility administration/ownership; d. Changes in government policies/programs; e. Changes in the health insurance industry; f. Changes in the local economy or business community | 2 |
| Q33a-f | In your opinion, are the changes in the quality of health care in this community due to any of the following reasons? (Same response options and sub-items a-f as Q32) | 1 |
| Q36 | In general, what would you say is the most important factor in successfully recruiting or retaining health care providers in your town? (open text response) | 2 |
| Q37 | In general, what would you say is the greatest difficulty in recruiting or retaining health care providers in your town? (open text response) | 2 |
| Q48 | What is your ethnicity? (1 = Hispanic, 2 = Not Hispanic) | 1 |
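A completeness check of the kind described above can be done in a few lines of pandas; the column names and values below are hypothetical stand-ins for the compiled data file.

```python
import pandas as pd

# Hypothetical extract of the compiled mail/web data file; real column
# names and response codes follow the survey instrument.
df = pd.DataFrame({
    "Q7":  [1, 2, None, 1],
    "Q17": [1, 1, 2, 2],
    "Q48": [2, None, 2, 1],
})

missing = df.isna().sum()
print(missing[missing > 0])                       # items with non-response
print(f"overall missing: {100 * df.isna().to_numpy().mean():.1f}%")
```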
De-identified data for the 20 provider respondents were delivered to the ERS PI, who determined that the information obtained from the provider surveys was on target with project goals. Nothing in the pilot study indicated that either the survey questions or the formatting should be revised.
Respondent Burden in the Pilot Study
The estimated burden on sampled pilot providers is shown in Table 4, which includes the total number of primary health care providers contacted for the mail/web survey, the outcomes, and the burden in minutes. The pilot provided no indication that the burden estimate per case for non-response or for completing the mail/web survey should be revised. The estimate allows 15 minutes for each sampled provider to read and review the survey materials and decide whether or not to participate, and an additional 15 minutes to complete the survey, either online or on paper. The resulting total burden for the Pilot Study’s Health Care Provider Survey is 1,560 minutes, or 26 hours.
Table 4. Health Care Provider Survey Outcome Totals and Burden
| Outcomes | Number | Average Minutes per Case | Total Minutes | Total Hours |
|---|---|---|---|---|
| Not Eligible | 5 | 15 | 75 | 1.25 |
| Refused | 1 | 15 | 15 | 0.25 |
| No Response | 58 | 15 | 870 | 14.50 |
| Completed Surveys | 20 | 30¹ | 600 | 10.00 |
| TOTAL | 84 | | 1,560 | 26.00 |
¹ The 30-minute time allocation for each Completed Survey includes 15 minutes for reviewing the request and reading enclosed materials and 15 minutes for completing the survey.
If the provider sampling procedures do not change, the average number of providers per community matches that of the pilot study, and the same average burden per case is required in the remaining 138 communities, the total burden for the health care provider surveys would be 325 hours (26 hours for 12 communities × 150/12). This is less than half of the total burden estimated in the ICR for the health care provider surveys (675 hours), because the burden estimate in the ICR conservatively used the maximum number of provider respondents for each community. The arithmetic is sketched below.
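A compact restatement of this arithmetic, using the outcome counts and per-case minutes from Table 4:

```python
# Outcome counts and per-case minutes from Table 4.
outcomes = {
    "not_eligible": (5, 15),
    "refused":      (1, 15),
    "no_response":  (58, 15),
    "completed":    (20, 30),
}
pilot_minutes = sum(n * mins for n, mins in outcomes.values())
pilot_hours = pilot_minutes / 60
print(pilot_hours)              # 26.0 hours for the 12 pilot communities
print(pilot_hours * 150 / 12)   # 325.0 hours extrapolated to all 150
```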
Proposed Changes
There are no recommended changes to the Health Care Provider Survey instrument. The Pilot Study indicates that the survey questions and formatting are sound and will provide the information needed to meet project goals, as long as a sufficient number of completed surveys are obtained.
The number of completed surveys is a function of the sample size and the response rate. Development of the provider sample frame through secondary data, together with completion of most of the key informant interviews, results in a current estimate of 2,047 primary health care providers in the 150 study communities.1 Using the original sampling design, which limits the maximum number of sampled providers to 10 per community, results in an estimated sample size of 1,131 providers, less than the sample size of 1,500 estimated in the ICR.2 Furthermore, the pilot study, with its 25.3% response rate, has demonstrated that obtaining an 80% response rate from health care providers is unlikely to be possible. If the same response rate is achieved in the full study, we can expect to obtain about 286 completed surveys rather than the 1,200 estimated in the ICR, raising concerns about the statistical power to achieve the study objectives. As a result, the ERS PI and SBRS, in consultation with the project statistician, propose both an increase in the sample size and a data collection protocol change designed to increase the survey response rate, which together will generate a greater number of completed surveys.
Data Collection Protocol Change
As described above, the Pilot Study demonstrated a need to expand efforts to increase health care provider response rates. The follow-up phone calls have not proven particularly effective in increasing survey participation because SBRS staff are generally able to speak only with office staff, not the providers. With limited options available, the ERS PI and SBRS staff propose incorporating an email follow-up promoting the web survey option.
The current provider frame includes very few email addresses, but the ERS PI and SBRS propose using the follow-up phone call process to obtain or verify provider email addresses. Email reminders can then be sent directly to those providers for whom an email address is identified. The follow-up emails will include a personalized link to the survey to facilitate easy and quick access. Even if providers choose not to complete the online survey, the email will serve as another reminder to complete and return the paper survey. The resulting increase in response rate is not expected to be large, particularly since email addresses will not be obtainable for all providers, but the response rate is projected to rise from 25.3% to 30%.
Sampling Change
For the remaining 138 project communities, the ERS PI and SBRS propose increasing the maximum number of sampled providers per community from 10 to 32. This proposal is based on the 25.3% response rate found in the pilot study: if a similar response rate applies in the remaining communities, the expected maximum number of respondents per community would be 8 (32 × 25.3% ≈ 8), as originally proposed in the ICR. As shown below, this change will provide sufficient statistical power to answer the study questions with confidence.
As of the date of this report, the community frames of health care providers have been verified with at least one individual in 111 of the 138 remaining communities and tentative frames have been compiled for the additional 27 communities. A histogram depicting the number of providers in the remaining 138 project communities is shown in Figure 1.
Figure 1. Number of Providers in Sampled Communities
Of the remaining 138 communities, 16 have no providers, a somewhat greater proportion than in the Pilot Study (where one out of 12 had no providers). In addition, 51 communities have from 1 to 10 providers, 42 have 11 to 20, 17 have 21 to 32, and 12 have 33 to 56 providers. Increasing the maximum number of sampled providers per community from 10 to 32 would thus require a sampling process in only the 12 communities with more than 32 providers; a census of providers would be conducted in the other communities. This would result in a total project sample of 1,859 providers from all 150 communities, based on the current frame status.
At the pilot study’s 25.3% response rate, about 470 completed surveys would be obtained from a sample of 1,859 providers. If the overall response rate increases to 30%, as estimated with the proposed protocol change, about 558 completed surveys could be obtained (see the sketch below). This is still significantly less than the maximum of 1,200 completed surveys described in the ICR, but it is nearly double the number of surveys that would be obtained if no changes were made to either the sample size or the data collection protocol. The larger number will provide greater flexibility in analyzing survey responses by subgroups, such as region, hospital status, or provider type.
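The expected yields quoted above follow directly from the sample size and response rate; a one-line check:

```python
def expected_completes(sample_size, response_rate):
    """Expected number of completed surveys, rounded to the nearest whole."""
    return round(sample_size * response_rate)

total_sample = 84 + 1775                         # pilot + proposed Main Study
print(expected_completes(total_sample, 0.253))   # 470 at the pilot rate
print(expected_completes(total_sample, 0.30))    # 558 at the projected rate
```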
The proposed changes to the data collection protocol and the sampling procedure are listed in Table 5.
Table 5. Proposed Health Care Provider Survey Changes
| Change | Current Status | Proposed Change |
|---|---|---|
| Data Collection Protocol | Contact sampled providers by mail (up to 2 copies of the survey packet) with a follow-up phone call. The current response rate is 25.3%. | Contact sampled providers by mail (up to 2 copies of the survey packet) with a follow-up phone call used to request or verify provider email addresses. Where email addresses are available, send a follow-up email with a link to the web survey. The anticipated response rate under this procedure is 30%. |
| Sampling | Sample up to 10 providers per community (where available). Based on the Pilot Study and the current status of the provider frame, this will result in a total sample of approximately 1,131 providers (84 Pilot Study, 1,047 Main Study). | Sample up to 32 providers per community (where available). Based on the Pilot Study and the current status of the provider frame, this will result in a total sample of approximately 1,859 providers (84 Pilot Study, 1,775 Main Study). |
Impact of Pilot Results and Proposed Changes on SRCWHCP
Impact of Pilot Results and Proposed Changes on Estimated Statistical Power
It is essential that, with these proposed changes, the study retain sufficient statistical power to address the project’s research questions as described in the ICR. Power analysis indicates that this is indeed the case.
The Iowa State University statistician for this project (Cindy Yu) estimated the statistical power to detect an effect size of either 0.05 or 0.10 for a binary response variable (an effect size of 0.10 was used in the power analysis presented in the ICR Supporting Statement Section B, January 2014), using the proposed sampling approach, the current version of the population frame of health care providers in the study communities, and the response rates and responses to selected questions found in the pilot study.3 A subset of the questions in the survey was used for the power analysis: questions Q21d, Q21e, Q21g, Q21k, and Q25. These questions were selected because of their critical importance to the objectives of the study4 and because the responses observed in the pilot study suggested that the variance of estimated means would be largest for these questions.5 Questions Q21d, Q21e, Q21g, and Q21k all involved an ordinal response to questions about how important a particular factor (“good place to raise a family”, “quality of schools”, “recreational opportunities”, and “friendliness of the people”, respectively) was to the respondent in deciding to practice in the study community. The response values ranged from 1 (“not important”) to 5 (“very important”). For the power analysis reported in Table 6, these responses were simplified to a binary response of 1 if the respondent indicated 4 or 5 for the factor (“important” or “very important”), and 0 if the respondent indicated 1, 2, or 3 (“not important” to “neutral”).6 Responses to Question Q25 (“Have you ever seriously considered moving and practicing in a different location?”) are already in binary form.
Table 6 shows that the estimated power to detect an effect size of 0.10 with the proposed sampling approach is above 90% in all cases, and close to 100% for four of the five selected questions. This indicates that the proposed sampling approach has high statistical power to detect an effect size of 0.10 in these questions. The power is much lower for most of these questions (except Q21g and Q25) for an effect size of 0.05. Thus, although more precise estimates will be possible for some questions, we do not expect a substantially improved ability to detect effects smaller than 0.10 in many cases. This is still consistent with the initial design of the study.
The main reason we are able to obtain adequate statistical power, despite obtaining a smaller sample than originally envisioned, is that there is sufficient independent variation across responses within each community so that the intracluster correlation is low. Our proposed increase in the maximum sample size per community also contributes to the statistical power.
Table 6. Power to Detect an Effect Size of 0.05 or 0.10 for Selected Questions in the Provider Survey
| Survey Question | Power (Effect Size = 0.05) | Power (Effect Size = 0.10) |
|---|---|---|
| Q21d (“Good place to raise a family” is important) | 0.492 | 0.973 |
| Q21e (“Quality of schools” is important) | 0.585 | 0.992 |
| Q21g (“Recreational opportunities” are important) | 0.930 | 1.000 |
| Q21k (“Friendliness of the people” is important) | 0.379 | 0.910 |
| Q25 (“Seriously considered moving practice”) | 0.958 | 1.000 |
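The statistician’s power calculations are not reproduced here, but the general approach can be illustrated with a Monte Carlo sketch for clustered binary responses. All parameter values below (number of communities, respondents per community, baseline proportion, and intracluster correlation) are illustrative assumptions, not the values used in the actual analysis.

```python
import numpy as np

def simulate_power(n_clusters=130, cluster_size=4, p0=0.5, effect=0.10,
                   icc=0.02, reps=2000, seed=1):
    """Monte Carlo power to detect a shift of `effect` in a binary
    proportion when responses are clustered by community.
    Within-cluster correlation is induced via a beta-binomial model."""
    rng = np.random.default_rng(seed)
    # Beta parameters chosen so the within-cluster correlation equals icc.
    k = (1 - icc) / icc
    a, b = (p0 + effect) * k, (1 - p0 - effect) * k
    rejections = 0
    for _ in range(reps):
        cluster_p = rng.beta(a, b, size=n_clusters)
        successes = rng.binomial(cluster_size, cluster_p)
        cluster_means = successes / cluster_size
        p_hat = cluster_means.mean()
        # Two-sided Wald test of H0: p = p0 with a cluster-robust variance.
        se = np.sqrt(cluster_means.var(ddof=1) / n_clusters)
        if abs(p_hat - p0) / se > 1.96:
            rejections += 1
    return rejections / reps

print(simulate_power(effect=0.10))   # high power, qualitatively as in Table 6
print(simulate_power(effect=0.05))   # noticeably lower power
```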
Addressing Concerns about Possible Non-response Bias
The low response rate achieved in the pilot study and the proposed increase in the maximum number of providers sampled in each community suggest that non-response bias could be a concern in analyzing the data. As indicated in the ICR Section B, we will investigate the potential for non-response bias using data in the sample frame and from other secondary sources. The non-response bias study will test whether there are quantitatively and statistically significant differences in response rates for health providers of different types (i.e., physicians vs. dentists vs. mid-level providers), in different regions, in communities with vs. without a hospital, and by other characteristics that may affect the opportunity costs of the providers’ time or their propensity to cooperate in a survey (e.g., the degree of rurality of the community). If no quantitatively and statistically significant differences in response rates are found across these characteristics, we will conclude that we have no evidence of non-response bias (although this would not prove the absence of bias, which is impossible to prove).

If we do find significant differences in response rates associated with some observed characteristics, the next step will be to investigate whether those characteristics are associated with differences in response variables in the survey, such as how important various community factors are in provider recruitment or retention. Even if response rates differ in ways associated with particular observed characteristics, if differences in those characteristics are not associated with different responses to the survey, there is no reason to be concerned about non-response bias. If we find that both response rates and survey responses differ in ways associated with particular observed characteristics, we will correct for non-response bias by incorporating a propensity score into the estimator for mean responses, which accounts for the different probabilities that different providers respond to the survey. The propensity score estimates will be based on the observed factors found to be associated with both the providers’ propensity to respond and the survey responses. A sketch of this adjustment appears below.
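To illustrate the propensity-score correction described above, the sketch below fits a logistic response-propensity model on frame characteristics and computes an inverse-probability-weighted mean. The data are synthetic and the covariates are placeholders for the characteristics named above; the project’s actual estimator may differ in form.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500  # synthetic sampled providers

# Placeholder frame-level covariates (provider type, hospital presence,
# rurality); the real model would use the characteristics named above.
frame = pd.DataFrame({
    "is_physician": rng.integers(0, 2, n),
    "has_hospital": rng.integers(0, 2, n),
    "rurality":     rng.normal(0.0, 1.0, n),
})
responded = rng.random(n) < 0.30                 # synthetic ~30% response
y = rng.integers(0, 2, n).astype(float)          # synthetic binary response
y[~responded] = np.nan                           # unobserved for non-respondents

# Step 1: model each provider's propensity to respond.
ps_model = LogisticRegression().fit(frame, responded)
propensity = ps_model.predict_proba(frame)[:, 1]

# Step 2: weight respondents by the inverse of their response propensity.
weights = 1.0 / propensity[responded]
ipw_mean = np.average(y[responded], weights=weights)
print(f"unweighted mean: {np.nanmean(y):.3f}  IPW mean: {ipw_mean:.3f}")
```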
Impact of Proposed Changes on Respondent Burden
The ERS PI and SBRS have calculated the anticipated respondent burden that would result if the above changes are implemented for the remainder of the project. If the provider sampling procedure is changed to include a maximum of 32 providers per community, and if the email protocol is implemented with the result of a 30% response rate, the total estimated burden will still be slightly less than the amount described in the ICR.
A comparison of the currently approved burden estimates and the proposed revisions is shown in Table 7, including both the actual burden for the Pilot Study and the anticipated burden for the Main Study, assuming the proposed changes are implemented. The only anticipated change to burden minutes per case results from adding the email component to the data collection protocol. While email addresses will not be available for all providers in the remaining 138 communities, the revised per-case burden estimate was increased by 2 minutes for both respondents and non-respondents.
The increased sample size and 30% response rate would affect the numbers of both respondents and non-respondents. Implementing the proposed changes is anticipated to result in a total sample size of 1,859 with about 558 completed surveys. The Pilot Study produced 20 completed surveys from a sample of 84; the Main Study will produce about 538 surveys from a sample of about 1,775. For the Main Study (i.e., the remaining 138 communities), the burden for each non-respondent is 17 minutes and the burden for each completed survey is 32 minutes. The total estimated Main Study burden is 286.9 hours for 538 completed surveys and 350.5 hours for 1,237 non-respondents. When combined with the Pilot Study, the total project burden is 663.4 hours, slightly less than the 675 hours stated in the ICR.
Table 7. Health Care Provider Survey Revised Burden Calculations
| Study Phase | Approved: Number of Providers | Approved: Minutes per Case | Approved: Total Hours | Revised: Number of Providers | Revised: Minutes per Case¹ | Revised: Total Hours |
|---|---|---|---|---|---|---|
| Pilot Study (12 Communities, Completed): Non-Respondent Burden | 24 | 15 | 6 | 64 | 15 | 16 |
| Pilot Study (12 Communities, Completed): Respondent Burden (Completed Surveys) | 96 | 30 | 48 | 20 | 30 | 10 |
| Main Study (138 Communities, Proposed): Non-Respondent Burden | 276 | 15 | 69 | 1,237 | 17 | 350.5 |
| Main Study (138 Communities, Proposed): Respondent Burden (Completed Surveys) | 1,104 | 30 | 552 | 538 | 32 | 286.9 |
| TOTAL PROJECT SAMPLE | 1,500 | | 675 | 1,859 | | 663.4 |

Note: “Approved” columns show the burden hours approved in the ICR; “Revised” columns show actual Pilot Study outcomes and the proposed Main Study revisions.
¹ The time allocation for each Completed Survey includes 15 minutes for reviewing the request and reading enclosed materials and 15 minutes for completing the survey. For the Main Study, 2 additional minutes are included due to incorporating an email reminder.
Summary
The pilot study results indicate that the survey questions are working effectively and are expected to provide the data needed for analysis. The overall procedure specified in the ICR for the Health Care Provider component is basically sound. However, the response rate is significantly lower than anticipated, and two changes are proposed to help increase both the number of completed surveys and the overall response rate: (1) increase the maximum number of providers sampled per community from 10 to 32, and (2) add an email follow-up component that will serve as a reminder and provide providers easy access to the online survey. Implementing these two changes will result in a final data set with the necessary statistical power for analysis while maintaining an estimated burden below that described in the ICR. Concerns about possible non-response bias will be addressed by testing for significant differences in response rates and survey responses across observed characteristics of the providers and their communities and, if differences are found, correcting for them in estimation using propensity scores.
1 This estimate is subject to change as the key informant interviews are completed in all study communities.
2 The sample size estimate in the ICR assumed that 10 providers would be contacted in every community, without accounting for communities with fewer than 10 providers.
3 The power estimates presented here are based on the mean response rate found in each of the three study regions, rather than the aggregate response rate across all communities in all regions. There were insufficient observations in some of the six study strata (defined by region x whether the community has a hospital) to use separate response rates for each stratum.
4 All of the sub-questions of Q21 are critical to the objective of understanding how important different community factors are in attracting health care providers to rural communities, and Q25 is critical for understanding whether rural health providers intend to keep practicing in the communities studied.
5 This observation was based on the mean response observed in the pilot study. As shown in the power analysis in the ICR, Section B, a mean response close to 0.5 for a binary response variable yields a larger variance and lower power than a mean response further from 0.5. This comparison holds the intracluster correlation constant, although it can vary across the different questions; we did not consider variations in the intracluster correlation in selecting which questions to use for this power analysis.
6 The statistician also analyzed the power for a binary response equal to 1 if the respondent indicated 5, and 0 otherwise (i.e., whether the respondent considered the factor to be “very important”). In all cases, the power was greater for this type of binary response than when evaluating whether the respondent indicated 4 or 5. This is probably because the intra-community correlation of responses is greater for evaluating whether a factor is at least important in a community than when evaluating whether it is very important. In other words, providers in the same community differ more often in assessing whether a factor is very important than in assessing whether it is at least important (and possibly very important).