Ryan White HIV/AIDS Program Modeling Study
Paperwork Reduction Act Supporting Statement Part B
June 14, 2013
Department of Health and Human Services
Office of the Chief Information Officer
Office of Resources Management
200 Independence Avenue, S.W. 537-H
Washington, DC 20201
CONTENTS
B. Collection of Information Employing Statistical Methods
1. Respondent Universe and Sampling Methods
2. Procedures for the Collection of Information
3. Methods to Maximize Response Rates and Deal with Nonresponse
4. Tests of Procedures or Methods to Be Undertaken
5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Appendix A: Relevant Sections of the 2010 Patient Protection and Affordable Care Act
Appendix B: Ryan White HIV/AIDS Program Modeling Study 60-Day Federal Register Notice
Appendix C: Public Comments from External Consultants
Appendix D: Responses to Comments from External Reviewers
Appendix E: Advance Letter Requesting Participation in Interview for the ASPE Study
Appendix F: Ryan White Part A/B Grantee Interview Guide
Appendix G: Ryan White Provider Interview Guide
TABLES
B.1. Sampling Plan for RWHAP Grantee and Provider Semi-Structured Interviews
B.2. Precision Estimates
B.3. Data Collection Schedule
Supporting Statement Part B for
Ryan White HIV/AIDS Program Modeling Study
B. Collection of Information Employing Statistical Methods
1. Respondent Universe and Sampling Methods

The respondent universe for the semi-structured telephone interviews will be the program managers and administrators of the Ryan White HIV/AIDS Program (RWHAP) Part A municipal and Part B state grants and the administrators of medical provider organizations receiving Parts A, B, C, D, F, and/or Minority AIDS Initiative (MAI) funding.
The contractor will use data from the U.S. Department of Health and Human Services (DHHS) Health Resources and Services Administration (HRSA) 2009 RWHAP Data Reports (RDRs) and the most current HRSA lists of RWHAP grantees and providers to create the sample frame. While all states are Ryan White Part B grantees, only 25 states and the District of Columbia have Ryan White Part A grantees. Under the Ryan White statute, Part A grants are awarded to eligible metropolitan areas (EMAs) and transitional grant areas (TGAs) that have a population of at least 50,000 people and have reported at least 2,000 HIV/AIDS cases in the most recent five years (in the case of EMAs) or 1,000 to 1,999 new HIV/AIDS cases in the most recent five years (in the case of TGAs). States with multiple metropolitan areas with high HIV prevalence may have more than one Part A grantee (such as California, New York, and Florida), and a Part A grantee may span more than one state (such as the Boston EMA and the St. Louis EMA). All 51 of the Part B state grantees (including the District of Columbia) will be selected for telephone interviews. In addition, we plan to select at random, with equal probability, one Part A grantee within each of the 26 states/District with at least one Part A grantee. We plan to select a probability sample of 164 RWHAP-funded provider organizations to obtain 133 interviews.
Before sampling the provider organizations, we will exclude any providers with fewer than 10 HIV-positive patients. Of the 2,009 providers listed in the RWHAP Data Report (RDR), 200 had fewer than 10 HIV-positive patients, leaving 1,809 provider organizations in the sample frame. From the three VA facilities in the population (one in the Northeast and two in the South), we plan to select two facilities to interview (one in each region) because of the large number of people living with HIV/AIDS (PLWHAs) whom they serve. We will then select a sequential random sample of 162 Ryan White Part A, B, C, and D provider organizations across all states, explicitly stratifying and proportionally allocating the sample by type of organization (health department, hospital- or university-based clinic, community-based organization, public community health center, mental health or substance abuse center, and other^b) and Census region (Northeast, Midwest, South, and West), after sorting the providers within each stratum by grant type indicators (Parts A, B, C, and/or D). Sorting by grant type before sampling is a way of implicitly stratifying, which helps ensure that, within each stratum, the sample of providers resembles the population of providers in the United States with respect to grant type. Together, the explicit and implicit stratification will ensure that we interview providers representing a range of organizational types and funding streams across the country. Organization type is an important provider-level variable because some types of organizations, such as public community health centers, are likely to be differentially affected by the Affordable Care Act.
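To make the selection procedure concrete, the sketch below implements a stratified, sequential (systematic) selection of the kind described above. It is a minimal illustration in Python, not the contractor's production code: the frame records and field names (org_type, region, grant_types) are hypothetical, and the rounded proportional allocations may require small manual adjustments to hit the 162-provider target exactly.

```python
import random
from collections import defaultdict

def select_providers(frame, total_n=162, seed=1):
    """Stratified sequential selection with implicit stratification.

    frame: list of dicts with hypothetical keys 'org_type', 'region',
    and 'grant_types' (e.g., {'A', 'C'}).
    """
    rng = random.Random(seed)

    # Explicit strata: organization type crossed with Census region.
    strata = defaultdict(list)
    for rec in frame:
        strata[(rec['org_type'], rec['region'])].append(rec)

    sample = []
    for units in strata.values():
        # Proportional allocation of the total sample to this stratum.
        n_h = round(total_n * len(units) / len(frame))
        if n_h == 0:
            continue
        # Implicit stratification: sort within stratum by grant-type
        # indicators so the sequential pass spreads across grant types.
        units.sort(key=lambda r: tuple(sorted(r['grant_types'])))
        # Sequential (systematic) selection with a random start.
        step = len(units) / n_h
        start = rng.uniform(0, step)
        sample.extend(units[int(start + i * step)] for i in range(n_h))
    return sample
```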
Table B.1 shows the total number of grantees and providers by organization type and Census region based on the 2009 RDR data. The expected distribution of the completed interviews will follow the distribution shown in the table. For example, there are 143 providers associated with hospital- or university-based clinics in the Northeastern United States, which represents 7.9 percent of all providers meeting our study criteria (143 out of a total frame of 1,809 providers). Therefore, we expect our participating sample to contain 10 hospital- or university-based providers in the Northeast (7.9 percent of 133 completed provider interviews).
Table B.1. Sampling Plan for RWHAP Grantee and Provider Semi-Structured Interviews
Grant or Provider Organization | Northeast | Midwest | South | West | All

Part A and B Administrative Grantees: Population of Grantees
Part A Grantee | 12 | 7 | 20 | 14 | 53
Part B Grantee | 9 | 12 | 17 | 13 | 51
Total Grantees | 21 | 19 | 37 | 27 | 104

Part A and B Administrative Grantees: Expected Number of Completed Grantee Interviews
Part A Grantee^a | 5 | 6 | 9 | 6 | 26
Part B Grantee | 9 | 12 | 17 | 13 | 51
Total Grantees | 14 | 18 | 26 | 19 | 77

Part A, B, C, or D Providers: Population of Providers (with 10 or More HIV Patients)
Health Department | 20 | 38 | 105 | 86 | 249
Hospital- or University-Based Clinic | 143 | 54 | 98 | 53 | 348
Community-Based Organization | 284 | 96 | 255 | 177 | 812
Public Community Health Center | 76 | 26 | 61 | 51 | 214
Mental Health or Substance Abuse Center | 22 | 7 | 19 | 17 | 65
Department of Veterans Affairs Facility | 1 | 0 | 2 | 0 | 3
Other^b | 31 | 19 | 47 | 21 | 118
Total Providers | 577 | 240 | 587 | 405 | 1,809

Part A, B, C, or D Providers: Expected Number of Completed Provider Interviews
Health Department | 1 | 3 | 8 | 6 | 18
Hospital- or University-Based Clinic | 10 | 4 | 7 | 4 | 25
Community-Based Organization | 21 | 7 | 18 | 13 | 59
Public Community Health Center | 6 | 2 | 4 | 4 | 16
Mental Health or Substance Abuse Center | 2 | 1 | 1 | 1 | 5
Department of Veterans Affairs Facility | 1 | 0 | 1 | 0 | 2
Other^b | 2 | 1 | 3 | 2 | 8
Total Providers | 43 | 18 | 42 | 30 | 133
^a States with multiple metropolitan areas with high HIV prevalence may have more than one Part A grantee, and a single Part A grant may also span more than one state. Only 25 states and the District of Columbia have Part A grants. As described in the sampling methods above, we will select at random, with equal probability, one Part A grantee within each of the 26 states/District with at least one Part A grant.

^b Several organization types identified in the RDR data had low numbers of provider organizations. These organization types have been condensed into the "other" category, which includes private practices, PLWHA coalitions, agencies for multiple fee-for-service providers, and other facilities.
2. Procedures for the Collection of Information

We will construct weights to account for variation in the probability of selection and variation in the cooperation rates of those selected. These weights will allow the findings from the sampled and responding grantees and providers to be generalized to the full population. We will construct three sets of weights for analytic purposes, one for each of the following sample member types: (1) Part A grantees, (2) Part B grantees, and (3) providers. The sampling weight is the inverse of the probability of selection. For the Part B grantees, this factor equals 1, as all such grantees are selected. For all other sample members, this factor weights up the selected grantees and providers to their respective populations. We expect a response rate of 100 percent for municipal and state grantees (Part A and Part B), but we will adjust their sampling weights as needed for any nonresponse encountered, using a simple weighting cell adjustment approach. We expect a response rate of 80 percent or higher for the providers and will adjust their sampling weights for nonresponse using a response propensity modeling approach based on provider characteristics such as facility type, Census region, and provider size. These nonresponse weighting adjustments allow the responding sample to represent the selected sample.
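As an illustration of these weighting steps, the sketch below constructs base weights and applies both adjustment approaches described above. It is a simplified sketch assuming pandas and scikit-learn; the column names (p_select, responded, and the numeric covariates) are hypothetical, and a production version would add safeguards such as bounding the adjustment factors and checking that every cell contains respondents.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def base_weights(df):
    # Sampling weight = inverse of the probability of selection
    # (1.0 for Part B grantees, which are selected with certainty).
    return df.assign(weight=1.0 / df['p_select'])

def cell_adjust(df, cell='region'):
    # Weighting-cell adjustment (planned for Part A/B grantees): within
    # each cell, inflate respondent weights to reproduce the weight total
    # of the full selected sample; nonrespondents get weight 0.
    # Assumes each cell contains at least one respondent.
    full = df.groupby(cell)['weight'].transform('sum')
    resp = (df['weight'] * df['responded']).groupby(df[cell]).transform('sum')
    return df.assign(weight=np.where(df['responded'],
                                     df['weight'] * full / resp, 0.0))

def propensity_adjust(df, covars=('facility_type', 'region_code', 'size')):
    # Response-propensity adjustment (planned for providers): divide each
    # respondent's weight by its estimated probability of responding,
    # modeled here with a logistic regression on numeric covariates.
    X, y = df[list(covars)], df['responded']
    p_hat = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    return df.assign(weight=np.where(df['responded'],
                                     df['weight'] / p_hat, 0.0))
```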
Primary data collected for this study will consist of qualitative data derived from open-ended responses to the semi-structured interviews. These data will be coded and entered into Atlas.ti, a qualitative analysis application, enabling some quantitative estimates and comparisons between subgroups (for example, by type of respondent). The remainder of this section discusses the precision of the estimates based on these coded responses.
We will contact all Part B state grantees and expect all to respond. Because this is a census of Part B grantees, sampling error is not an issue for results from that part of the data collection effort. For the Part A municipal grantees, we will select 26 of the 53 grantees (one in every state with a Part A grant) and expect all, or nearly all, to respond. For providers, we expect to complete 133 interviews out of the 164 sampled, a response rate of approximately 80 percent, representing a population of 1,809. We do not plan to employ a finite population correction factor in any of our variance calculations, as doing so would limit the generalizability of the findings. For the Part A grantees, the selection of one grantee per stratum (state) poses a problem for calculating the within-stratum component of the variance, which requires at least two observations per stratum. For analytic purposes, we propose to collapse strata based on size (number of Part A grantees).
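The collapsed-stratum workaround can be illustrated as follows. This is a hedged sketch, not the analysis plan's exact estimator: it pairs single-grantee strata by size (the collapsing criterion named above) and applies the standard two-PSU-per-stratum formula, under hypothetical inputs.

```python
import numpy as np

def collapsed_stratum_variance(y, sizes):
    """Variance of an estimated total for a one-PSU-per-stratum design.

    y:     weighted stratum estimates (one selected Part A grantee each)
    sizes: number of Part A grantees per stratum, used to pair
           similar-sized strata before collapsing
    Assumes an even number of strata (26 here, giving 13 pairs).
    """
    order = np.argsort(sizes)  # pair strata of similar size
    var = 0.0
    for i, j in zip(order[0::2], order[1::2]):
        # With two PSUs per collapsed stratum, the standard estimator
        # contributes (y_i - y_j)^2 to the variance of the total.
        var += (y[i] - y[j]) ** 2
    return var
```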
We present the expected precision of estimates in terms of the half-width of a 95 percent confidence interval around a proportion. For the Part A grantees, this precision accounts for the design effect of 1.93 due to the differential probabilities of selection across states, which range from 0.11 (1 of 9 in the largest state) to 1.00 (1 of 1 in 17 states). For a proportional outcome, our sample size of 26 Part A grantees will yield confidence intervals no larger than plus or minus 0.27. Except for the two VA providers, the providers will all have the same probability of selection and, therefore, no design effect due to differential sampling rates. But we do expect differential response patterns resulting in variation in the nonresponse-adjusted weights for the providers. We assume a design effect equal to 1.25 for our calculations here. Our expected sample size of 133 providers will yield confidence intervals no larger than plus or minus 0.10. Table B.2 displays the precision estimates.
Table B.2. Precision Estimates
Respondent Type | Sample Size | Design Effect | Maximum Half-Width of 95 Percent Confidence Interval for a Proportion
Part A Grantees | 26 | 1.93 | ±0.27
Providers | 133 | 1.25 | ±0.10
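These half-widths can be reproduced directly from the normal approximation, using the conservative case p = 0.5 and inflating the variance by the design effect. A quick check in Python:

```python
from math import sqrt

def half_width(n, deff, p=0.5, z=1.96):
    # Half-width of a 95 percent confidence interval for a proportion,
    # with the variance inflated by the design effect.
    return z * sqrt(deff * p * (1 - p) / n)

print(round(half_width(26, 1.93), 2))   # Part A grantees: 0.27
print(round(half_width(133, 1.25), 2))  # Providers: 0.1
```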
If we were to compare two approximately equal-sized subgroups among the 133 providers (for example, providers in nonmedical community-based organizations versus all other types of providers), we would be able to detect a true underlying difference in a proportional outcome of .27 or greater with .80 power and a .05 type I error rate (two-sided test). If we were to compare a one-quarter subgroup to a three-quarters subgroup (for example, providers in hospital- or university-based clinics versus all other types of providers), we would be able to detect a minimum difference of .31 with the same power. These detectable differences are rather large, larger than the differences in coded responses we would expect to observe between groups here. However, the main purpose of this data collection effort is to provide qualitative results.
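The minimum detectable differences quoted above follow from the standard two-sample formula for proportions, with z = 1.96 for a two-sided .05 test, z = 0.84 for .80 power, the assumed design effect of 1.25, and the conservative p = 0.5. A short verification:

```python
from math import sqrt

def min_detectable_diff(n1, n2, deff=1.25, p=0.5,
                        z_alpha=1.96, z_beta=0.84):
    # Minimum detectable difference between two subgroup proportions,
    # with the variance inflated by the design effect.
    se = sqrt(deff * p * (1 - p) * (1 / n1 + 1 / n2))
    return (z_alpha + z_beta) * se

print(round(min_detectable_diff(66.5, 66.5), 2))    # equal halves of 133: 0.27
print(round(min_detectable_diff(33.25, 99.75), 2))  # quarter vs. three-quarters: 0.31
```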
No specialized sampling procedures will be used to accommodate unusual problems.
Data will be collected only once.
3. Methods to Maximize Response Rates and Deal with Nonresponse

Interviewers will be responsible for obtaining participants' agreement to complete the semi-structured telephone interview. Interviewers will be given extensive training and reference materials about data collection, including a discussion of how to gain respondent cooperation. We expect that all or nearly all grantees will participate in this data collection effort and that at least 80 percent of providers will respond. The research team has achieved high response rates on projects with similar interviewing tasks and target respondents. For example, several of the interviewers on this project achieved a 98 percent response rate using a comparable interview methodology on a Robert Wood Johnson Foundation evaluation project.
The primary strategy for achieving this response rate will be to underscore with sampled grantees the benefits of their participation. We will send sampled grantees an advance letter from the Deputy Assistant Secretary for Health Policy on the Assistant Secretary for Planning and Evaluation (ASPE)'s letterhead (Appendix E) stressing that the interview is their opportunity to provide input on issues relating to the future of RWHAP under the Affordable Care Act. One week after the advance letter is sent, interviewers will begin calling respondents to schedule interviews. If interviewers are unable to reach a respondent during the initial phone call, they will immediately send a follow-up e-mail; our experience with this population has shown that e-mail is a more efficient way to reach these respondents. The e-mail will repeat the content of the advance letter and provide a callback number and e-mail address for reaching the interviewer. Interviewers may make as many as six follow-up phone calls, at different times of the day and on different days of the week, and send as many as four follow-up e-mails over the data collection period. In week three of the data collection period, interviewers will debrief to identify techniques that have been successful in scheduling interviews. For example, if one interviewer has been more successful than others at scheduling interviews, that interviewer will share effective refusal-conversion techniques, such as agreeing to schedule an interview in the early morning or early evening or reassuring respondents that their responses will not be identifiable, so that others can adopt them. Table B.3 lists the data collection activity schedule and the expected cumulative number of completed interviews by week of data collection.
Table B.3. Data Collection Schedule
Week of Data Collection | Data Collection Activity | Cumulative Completed Interviews
1 | Mail advance letter to respondents | --
2–3 | Start calling respondents to schedule telephone interviews; send follow-up e-mails to respondents who were unreachable by phone | 15
4 | Continue calling and e-mailing respondents; interviewers debrief on call/e-mail effort | 30
5–7 | Continue calling and e-mailing respondents | 75
8 | Continue calling and e-mailing respondents; interviewers debrief | 90
9–11 | Continue calling and e-mailing respondents | 135
12 | Continue calling and e-mailing respondents; interviewers debrief | 150
13–14 | Continue calling and e-mailing respondents | 180
15 | Continue calling; send final follow-up e-mail | 195
16 | Make final call attempts; field period ends | 210
Research staff responsible for conducting the telephone interviews will be trained on administration of the interview, including processes for ensuring participant privacy. The training will cover the advance letter, interview discussion guides, the rationale behind specific questions, the types of responses expected for open-ended questions, the expected length of each segment of the discussion guide, responses to frequently asked questions, and whether and how to provide additional information if a respondent does not understand or misunderstands a question. The advance letter will include a confidentiality statement assuring respondents that none of their comments will be attributable to them individually or to their organization, that their participation is voluntary, and that their decision to participate will have no impact on their RWHAP funding. Respondents will also be assured that data collected during the interview will be analyzed and reported in aggregate and will not be identifiable at the individual or organizational level. Interviewers will provide contact information in case the respondent has any subsequent questions or concerns. During the interview, the interviewers will take notes on paper interview guides; these notes will be treated as sensitive documents at each stage of the process, from data collection to data entry by the research team, and respondents' names will not be written on the interview guides.
4. Tests of Procedures or Methods to Be Undertaken

We reviewed previous interview guides to identify the structure and types of questions that have been used successfully with similar populations. With that as our starting point, all questions included in these draft interview guides were developed specifically for this study. We will pretest the interview guides with one volunteer for each type of RWHAP grantee (one Part A and one Part B grantee) and with a mix of provider types other than the VA (one public community health center and one other clinic), for a total of fewer than nine respondents. The pretest will mirror the data collection strategy planned for the main data collection to the extent possible. Interviewers will track the length of each pretest interview to verify that the burden estimates are accurate. We will refine the interview guides based on feedback from the pretest.
5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The following people have contributed to the study design and to the design of the interview guides:
Ms. Adelle Simmons, senior program analyst and contracting officer’s technical representative at the Office of Health Policy, Office of the Assistant Secretary for Planning and Evaluation in the U.S. Department of Health and Human Services, (202) 690-5924
Dr. Boyd Gilman, senior health researcher at Mathematica Policy Research and project director, (617) 301-8974
Dr. Margaret Hargreaves, senior health researcher at Mathematica Policy Research and qualitative data collection lead on the project, (617) 301-8994
Dr. Karen Bogen, senior survey researcher at Mathematica Policy Research and consultant on the qualitative data collection task, (617) 674-8355
Ms. Barbara Carlson, senior statistician and associate director of statistical services at Mathematica Policy Research and lead statistician on the project, (617) 674-8372
Ms. Melanie Au, researcher at Mathematica Policy Research and interviewer, (202) 264-3459
Ms. Jung Kim, researcher at Mathematica Policy Research and interviewer, (609) 936-3253
Ms. Cicely Thomas, research analyst at Mathematica Policy Research and interviewer, (609) 936-3265
Ms. Vanessa Oddo, research analyst at Mathematica Policy Research and interviewer, (617) 715-6934