Supporting Statement B:
Development and Testing of an HIV Prevention Intervention
Targeting Black Bisexually-Active Men
New OMB Application
OMB No. 0920-XXXX
Contact information
Technical Monitor: Lisa Belcher, PhD
1600 Clifton Rd, NE, Mailstop E-37
Atlanta, GA 30333
Telephone: 404-639-5369
Fax: 404-639-1950
E-mail: fcb2@cdc.gov
Respondent Universe
The respondents providing the information for the proposed project are Black men who have both male and female sexual partners and who live in one of the study cities (Chicago, Los Angeles, or Philadelphia). In each study city, 240-260 Black men who report having had both male and female sex partners in the past 12 months will be recruited.
CSU: In addition to the eligibility criteria above, this site will recruit men with a recent history of incarceration.
Overview of Sampling Method
There is no sampling frame available for this study, and the number of men who have sex with men and women (MSMW) in each city is not known. Further, it is well documented that this is a 'hidden population' that will be difficult to reach through typical venue-based recruitment strategies. Consequently, the three sites will use a strategy similar to respondent-driven sampling (RDS), which is described below.
The sample will be recruited using chain-referral techniques based on principles of RDS, a technique developed to approximate probability sampling for hidden populations. Like snowball sampling, RDS uses chain referral; in addition, it uses information about referral probabilities and network size to develop a sampling frame and then weight the sample to more closely reflect population estimates (Heckathorn, 1997; Heckathorn, 2002). Like other forms of chain-referral or snowball sampling, RDS is based on the assumption that group members (in this case, Black MSMW) can most efficiently recruit other group members, including those who might not be reached through venue-based sampling. Although RDS has been shown to generate samples that are representative of injection drug user (IDU) populations, experience using this technique with men who have sex with men (MSM) has been mixed (Ramirez-Valles, Heckathorn et al. 2005; Abdul-Quader, Heckathorn et al. 2006). Because of these challenges, and because the purpose of this project is not to produce population estimates but rather to pilot and evaluate novel behavioral interventions, RDS procedures will be modified for this project.
As described in greater detail below, project staff will recruit initial participants (seeds). Each seed will be asked to recruit up to five eligible participants from his social network. After these new recruits complete the baseline survey, they will also be asked to recruit up to five participants each. After this second wave of recruitment, the chain will be ended and new chains started. Truncating the sampling chains will result in smaller clusters that can be randomly assigned to intervention and control conditions without greatly affecting the comparability of the two groups.
Sample Size Justification
The aim of the study is to determine the preliminary effectiveness of the three unique interventions. If these interventions show promising results, more rigorous evaluations will be conducted. The design and sample sizes reflect these considerations. Nonetheless, the sample size for each study is based on statistical power calculations. Further detail about statistical power is presented in Section 2.
Response Rates
As described above, there is no sampling frame available for this study, which precludes estimation of the number of eligible men in each city. However, we are able to estimate the eligibility and participation rates among men who are screened. The CARDS study was a cross-sectional survey conducted with 237 MSMW recruited through RDS in Philadelphia. Eligibility criteria were sex with a male and a female in the past year and self-identifying as Black or White. The eligibility rate was 80%. Because the recruitment methods are similar, we expect comparable rates in the proposed study; however, because this study has more stringent criteria, we have estimated the eligibility rate among those who contact the study to be screened at 60% (by determining how many men in the CARDS study would have been eligible for the new study). We are also conducting follow-up assessments with respondents immediately upon completion of the intervention and again three months later (and at comparable time points for those in the control condition). We estimate 10% attrition at each time point, resulting in approximately 80% retention at the last assessment. These estimates are also based on the CARDS study.
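The arithmetic behind these estimates can be sketched as follows. This is an illustrative calculation only; the site target of 250 is the midpoint of the 240-260 recruitment range given above, and the function name is hypothetical.

```python
import math

def funnel(target_enrolled, eligibility_rate=0.60, attrition_per_wave=0.10):
    """Estimate the number of screening contacts needed for a given
    enrollment target, and the expected completers at the immediate and
    3-month follow-ups, using the CARDS-based rates described above."""
    screens_needed = math.ceil(target_enrolled / eligibility_rate)
    immediate = target_enrolled * (1 - attrition_per_wave)    # 90% retained
    three_month = immediate * (1 - attrition_per_wave)        # ~81% overall
    return screens_needed, round(immediate), round(three_month)

screens, post, final = funnel(250)
print(screens, post, final)  # 417 225 202
```

The two 10% attrition steps compound to 0.9 × 0.9 = 81% retention, which the text rounds to 80%.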
Sample Size Calculations
The primary outcome variable for the three studies will be participant self-report of frequency of unprotected sex in the past 90 days. The two secondary outcome variables will be: 1) any unprotected sex in the past 90 days (yes/no), and 2) number of sexual partners reported in the past 90 days. Each of these variables includes both male and female sex partners. Because the studies will use chain referral sampling and social networks will be randomly assigned to study conditions, sample size and analytic methods need to take into account correlation of observations within social network (intraclass correlation). Each of the proposed studies includes measurements at baseline and 3 months after the completion of the intervention. Two studies have an intervention and a control condition, and one study has two intervention groups and one control group.
To determine the number of groups (social networks) and members needed to have adequate power to detect an intervention effect, we needed to estimate: 1) the intraclass correlation (the magnitude of the correlation within social networks), 2) the over-time correlation at the member level (the correlation between a participant’s baseline level of risk behavior and his 3-month level of risk behavior), 3) the variance of the distribution of the outcome variable, and 4) the size of the intervention effect that is clinically important to be able to detect. In the tables below, we present calculations for the number of social networks needed for at least 80% power to detect differences of varying size between treatment and control at 3 months, adjusting for the baseline measurement as a covariate. We chose a critical alpha level of 0.10 rather than the conventional 0.05 because the purpose of these studies is to identify promising interventions for further study. Sample size estimates are based on a formula in Murray (1998) for group-randomized trials:
\[
\Delta = \bigl(t_{\alpha/2} + t_{\beta}\bigr)\,\sqrt{\frac{2\bigl[\sigma_m^2\,(1 - r^2) + m\,\sigma_g^2\bigr]}{mg}} \qquad (1)
\]
where Δ is the detectable difference, i.e., the smallest difference between study conditions that would be declared significant based on a critical level of 0.10 and power of at least 80%. In the numerator of the formula, σ_m² is the member component of variance, (1 − r²) is the adjustment to the member component of variance due to covariates (the baseline measurement), σ_g² is the group component of variance, and t_{α/2} and t_β are critical values from the t-distribution. In the denominator of the formula, m is the number of members per group and g is the number of groups per condition. The intraclass correlation coefficient (ICC; at 3 months, adjusted for baseline) can be computed as ICC = σ_g² / (σ_g² + σ_m²). Variance components in the numerator of formula (1) can be derived using the ICC and the variance of the binomial distribution for any unprotected sex (yes/no), and the ICC and variance of the zero-inflated negative binomial distribution for frequency of unprotected sex and number of partners reported.
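The detectable-difference calculation can be sketched in code as follows. This is a simplified illustration that substitutes normal critical values for the t critical values in the formula (a full implementation would use the t-distribution with the trial's degrees of freedom); the variance components in the example are hypothetical round numbers chosen to yield an ICC of .05, not values from the CARDS data.

```python
from math import sqrt
from statistics import NormalDist

def detectable_difference(var_member, var_group, r, m, g, alpha=0.10, power=0.80):
    """Smallest difference declared significant at two-sided alpha with the
    given power: the member variance is deflated by the squared baseline
    correlation r, the group variance is inflated by members per group m,
    and g is the number of groups per condition."""
    z = NormalDist().inv_cdf
    crit = z(1 - alpha / 2) + z(power)   # normal approximation to t values
    return crit * sqrt(2 * (var_member * (1 - r ** 2) + m * var_group) / (m * g))

def icc(var_group, var_member):
    """Intraclass correlation: between-group share of total variance."""
    return var_group / (var_group + var_member)

print(round(icc(20.0, 380.0), 2))  # 0.05
# Adding groups per condition shrinks the detectable difference:
d30 = detectable_difference(380.0, 20.0, r=0.40, m=3, g=30)
d60 = detectable_difference(380.0, 20.0, r=0.40, m=3, g=60)
assert d60 < d30
```

The sketch reproduces the qualitative behavior in the tables below: more groups per condition, or a stronger over-time correlation, yields a smaller detectable difference.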
Table 1 presents descriptive statistics and intraclass correlations from the CARDS data. The CARDS study, conducted from December 2007 to June 2008 in Philadelphia, PA, involved 237 Black and White MSMW recruited through respondent-driven sampling. Eligibility criteria were sex with a male and a female in the last year and self-identifying as Black or White. Sample size calculations will be based on these estimates, with rates and means for the control condition based on the rates and means from the CARDS data.
Table 1. Means, standard deviations, and ICCs from CARDS data

Variable | Mean | Standard Deviation | ICC
--- | --- | --- | ---
Number of times had unprotected sex (past 90 days) | 9.898 | 20.323 | .0399
Number of partners reported (past 90 days) | 6.495 | 7.912 | 0
Any unprotected sex (yes/no) | .5949 | .4916 | 0
Table 2 presents sample size calculations for number of unprotected acts reported for a 90-day period. We again varied the over-time correlation at the member level, the average number of members per group, the number of members per condition, and the difference between the control and intervention groups to determine the impact on the detectable difference.
Table 2. Number of groups and members per condition needed for at least 80% power to detect the specified difference between treatment and control conditions in frequency of unprotected sex during the past 90 days
ICC | Over-time correlation (member) | Members per group | Number of groups per condition | Total Sample Size | Total Sample Size if 20% LTF | M1 | M2 | Absolute Detectable Difference | % Reduction
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
.05 | .40 | 3 | 30 | 180 | 225 | 10 | 7.03 | 2.97 | 29.74
.05 | .40 | 3 | 31 | 186 | 233 | 10 | 7.07 | 2.93 | 29.26
.05 | .40 | 3 | 32 | 192 | 240 | 10 | 7.12 | 2.88 | 28.80
.05 | .60 | 3 | 30 | 180 | 225 | 10 | 7.34 | 2.66 | 26.6
.05 | .60 | 3 | 31 | 186 | 233 | 10 | 7.38 | 2.62 | 26.2
.05 | .60 | 3 | 32 | 192 | 240 | 10 | 7.43 | 2.58 | 25.8
.05 | .20 | 10 | 8 | 160 | 200 | 10 | 6.31 | 3.69 | 36.9
.05 | .20 | 10 | 9 | 180 | 225 | 10 | 6.52 | 3.48 | 34.8
.05 | .20 | 10 | 10 | 200 | 250 | 10 | 6.70 | 3.30 | 33.0
.05 | .60 | 10 | 8 | 160 | 200 | 10 | 6.59 | 3.41 | 34.1
.05 | .60 | 10 | 9 | 180 | 225 | 10 | 6.79 | 3.22 | 32.2
.05 | .60 | 10 | 10 | 200 | 250 | 10 | 6.95 | 3.05 | 30.5
Note: ICC=intraclass correlation; LTF=lost to follow-up; M1=mean frequency of unprotected sex in the control group; M2=mean frequency of unprotected sex in the treatment group; Absolute detectable difference=the smallest difference that would be declared statistically significant given the Type I and Type II error rates.
Table 3 presents sample size calculations for any unprotected sex, past 90 days. To demonstrate the impact of the assumptions made about the average number of members per group, the number of members per condition, and the difference between the control and intervention groups, we varied these parameters. The last two columns of the table present the absolute detectable difference and the reduction as a percentage. Because all participants will report unprotected sex at baseline (as a criterion for eligibility for the study), the baseline measurement will not be used in this analysis and the over-time correlation is not used in sample size formulae.
Table 3. Number of groups and members per condition needed for at least 80% power to detect the specified difference between treatment and control conditions in report of any unprotected sex in the past 90 days
ICC | Members per group | Number of groups per condition | Total Sample Size | Total Sample Size if 20% LTF | P1 | P2 | Absolute Detectable Difference | % Reduction
--- | --- | --- | --- | --- | --- | --- | --- | ---
.05 | 3 | 30 | 180 | 225 | .600 | .410 | .190 | 31.7
.05 | 3 | 31 | 186 | 233 | .600 | .413 | .187 | 31.2
.05 | 3 | 32 | 192 | 240 | .600 | .416 | .184 | 30.7
.05 | 10 | 8 | 160 | 200 | .600 | .368 | .232 | 38.7
.05 | 10 | 9 | 180 | 225 | .600 | .381 | .219 | 36.5
.05 | 10 | 10 | 200 | 250 | .600 | .393 | .207 | 34.5
Note: ICC=intraclass correlation; P1=proportion reporting any unprotected sex in the control group; P2=proportion reporting any unprotected sex in the treatment group; Absolute detectable difference= the smallest difference that would be declared statistically significant given the Type I and Type II error rates.
Table 4 presents sample size calculations for number of sexual partners reported for the past 90 days, in a format identical to the previous two tables.
Table 4. Number of groups and members per condition needed for at least 80% power to detect the specified difference between treatment and control conditions in number of sexual partners during past 90 days
ICC | Over-time correlation (member) | Members per group | Number of groups per condition | Total Sample Size | Total Sample Size if 20% LTF | M1 | M2 | Absolute Detectable Difference | % Reduction
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
.05 | .40 | 3 | 30 | 180 | 225 | 6 | 4.18 | 1.82 | 30.3
.05 | .40 | 3 | 31 | 186 | 233 | 6 | 4.21 | 1.79 | 29.8
.05 | .40 | 3 | 32 | 192 | 240 | 6 | 4.24 | 1.76 | 29.4
.05 | .60 | 3 | 30 | 180 | 225 | 6 | 4.37 | 1.63 | 27.1
.05 | .60 | 3 | 31 | 186 | 233 | 6 | 4.40 | 1.60 | 26.7
.05 | .60 | 3 | 32 | 192 | 240 | 6 | 4.43 | 1.58 | 26.3
.05 | .20 | 10 | 8 | 160 | 200 | 6 | 3.74 | 2.26 | 37.6
.05 | .20 | 10 | 9 | 180 | 225 | 6 | 3.87 | 2.13 | 35.5
.05 | .20 | 10 | 10 | 200 | 250 | 6 | 3.98 | 2.02 | 33.7
.05 | .60 | 10 | 8 | 160 | 200 | 6 | 3.91 | 2.09 | 34.8
.05 | .60 | 10 | 9 | 180 | 225 | 6 | 4.03 | 1.97 | 32.8
.05 | .60 | 10 | 10 | 200 | 250 | 6 | 4.13 | 1.87 | 31.1
Note: ICC=intraclass correlation; M1=mean for number of partners in the control group; M2=mean for number of partners in the treatment group; Absolute detectable difference= the smallest difference that would be declared statistically significant given the Type I and Type II error rates.
Procedures for the Collection of Information
Training for Study Personnel
All study personnel will complete training on protecting human subjects. In addition, all data collection staff will receive an intensive training on interviewing procedures, protection of confidentiality and handling adverse situations. They will also participate in a detailed review of the content and intent of all interview questions and will practice asking the screening questions and the interviewer-administered portion of the questionnaire.
Recruitment Procedures
Consistent with RDS procedures, sampling in each study site will begin with the selection of an initial set of group members, commonly referred to as seeds, who will form the first wave of study participants. Sites will select approximately six seeds, diverse in age, for the first round of recruitment. Sites will use a range of strategies to locate and recruit seeds from various parts of their cities. First, seeds will be identified in consultation with the community-based organizations with which the sites are collaborating. Second, sites will ask their respective Community Advisory Board members to assist in recruiting seeds from their personal networks. Seeds will be screened according to the procedures described below. After completing the project's baseline data collection, each seed will receive a short training on recruiting other participants and will be given coupons to recruit three to five additional participants. Participants recruited by seeds will be asked to identify and recruit the next wave of participants. The recruitment chain will end after the second wave of recruits, and new seeds will be identified to start new chains. All men in a recruitment chain will be assigned to the same treatment condition.
All participant-recruiters will be asked to recruit only men they know personally (such as a sexual partner, friend, relative, or co-worker). A man a participant 'knows' is defined as an individual whom the participant: 1) knows by name (and who knows the participant's name); 2) knows how to contact; and 3) has been in contact with in the past 6 months. The individual may or may not be a sexual partner. Study staff will not be directive in telling participants whom to recruit. To make it more difficult for recruits to falsify information to gain admission to the study, the precise eligibility criteria will not be revealed to participants. Participants are free to recruit anyone they know, so long as those individuals are Black, 18 years of age or older, male, residents of the study city's metropolitan area, and men who may be sexually active with men and women. These points will also be included on a reminder card that will be given to all participants who are asked to recruit (see Attachment 3: Recruitment reminder card).
Capping the number of recruits prevents any single individual from dominating the recruitment process. It also minimizes the likelihood that participants will recruit strangers in order to receive the recruitment reimbursement (see below); RDS is based on the assumption that participants recruit others they know. A spreadsheet will keep track of seeds and the persons they recruit during the course of enrollment, providing information on the men's social networks that can be used in the analysis. The goal is to recruit chains of 1 to 13 men, with a mean size of 3 to 4 men. We will start by asking each participant to recruit 3 new participants. If we find that recruitment chains are too small, we will increase the number of recruitment cards given to each participant to 4 or 5, until the optimum chain size is reached.
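The chain-size monitoring described above can be sketched as follows. This is an illustrative stand-in for the enrollment spreadsheet; the data structure and function names are hypothetical.

```python
from collections import Counter

def chain_sizes(recruited_by):
    """Given a mapping of participant ID -> recruiter ID (None for seeds),
    return the size of each recruitment chain (seed plus all recruits)."""
    def root(pid):
        # Walk each participant up to the seed that started his chain.
        while recruited_by[pid] is not None:
            pid = recruited_by[pid]
        return pid
    return Counter(root(pid) for pid in recruited_by)

def coupons_to_issue(sizes, target_mean=3.5, start=3, maximum=5):
    """If chains are running smaller than the target mean of 3-4 men,
    issue an extra coupon per participant (up to 5), as described above."""
    mean = sum(sizes.values()) / len(sizes)
    return start if mean >= target_mean else min(start + 1, maximum)

# Hypothetical tracking data: C100 and C200 are seeds.
recruited_by = {"C100": None, "C101": "C100", "C102": "C100",
                "C200": None, "C201": "C200"}
sizes = chain_sizes(recruited_by)
print(dict(sizes))              # {'C100': 3, 'C200': 2}
print(coupons_to_issue(sizes))  # mean chain size 2.5 < 3.5, so issue 4
```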
Recruitment will continue until the target sample size is achieved. Study staff will alert participants near the end of data collection that if they plan to recruit others, their recruits must enroll in the study before the recruitment cut-off date. One possible complication with using chain referral to sample MSMW is that recruiters may not know if the men in their networks have sex with men and women. To avoid asking recruiters to question their friends about their sexual partners, we will encourage men to recruit members of their network, even if they are not sure they are eligible.
The project will keep track of who enrolls in the study and who is recruited by whom through the coupon system. Each coupon will be printed on medium card stock and include the following information:
A non-stigmatizing name of the study
Study location(s), phone number for calling to obtain additional information about the study, and how to be preliminarily screened for eligibility.
A unique serial number. This number will be entered in the recruitment-tracking data base and linked to the recruiting participant’s unique study ID number, thus providing a means for linking individuals. This information on who recruited whom can be used in conjunction with the survey data to examine the characteristics of social networks.
Participant eligibility will be determined when a potential participant contacts the study office (either by telephone or in person). Study staff will assess each potential participant's eligibility using the screening instrument (Appendix 3a). This instrument also includes several items that are not required for study eligibility, to make it harder for the participant to determine exactly which criteria govern eligibility. Site-specific screening procedures are described below.
Site Specific Screening Procedures
PHMC: Screening will be conducted only over the phone. Men with a recruitment ticket who are interested in participating will call the project's toll-free phone number. Before asking any questions, the staff member conducting the screening will read a paragraph to the potential participant describing the research project and procedures, explain that participation is voluntary, and describe risks and confidentiality measures (Appendix 5: Screening script and questionnaire). The staff member will then ask if the potential participant is interested in continuing with the screening process. Because the screening is done over the telephone, PHMC requested (and has been granted) a waiver of documentation of consent for the screening procedure (45 CFR 46.117(c)). When eligible participants come in for the baseline interview, they will be given a written consent form and we will document informed consent at that time.
Nova: Potential participants will be screened for eligibility either in person or over the phone. Men who call in for screening will be asked the screening questions over the phone by a staff member, who will enter responses on a computer. Men who present for an in-person screening will be seated at an ACASI computer station, where they will complete the screening questions themselves. Use of the ACASI gives potential participants more privacy in answering the questions than having recruiters ask the questions and record their answers; recruiters will not be able to view or access the answers provided by the respondent. When the potential participant has completed the screening questions, the ACASI will display a screen saying, "Thank you for answering the questions." The software will automatically determine whether the respondent meets the eligibility requirements, without staff input or assistance. When the respondent lets the staff member know that he has completed the screening process, staff will enter a special password to learn whether or not to invite the respondent to enroll in the study; if the potential participant is ineligible, staff will not know the reason. If the man is not eligible, the staff person will thank him for his time and give him a $3 McDonald's gift card and condoms. All eligible men will be invited to enroll in the study immediately after completing the screener. Informed consent will be obtained from each participant following a thorough explanation of the study.
CSU: Potentially eligible inmates who participate in HIV prevention education services in jail will be informed verbally of the study eligibility requirements by study staff and asked about their interest in participating. Private screening sessions will be conducted with men who express an interest in the study. Following screening, post-release contact information will be collected from eligible men so that study staff can follow up with them once they are released from jail. Re-contacted individuals will undergo eligibility screening again, either by phone or in person. Jail-based recruitment will continue for the duration of the study.
CSU will also recruit subjects from the community using chain-referral methods. Study participants who complete the immediate post-intervention interview will be asked if they are willing to recruit individuals like themselves (recently incarcerated, bisexual African American men) for the study. Willing participants will receive three coupons to pass on to social contacts who may be eligible for the study. For interested potential participants who are not incarcerated at the time of screening, CSU will conduct screening either in person (at the study office) or over the phone. For men who screen over the phone, study staff will first ask for the coupon number to verify the referral source; if the coupon number is valid, staff will thank the man for his time, explain the purpose of the study and the screening process, and request permission to conduct an eligibility screening. Men who screen eligible on the telephone will be scheduled to visit the study office for enrollment into the study. When men present for screening in person, only those with a coupon will be screened; staff will thank them for their time, explain the purpose of the study and the screening process, and request permission to screen. If a potential participant does not want to complete the screening questions, the recruiter will thank him for his time. The serial number of the coupon will be checked against information contained in the tracking database to identify the participant responsible for the new recruit. If the individual screens eligible and enrolls in the study after informed consent, the coupon serial number will be entered into the tracking database.
Coupon numbers of men who call in but are not eligible will be invalidated, to discourage men from trying to get into the study by changing their answers to the screening questions.
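The coupon-tracking logic described above can be sketched as follows: each serial number is linked to the recruiting participant's study ID, and serials held by men who screen ineligible are invalidated so the same coupon cannot be reused with changed answers. All class and method names here are hypothetical.

```python
class CouponTracker:
    """Minimal sketch of the recruitment-tracking database."""

    def __init__(self):
        self.serial_to_recruiter = {}   # serial -> recruiting participant ID
        self.invalidated = set()

    def issue(self, serial, recruiter_id):
        self.serial_to_recruiter[serial] = recruiter_id

    def verify(self, serial):
        """Return the recruiter's study ID if the coupon is valid, else None."""
        if serial in self.invalidated:
            return None
        return self.serial_to_recruiter.get(serial)

    def invalidate(self, serial):
        self.invalidated.add(serial)

tracker = CouponTracker()
tracker.issue("000123", "P100")
print(tracker.verify("000123"))   # P100
tracker.invalidate("000123")      # caller screened ineligible
print(tracker.verify("000123"))   # None
```

The serial-to-recruiter mapping is what allows who-recruited-whom to be joined with the survey data when examining social network characteristics.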
Overview of Data Collection Procedures
The intervention will be evaluated using a randomized control group design. Data collection will occur at three time-points: baseline, immediately post-intervention and three months post-intervention (and comparable time points for those assigned to the control condition) using audio computer-assisted self-interview (ACASI). These assessments will collect information on risk behaviors, demographics, and psychological and socio-cultural variables using multi-site measures supplemented by local measures. A portion of the local measures will use computer-assisted personal interview (CAPI) techniques.
The data manager will assign each participant to the intervention or control condition after the completion of the telephone screening process. For seeds, the data manager will use a computer-generated assignment list and will assign the seed to the condition that is next on the list. For participants who are recruited by other participants, the data manager will look up the condition assigned to the seed for that cluster and will assign the new participant to that condition. A card indicating the group assignment will be sealed in an envelope to be opened by the interviewer upon completion of the baseline data collection.
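The assignment procedure above can be sketched as follows. The blocked shuffling shown here is one plausible way to build the computer-generated list (an assumption, not the study's specified method); the key property from the text is that recruits inherit their seed's condition, so each chain stays intact within one arm.

```python
import random

def make_assignment_list(n_seeds, conditions=("intervention", "control"), seed=2009):
    """Sketch of a computer-generated assignment list for seeds, blocked
    so the two conditions stay balanced as seeds are enrolled."""
    rng = random.Random(seed)
    order = []
    for _ in range(0, n_seeds, len(conditions)):
        block = list(conditions)
        rng.shuffle(block)
        order.extend(block)
    return order[:n_seeds]

def assign(participant_id, recruiter_condition, assignment_list, next_index):
    """Seeds take the next condition on the list; recruits inherit the
    condition already assigned to their chain's seed."""
    if recruiter_condition is None:            # participant is a seed
        return assignment_list[next_index], next_index + 1
    return recruiter_condition, next_index     # recruit: same as chain

assignments = make_assignment_list(6)
cond, idx = assign("S1", None, assignments, 0)
recruit_cond, idx = assign("R1", cond, assignments, idx)
print(cond == recruit_cond)  # True: the whole chain shares one condition
```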
Enrollment and Baseline Assessment: Following screening, each eligible participant will be given a choice of locations for his study enrollment and baseline assessment. At this visit, informed consent will be administered at the beginning of the appointment (Attachment 4). During the consent process, participants will be informed about the 1.5-hour time commitment to complete the baseline visit, and a thorough description of study activities and their attendant time commitments will be reviewed. Once enrolled, each study participant will be assigned a unique study identification (ID) number by the data manager. The ID number will start with the first letter of the city name, followed by a number assigned consecutively beginning with 100. The study ID number will be entered into the ACASI program for all assessments and will also be used to link locator information and to identify participants who are due for a follow-up interview.
After obtaining informed consent, the interviewer will assist the participant in starting the self-administered ACASI. Before administering the survey, the interviewer will give the participant a short tutorial on using the computer to answer questions for the ACASI component. When the participant feels comfortable, the interviewer will start the ACASI program for the participant. The interviewer will be readily available in an adjacent room or area to help the participant with any computer questions or problems that arise. This procedure worked well in a previous CDC-funded study with a similar population where computer-assisted technology was used. The computer programming will check for out-of-range and inconsistent answers and will assist with skip patterns. Immediately following the baseline assessment, the interviewer will collect participant locator information.
Follow-up Assessments: In each study site, the intervention will be completed over a 6-8 week period. After the last session of the intervention, participants will be asked to complete an immediate follow-up assessment focusing on psychosocial factors and intentions to reduce sexual risk. This assessment will take approximately 30 minutes and will be scheduled within 3 weeks of the last intervention session (7 to 9 weeks after the baseline interview for the control group). Three months later, respondents will be asked to return to complete the final assessment, which will focus on psychosocial factors and sexual risk behavior and will take approximately 45 minutes. We will aim to schedule the 3-month follow-up visit from one week before to two weeks after the target follow-up date (the primary window). If a participant is unable to complete the visit during that period, he may be scheduled up to 6 weeks beyond the target date (the secondary window). An Excel spreadsheet will be used to record and track these data collection windows. If the last day of the secondary window falls on a weekend or holiday, the assessment will be completed on the last working day before the end of that window. If a participant does not complete the visit before the end of the secondary window, he will be considered 'lost to follow-up' for that assessment point.
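The window rules above can be sketched with simple date arithmetic. This is an illustrative sketch only; the target date is hypothetical, and the secondary window is assumed here to begin the day after the primary window closes.

```python
from datetime import date, timedelta

def follow_up_windows(target: date, holidays=()):
    """Primary window: 1 week before to 2 weeks after the target date.
    Secondary window: from the day after the primary window up to 6 weeks
    beyond the target; if its last day falls on a weekend or holiday, it
    is pulled back to the last working day before it."""
    primary = (target - timedelta(weeks=1), target + timedelta(weeks=2))
    secondary_end = target + timedelta(weeks=6)
    while secondary_end.weekday() >= 5 or secondary_end in holidays:
        secondary_end -= timedelta(days=1)
    return primary, (target + timedelta(weeks=2, days=1), secondary_end)

# Hypothetical target follow-up date:
(p_start, p_end), (s_start, s_end) = follow_up_windows(date(2010, 6, 15))
print(p_start, p_end)  # 2010-06-08 2010-06-29
```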
Prior to each assessment, the interviewer will give the participant a short tutorial on using the computer to answer questions for the ACASI component. When the participant feels comfortable, the interviewer will start the ACASI program. The interviewer will be readily available in an adjacent room or area to help the participant with any computer questions or problems that arise.
Quality Control/Assurance
Participants will enter answers to close-ended survey questions directly into a computer program as part of the ACASI interview. The ACASI / CAPI program will include checks for out-of-range and logically inconsistent answers, minimizing response and data-entry errors. To assess the quality and consistency of data collection, interview responses will be reviewed monthly to identify questions that are being skipped or answered incorrectly or inconsistently. A codebook of survey variables, response categories, and frequency distributions from survey responses will be developed. Frequency distributions will be examined for outliers and missing data; cross-tabulations will be examined to identify logically inconsistent responses that were not detected by the ACASI/CAPI program. Procedures for handling outliers and logical inconsistencies (e.g., recode, drop) and missing data (e.g., drop cases, impute values) will be developed.
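The monthly review described above can be sketched as follows: flag out-of-range values, missing responses, and one example of a logical inconsistency that a cross-tabulation would catch. The variable names and ranges are hypothetical illustrations, not the study's actual codebook.

```python
def qc_flags(record, ranges):
    """Return a list of data-quality flags for one survey record."""
    flags = []
    for var, (lo, hi) in ranges.items():
        value = record.get(var)
        if value is None:
            flags.append(f"{var}: missing")
        elif not lo <= value <= hi:
            flags.append(f"{var}: out of range ({value})")
    # Cross-check: cannot report unprotected acts with zero partners.
    if (record.get("unprotected_acts") or 0) > 0 and record.get("partners") == 0:
        flags.append("inconsistent: unprotected acts reported with 0 partners")
    return flags

# Hypothetical plausible ranges for two outcome variables:
ranges = {"partners": (0, 200), "unprotected_acts": (0, 500)}
print(qc_flags({"partners": 0, "unprotected_acts": 5}, ranges))
# ['inconsistent: unprotected acts reported with 0 partners']
```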
Safeguards will be in place to detect individuals who try to re-enroll into the study. Because we need accurate names and addresses in order to locate participants for follow-up, all participants will be asked to show some form of identification each time they come in for an interview. Names and birthdates of new enrollees will be checked against the list of participants to make sure they have not already been interviewed. Another safeguard is that the projects employ a small group of study interviewers, so that it is likely they would recognize individuals who try to re-enroll into the study.
Quality assurance procedures will ensure that the intervention is implemented with fidelity and that evaluation measures are gathered properly to minimize possible sources of bias. To monitor the intervention, the intervention coordinator will regularly review the session summaries and case notes completed by the interventionists.
During recruitment for the evaluation study, study sites will perform monthly assessments of participant characteristics and adjust sampling strategies as needed. Variables to be examined include the age of participants and their ZIP code, indicating the part of the city where they live. If it is detected that most participants are in one age group or from one area of the city, the next group of seeds will be selected from ages and ZIP codes that are under-represented. Data managers at each site will examine outcome measures once one quarter of the sample has completed follow-up data collection, to identify any possible adverse reactions or unexpected outcomes.
Methods to Maximize Response Rates and Deal with Nonresponse
Response Rates and Retention
Due to the methods employed by this study, we expect a relatively high initial response rate. Previous studies using RDS find that one-half to two-thirds of coupons given to recruiters are returned by potential participants (Heckathorn, Semaan et al. 2002; Ramirez-Valles, Heckathorn et al. 2005; Stormer, Tun et al. 2006; Wang, Falck et al. 2007; Yeka, Maibani-Michie et al. 2006). Due to the incentive structure, recruiting participants are encouraged to give their coupons to social contacts who are likely to be eligible and interested in participating. Further, recruiting individuals will be able to inform potential participants about the key aspects of the study and its time commitments using the referral palm card. Generally, persons given a coupon who are either ineligible or not interested in participating will not contact the study. In the CARDS study, the eligibility rate was 80%. Due to these factors and based on evidence from previous studies, we estimate that 60% of the individuals who call in to be screened will become study participants (Ramirez-Valles, Heckathorn et al. 2005).
Follow-up and attrition are among the most challenging methodological issues for longitudinal designs such as the one we are proposing (Leonard, Lester et al. 2003). Staff at each study site have extensive experience locating and collecting follow-up data from different population groups. In previous longitudinal studies, each study site has demonstrated a retention rate of at least 80%.
These strong follow-up rates will be replicated in this study by implementing similar follow-up strategies. These include maintaining extensive locator information, including, whenever possible, email addresses and cell phone numbers as well as the names, addresses, and phone numbers of relatives and friends (see Attachment 12: Locator Information Form). Other strategies to enhance retention include frequent mailings and reminders, flexibility about interview times and locations, and reimbursement for the participant’s time. As discussed previously (Section A.9), the use of monetary incentives has been shown to improve retention of participants in similar projects (Kamb, Rhodes et al. 1998). The project interviewer will contact participants by email, mail, or phone to update locator information. When we are not able to make direct contact with a participant, we will call the friends and relatives listed on his locator sheet. The interviewer will not reveal to friends or relatives that the project focuses on bisexual men. After five unsuccessful call attempts, the respondent will be considered lost to follow-up. To protect participants’ privacy, mailing envelopes and telephone messages will not identify the project’s purpose. All locator information will be kept in a locked file cabinet in the office of the project’s data manager, separate from interview data files.
Despite these efforts, we expect that some participants will not answer every question on the interview and that some will not be available for follow-up interviews. We will use statistical techniques to impute values for missing data so that information from all recruited participants is included in the analysis. We will use direct maximum likelihood techniques to estimate the means and covariance matrix in the presence of missing values. With this technique, a series of regression equations is computed to estimate variables with missing cases, using as predictors all variables for which there are complete data (Allison, 2002). We will use the AMOS software package for this imputation; it adjusts the covariance matrix to compensate for the increased correlation between variables introduced by the imputation.
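The regression-based step of the approach described above can be illustrated in miniature. This is a hedged sketch only: the study itself will use direct maximum likelihood in AMOS, whereas the toy function below (a hypothetical helper, not part of the study software) shows the simpler idea of predicting a variable's missing values from the complete variables via ordinary least squares.

```python
# Illustrative sketch of regression-based imputation: for each column with
# missing values, regress it on the columns with complete data and replace
# the NaNs with predicted values. Assumes at least one complete column.
import numpy as np

def impute_by_regression(X):
    """Fill NaN entries in X by regressing each incomplete column on the
    complete columns (ordinary least squares with an intercept)."""
    X = np.asarray(X, dtype=float)
    complete_cols = ~np.isnan(X).any(axis=0)        # predictors with no missing data
    for j in np.where(~complete_cols)[0]:           # each column containing NaNs
        miss = np.isnan(X[:, j])
        A = np.column_stack([np.ones(len(X)), X[:, complete_cols]])
        # fit the regression on the observed rows only
        beta, *_ = np.linalg.lstsq(A[~miss], X[~miss, j], rcond=None)
        X[miss, j] = A[miss] @ beta                 # predicted values replace NaNs
    return X
```

Note that, unlike this single-imputation sketch, the maximum likelihood approach used in the study also corrects the covariance matrix for the dependence the imputation induces among variables.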
Assessing Non-Response Bias
The use of an eligibility screener will allow comparison of demographic and eligibility-related behavioral data between eligible and ineligible individuals and between those who agreed to participate and those who declined. Additionally, we will assess differential attrition by comparing the characteristics of participants retained in the study through all follow-up data collection with those of participants lost to follow-up. These characteristics will include demographics as well as behavioral risk. Chi-square tests and t-tests will be used, as appropriate to each measure.
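The attrition-bias checks described above can be sketched as follows. All data values are hypothetical, chosen only to show the shape of the two tests (a t-test for a continuous characteristic such as age, and a chi-square test for a categorical one such as reported risk behavior).

```python
# Illustrative sketch of differential-attrition checks: compare participants
# retained through follow-up with those lost, using standard tests.
from scipy import stats

# hypothetical ages for retained vs. lost-to-follow-up participants
retained_ages = [24, 31, 28, 35, 40, 29, 33]
lost_ages     = [22, 27, 30, 25, 26]
t_stat, t_p = stats.ttest_ind(retained_ages, lost_ages)

# hypothetical 2x2 table; rows: retained / lost, columns: risk behavior yes / no
contingency = [[18, 42], [9, 11]]
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)

# a p-value below the chosen alpha (e.g., 0.05) would flag a characteristic
# on which those lost to follow-up differ from those retained
```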
Accuracy and Reliability of Information Collected
The study sites will use ACASI to increase the reliability of the data collected. Previous studies have demonstrated that respondents are more likely to report engaging in sensitive behaviors in a computer-assisted self-interview than in a face-to-face format (Gribble, Miller et al., 1999; Des Jarlais et al., 1999; Turner et al.). The assessment includes multiple instruments that have been previously tested with this or similar populations and have acceptable reliability as determined through statistical evaluation. See Attachment 9 for a summary of the measures that have been applied with similar populations and the reliability statistics reported in the literature.
Generalizability
The aim of this study is to determine the preliminary effectiveness of three unique and novel behavioral HIV risk reduction interventions. If evidence suggests that an intervention may be effective, more rigorous evaluation in multiple cities will be conducted. True generalizability from a single randomized controlled trial testing an intervention targeted to this population would be very difficult to achieve due to the lack of a sampling frame, the hard-to-reach nature of the population, the strict eligibility criteria, and the geographic limits involved in testing individual-level interventions.
The study sites will take steps to ensure that participants are diverse in terms of age and geographic residence in the city. During recruitment, sites will perform monthly assessments of these participant characteristics and adjust sampling strategies as needed. If it is detected that most participants are in one age group or from one area of the city, the next group of seeds will be selected from ages and ZIP codes that are under-represented.
Test of Procedures or Methods to be Undertaken
The measures to be used in the questionnaire were chosen with input from community collaborators and the CABs in each city. This process helped to ensure that the questions are culturally appropriate and use language that can be easily understood. We presented the final data collection instruments to the CAB, received feedback, and addressed all concerns. We also piloted the data collection instrument with staff from the implementing agencies.
In addition, whenever possible, the investigators have selected measures that have been developed for and tested with populations of Black gay and bisexual men. Since there are few measures specifically for MSMW, we also included measures developed for MSM and Black heterosexual populations and adapted the language as needed. The following instruments contained within the assessment have been previously used with similar populations.
Instrument Name | Population Previously Used With | Publication | OMB Approved?
Social Network | African-American and Latino MSM | Pending | No
Lack of Support/Alienation | African-American and Latino MSM | Marks G, Millett GA, Bingham T, Bond L, Lauby J, Liau A, Murrill CS, Stueve A. (2009). Understanding differences in HIV sexual transmission among Latino and black men who have sex with men: The Brothers y Hermanos Study. AIDS and Behavior, 13(4), 682-90. | No
Disclosure | African-American and Latino MSM | Marks G, Millett GA, Bingham T, Bond L, Lauby J, Liau A, Murrill CS, Stueve A. (2009). Understanding differences in HIV sexual transmission among Latino and black men who have sex with men: The Brothers y Hermanos Study. AIDS and Behavior, 13(4), 682-90. | No
Privacy | Gay men and women | Mohr, J.J. & Fassinger, R.E. | No
Community Affiliation | African-American MSMW | MAALES Study – not yet published | No
Brief Symptom Inventory | African-American youth | Derogatis, L.R., Melisaratos, N. (1983). The Brief Symptom Inventory (BSI): an introductory report. Psychological Medicine, 13, 595-606. | No
Intimate Partner Violence | African-American men | Raj, A., Reed, E., Welles, S., Santana, M.C., & Silverman, J. (2008). Intimate Partner Violence Perpetration, Risky Sexual Behavior, and STI/HIV Diagnosis Among | No
Substance Use | African-American men and women | Cherpitel, C.J. (1995). Screening | No
Treatment Optimism | MSM | Kalichman, S., Eaton, L. | No
Self Efficacy for HIV Disclosure | MSM and MSMW | Parsons, J., Scrimshaw, E., Bimbi, D., Wolitski, R., Gomez, C., Halkitis, P. (2005). Consistent, inconsistent, non- | No
Spirituality | African-American and Latino MSM | Marks G, Millett GA, Bingham T, Bond L, Lauby J, Liau A, Murrill CS, Stueve A. (2009). Understanding differences in HIV sexual transmission among Latino and black men who have sex with men: The Brothers y Hermanos Study. AIDS and Behavior, 13(4), 682-90. | No
Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
The primary person responsible for the statistical aspects of this project and for data analysis is Dr. Sherri Pals of the CDC. The study design and data collection instruments were developed collaboratively by CDC, Nova University in Miami, FL, California State University in Los Angeles, CA, and Public Health Management Corporation in Philadelphia, PA. Each of the three organizations will collect data for this study and analyze the data generated by the study. The federal staff members involved in designing and implementing the study are listed below.
Greg Millett, MPH
Centers for Disease Control and Prevention
Division of HIV/AIDS Prevention
1600 Clifton Rd, NE, MS E37
Atlanta, GA 30333
Phone: (404) 639-1902
Heather Joseph, MPH
Centers for Disease Control and Prevention
Division of HIV/AIDS Prevention
1600 Clifton Rd, NE, MS E37
Atlanta, GA 30333
Phone: (404) 639-2636
hbj7@cdc.gov
Lisa Belcher, PhD
Centers for Disease Control and Prevention
Division of HIV/AIDS Prevention
1600 Clifton Rd, NE, MS E37
Atlanta, GA 30333
Phone: (404) 639-5369
fcb2@cdc.gov
Ann O’Leary, PhD
Centers for Disease Control and Prevention
Division of HIV/AIDS Prevention
1600 Clifton Rd, NE, MS E37
Atlanta, GA 30333
Phone: (404) 639-1901
AOLeary@cdc.gov
Cari Courtney-Quirk, PhD
Centers for Disease Control and Prevention
Division of HIV/AIDS Prevention
1600 Clifton Rd, NE, MS E37
Atlanta, GA 30333
Phone: (404) 639-1924
afv2@cdc.gov
Darrel Higa, PhD
Centers for Disease Control and Prevention
Division of HIV/AIDS Prevention
1600 Clifton Rd, NE, MS E37
Atlanta, GA 30333
Phone: (404) 639-1924
afv2@cdc.gov
Steve Flores, PhD
Centers for Disease Control and Prevention
Global AIDS Program
1600 Clifton Rd, NE, MS E37
Atlanta, GA 30333
Phone: (404) 639-1910
SFlores@cdc.gov
Sherri Pals, PhD
Centers for Disease Control and Prevention
Division of HIV/AIDS Prevention
1600 Clifton Rd, NE, MS E45
Atlanta, GA 30333
Phone: (404) 639-6147
Wayne Johnson, PhD
Centers for Disease Control and Prevention
Division of HIV/AIDS Prevention
1600 Clifton Rd, NE, MS E37
Atlanta, GA 30333
Phone: (404) 639-1932
wdj0@cdc.gov
REFERENCES
Abdul-Quader, A., D. Heckathorn, et al. (2006). "Effectiveness of respondent-driven sampling for recruiting drug users in New York City: findings from a pilot study." Journal of Urban Health 83(3): 459-76.
Heckathorn, D. D., S. Semaan, et al. (2002). "Extensions of Respondent-Driven Sampling: A New Approach to the Study of Injection Drug Users Aged 18–25." AIDS and Behavior 6(1): 55-67.
Kamb, M. L., F. Rhodes, et al. (1998). "What about money? Effect of small monetary incentives on enrollment, retention, and motivation to change behaviour in an HIV/STD prevention counselling intervention. The Project RESPECT Study Group." Sexually Transmitted Infections 74(4): 253-5.
Leonard, N., P. Lester, et al. (2003). "Successful recruitment and retention of participants in longitudinal behavioral research." AIDS Education and Prevention 15(3): 269-81.
Ramirez-Valles, J., D. Heckathorn, et al. (2005). "From networks to populations: the development and application of respondent-driven sampling among IDUs and Latino gay men." AIDS and Behavior 9(4): 387-402.
Stormer, A., W. Tun, et al. (2006). "An analysis of respondent driven sampling with Injection Drug Users (IDU) in Albania and the Russian Federation." Journal of Urban Health 83(6 Suppl): i73-82.
Wang, J., R. Falck, et al. (2007). "Respondent-driven sampling in the recruitment of illicit stimulant drug users in a rural setting: findings and technical issues." Addictive Behaviors 32(5): 924-37.
Yeka, W., G. Maibani-Michie, et al. (2006). "Application of respondent driven sampling to collect baseline data on FSWs and MSM for HIV risk reduction interventions in two urban centres in Papua New Guinea." Journal of Urban Health 83(6 Suppl): i60-72.