v4
April 23, 2015
2015 NHIS Incentives Test
The National Health Interview Survey (NHIS) has, like all surveys, experienced declining response rates. Examination of response rates over the past ten years shows that the overall response rate decreased by more than ten percentage points between 2004 and 2013, from 86.9% to 75.7% (National Center for Health Statistics, 2004-2013). Because the NHIS serves as the gold standard for many health measures, it is incumbent upon the National Center for Health Statistics (NCHS) to apply proven techniques to ensure the resulting data are of the highest possible quality. A series of activities intended to improve response rates and maintain data quality, with additional consideration for reducing survey costs, will be implemented this year. As one of these activities, the NHIS will, for the first time, include a test of respondent incentives. The primary purpose of the planned test is to examine the impact of incentives on response rates. We will also examine whether incentives reduce potential nonresponse bias and survey costs.
Employing a randomized controlled experimental design, we will send some families a $5 unconditional advance token incentive with the introductory letter. Further, a random half of families will be offered a $20 incentive for completion of the Family component and another $20 for completion of the Sample Adult component of the interview. Because the Sample Adult cannot be contacted until the Family component has been completed, a separate $20 incentive is offered for each component to maintain parity between the two response tasks.
Support exists in the literature for both the planned $5 unconditional advance token incentive and the $20 completion incentive. In general, advance token incentives of varying amounts are a well-known tool for boosting response rates. Although lower unconditional advance incentive amounts (e.g., $1 or $2) have been successful in increasing the likelihood of participation in very short, low-burden surveys (e.g., parents asked to provide their children’s addresses) (Mann, Lynn & Peterson, 2008), most studies with greater respondent burden (particularly relatively long in-person surveys) have not used such low incentives. A study that tested the impact of increasing prepaid incentives in single-dollar increments on response rates in a large national mail survey identified $5 as the unconditional incentive amount at which the return rate was maximized and plateaued; higher advance incentives ($6-$10) yielded no further gains (Trussell & Lavrakas, 2004). Most of the recent literature on completion incentives supports the $20 amount, but is based on telephone rather than face-to-face surveys. The one study examining completion incentives in a face-to-face survey was carried out as part of the in-home 2001 National Household Survey on Drug Abuse (now the National Survey on Drug Use and Health). Results from that split-sample experiment indicated that a $20 incentive was associated with both an increase in response rates and a decrease in cost per case (Eyerman, Bowman, Butler & Wright, 2005). Thus, although literature on completion incentives in comparable surveys is scarce, a $20 completion incentive appears to be the most effective strategy for increasing response rates in the NHIS.
The planned incentive experiment will be carried out in the Census regions overseen by the Denver, New York, and Philadelphia Census Regional Offices (ROs) during the three months of May, June, and July, and involves two components. The first component is a test of the impact of an unconditional advance cash mailing. The second component is a test of the impact of a promised completion incentive. Random assignment will occur independently to (and within) each of the two components, yielding a two-by-two factorial research design. For the first component, testing the impact of an unconditional advance incentive, a random half of families included in the experiment will be sent a $5 bill with the advance letter. For the second component, testing the impact of a completion incentive, a random half of families will receive $20 each for participation in the family and the sample adult modules, for a maximum of $40. (Note: the sample adult module cannot begin until the family module has been completed.) Families will be informed of the incentive in the advance letter, and will receive a $20 or $40 debit card mailed to them with the thank-you letter. [Completion incentives (paid out after the interview) were selected over prepaid incentives (paid out before the interview begins) because Census-internal restrictions on interviewers carrying money preclude prepayment at the start of the interview.] The maximum total incentive amount paid to any one family is $45: $5 prepaid, $20 for the family section, and $20 for the adult section. The sample child section is not part of the experimental design in this test, as the family and sample adult sections are of unique interest; restricting the design in this way also ensures that the other components have sufficient power. Depending on the outcome of this experiment, future testing may include incentivizing response to the child section.
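To make the four experimental cells concrete, the following minimal Python sketch enumerates the two-by-two design and the maximum incentive a family in each cell could receive. The dollar amounts restate the text above; the variable names are purely illustrative.

    # Enumerate the four cells of the 2x2 factorial design and the maximum
    # payout per family in each ($5 advance; $20 per completed module,
    # family + sample adult). Illustrative values and names only.
    ADVANCE = 5                 # unconditional advance incentive ($5 bill)
    COMPLETION_PER_MODULE = 20  # promised incentive per completed module
    MODULES = 2                 # family module and sample adult module

    for gets_advance in (False, True):
        for gets_completion in (False, True):
            total = (ADVANCE if gets_advance else 0) \
                    + (COMPLETION_PER_MODULE * MODULES if gets_completion else 0)
            print(f"advance={gets_advance!s:<5}  completion={gets_completion!s:<5}  "
                  f"max total = ${total}")
    # Maximum across cells: $5 + 2 * $20 = $45, matching the text.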
For both components of the planned incentive experiment, assignment to the test and control groups will be made via a software-generated pre-assigned table of random numbers, and will occur independently within each RO. To avoid scenarios in which next-door neighbors are assigned to different treatment groups (thus potentially complicating matters for the interviewers and leading to discontent among respondents), assignment to the different experimental conditions will occur at the segment level (which is typically equivalent to the block level). This minimizes the likelihood that participants living in close proximity will be assigned to different treatment conditions while maintaining the representativeness of the sample at the Regional Office and county levels.
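As a rough illustration of segment-level assignment, the Python sketch below draws the two factors independently for each segment, so every family in a segment shares one condition and next-door neighbors are never split across treatment groups. This is an assumption-laden stand-in, not the actual procedure: the production assignment uses the software-generated, pre-assigned table of random numbers described above, independently within each RO, and all names below are hypothetical.

    import random

    def assign_segments(segment_ids, seed=2015):
        """Hypothetical segment-level assignment to the 2x2 design."""
        rng = random.Random(seed)  # seeded so the assignment is reproducible
        assignment = {}
        for seg in segment_ids:
            assignment[seg] = {
                "advance": rng.random() < 0.5,     # $5 advance incentive factor
                "completion": rng.random() < 0.5,  # $20-per-module completion factor
            }
        return assignment

    # Example: assign ten placeholder segments within one RO.
    print(assign_segments([f"segment-{i:03d}" for i in range(10)]))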
Random assignment to the two components of the incentive experiment and to the treatment conditions within each component results in a clean 2x2 factorial design that maximizes analytic power and allows comparisons among the four cells on a number of indicators and outcomes. Response rate indicators we will examine include differences between the three treatment conditions and the control group in overall and module-specific response rates, completed interview rates, sufficient partial interview rates, and refusal rates. Data quality indicators we will examine include differences between the three treatment conditions and the control group in rates of “don’t know” and “refused” responses, and in population estimates of select outcome variables included in NCHS’ key indicator reports (e.g., health insurance coverage, failure to obtain needed medical care, cigarette smoking and alcohol consumption, and general health status). To assess the impact of incentives on sample composition, we will test for differences not only between the experimental groups described above, but also between survey completers and survey breakoffs, and between survey completers and national census data. Demographic characteristics of interest include age, race/ethnicity, and education level. Lastly, the primary cost indicator we will examine is the number of attempts (in-person visits and phone calls) required to obtain a completed interview (ascertainable using paradata) in each of the three treatment conditions and the control group.
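For the response-rate comparisons described above, a standard two-proportion z-test is one natural choice. The sketch below uses statsmodels to compare one treatment cell against the control cell; the counts are placeholders for illustration, not results from the experiment.

    from statsmodels.stats.proportion import proportions_ztest

    # Placeholder counts: completed interviews out of fielded families in a
    # treatment cell vs. the control cell (illustrative numbers only).
    completes = [2180, 2050]  # treatment, control
    fielded = [3050, 3050]

    z_stat, p_value = proportions_ztest(completes, fielded)
    print(f"response rates: {completes[0] / fielded[0]:.3f} vs "
          f"{completes[1] / fielded[1]:.3f}; z = {z_stat:.2f}, p = {p_value:.4f}")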
The approximate total sample size for the incentives experiment is 12,200 families, with roughly 40% of cases located in the Denver RO and 30% each in the New York and Philadelphia ROs. We anticipate a family response rate around 70%, which will result in approximately 8,500 completed interviews. Assuming 80% power, an alpha level of 0.05, and a design effect of 1, n = 8,500 cases allows detection of a change in response rates as low as 2 percentage points for the completion incentive component of the experiment. For the unconditional advance incentive component, the number of completed cases in the treatment and control groups is roughly 4,250 each, which provides sufficient statistical power to detect differences in response rates of around 2.9 percentage points under the same assumptions. For key substantive variables, we factored a design effect of 2.5 into the power calculations. Thus, for outcomes with prevalence rates of 10%, we will be able to detect changes in estimates as low as 2.5 percentage points; for more prevalent health conditions or outcomes, we will be able to detect changes in rates of between 3 and 4 percentage points. For rarer outcomes, such as rates of “don’t know” or “refused” responses, we will be able to detect even smaller changes, given that the variance of a proportion is maximized at 0.5 (50%) and decreases as the proportion moves away from 0.5.
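The minimum detectable differences quoted above follow from the standard normal-approximation formula for comparing two independent proportions. The Python sketch below implements that formula under stated assumptions (two-sided alpha of 0.05, 80% power, equal group sizes, design effect applied as a variance inflation factor); the figures in the text may rest on slightly different inputs (e.g., one-sided tests), so small discrepancies from the code's output are expected.

    from math import sqrt
    from scipy.stats import norm

    def mde_two_proportions(n_per_group, p, power=0.80, alpha=0.05, deff=1.0):
        """Minimum detectable difference between two independent proportions
        (normal approximation), inflated by the square root of the design effect."""
        z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value
        z_beta = norm.ppf(power)
        se = sqrt(2 * p * (1 - p) / n_per_group)
        return (z_alpha + z_beta) * se * sqrt(deff)

    # Advance-incentive component: ~4,250 completes per arm, ~70% response rate.
    print(mde_two_proportions(4250, 0.70))            # about 0.028 (2.8 points)

    # Substantive outcome with 10% prevalence and a design effect of 2.5.
    print(mde_two_proportions(4250, 0.10, deff=2.5))  # about 0.029 (2.9 points)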
References
National Center for Health Statistics. Survey Description, National Health Interview Survey, 2004-2013. Hyattsville, Maryland. 2004-2013.
Davern M, Rockwood TH, Sherrod R, Campbell S. Prepaid monetary incentives and data quality in face-to-face interviews: Data from the 1996 Survey of Income and Program Participation incentive experiment. Public Opinion Quarterly 2003;67(1):139-147.
Eyerman J, Bowman K, Butler D, Wright D. The differential impact of incentives on refusals: Results from the 2001 National Household Survey on Drug Abuse incentive experiment. Journal of Economic and Social Measurement 2005;30(2-3):157-169.
Gelman A, Stevens M, Chan V. Regression modeling and meta-analysis for decision making: A cost-benefit analysis of incentives in telephone surveys. Journal of Business and Economic Statistics 2003;21(2):213-225.
Lepkowski JM, Mosher WD, Groves RM, West BT, Wagner J, Gu H. Responsive design, weighting, and variance estimation in the 2006-2010 National Survey of Family Growth. In: Vital and Health Statistics, Series 2: Data Evaluation and Methods Research; 2013.
Lynn P. The impact of incentives on response rates to personal interview surveys: Role and perceptions of interviewers. International Journal of Public Opinion Research 2001;13(3):326-336.
Mann SL, Lynn DJ, Peterson AV. The "downstream" effect of token prepaid cash incentives to parents on their young adult children's survey participation. Public Opinion Quarterly 2008;72(3):487-501.
Singer E, Kulka RA. Paying respondents for survey participation. Studies of Welfare Populations: Data Collection and Research Issues 2002:105-128.
Singer E, Ye C. The Use and Effects of Incentives in Surveys. Annals of the American Academy of Political and Social Science 2013;645(1):112-141.
Trussell N, Lavrakas PJ. The influence of incremental increases in token cash incentives on mail survey response: Is there an optimal amount? Public Opinion Quarterly 2004;68(3):349-367.