Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis
Author(s): Allan H. Church
Source: The Public Opinion Quarterly, Vol. 57, No. 1 (Spring, 1993), pp. 62-79
Published by: Oxford University Press on behalf of the American Association for Public Opinion Research
Stable URL: http://www.jstor.org/stable/2749438
ESTIMATING THE EFFECT OF INCENTIVES
ON MAIL SURVEY RESPONSE RATES:
A META-ANALYSIS
ALLAN H. CHURCH
Abstract This article reports the results of a meta-analysis of
38 experimental and quasi-experimental studies that implemented
some form of mail survey incentive in order to increase response
rates. A total of 74 observations or cases were classified into one
of four types of incentive groups: those using prepaid monetary
or nonmonetary rewards included with the initial survey mailing
and those using monetary or nonmonetary rewards as conditional
upon the return of the survey. Results were generated using an
analysis of variance approach. The overall effect size across the
74 observations was reported as low to moderate at d = .241.
When compared across incentive types, only those surveys that
included rewards (both monetary and nonmonetary) in the initial
mailing yielded statistically significant estimates of effect size (d
= .347, d = .136). The average increase in response rates over
control conditions for these types of incentives was 19.1 percentage points and 7.9 percentage points, respectively. There was no evidence of any impact for those incentive types offering rewards contingent upon
the return of the survey.
Introduction
Data collection in the form of mailed questionnaires, long accepted as
the standard method for large sample surveys, has been implemented
across such diverse fields as marketing, advertising, business, and the
political and social sciences (Aiken 1988; Alwin and Campbell 1987;
Dillman 1978; Greenberg and Manfield 1957; Groves 1987; Peterson
ALLAN H. CHURCH is a Ph.D. candidate in organizational psychology, Department of Social and Organizational Psychology, Teachers College, Columbia University. The author would like to acknowledge the contribution of R. Gary Bridge of Teachers College, Columbia University, in the initial stages of this research. Without his enthusiasm for bringing together discrepant results and challenging assumptions about the mail survey literature, this article would not have been possible. The author would also like to thank the POQ editor, Howard Schuman, and the anonymous reviewers for their excellent comments on this article.
Public Opinion Quarterly Volume 57:62-79 © 1993 by the American Association for Public Opinion Research. All rights reserved. 0033-362X/93/5701-0005$02.50
1975; Shosteck and Fairweather 1979). Despite the mailed questionnaire's high degree of utility, however, many survey practitioners
have been plagued by response rate problems (e.g., Eisinger et al.
1974). One popular method for increasing response rates that has received significant attention in the literature is the use of incentives.
Researchers and practitioners often implement some kind of a reward, compensation, or token value to increase the respondent's motivation to complete the survey (e.g., Armstrong and Overton 1971;
Bevis 1948; Dohrenwend 1970; Gelb 1975; Gunn and Rhodes 1981;
Lockhart 1984; Sudman and Ferber 1974; Wolfe and Treiman 1979).
Variations on the types of rewards implemented across studies in the
literature have been those of form (monetary and nonmonetary) and
timing (sent initially with the questionnaire or contingent on the returned response). Although often referred to as a single factor in mail
survey methodology, incentives can and should be classified into four
distinct types for effect analyses, based on the crossed results of the
two dimensions of form and timing. Thus the four groupings or types
would consist of monetary and nonmonetary incentives mailed with
the survey and monetary and nonmonetary incentives given on the
return of the questionnaire (henceforth to be referred to as incentive
or study types MI, OI, MR, and OR, respectively).
Despite the frequent usage of these kinds of rewards or incentives
for increasing response rates in mail survey work, there has been little
consistency among the specific effect sizes reported in the literature.
This disparity of results, which is due in large part to the lack of differentiation among these four incentive types and their relative effects on response rates, makes it difficult for practitioners to plan their research.
Thus, the purpose of this article is to fill a need in the literature
by providing an applied meta-analysis for the area of monetary and
nonmonetary incentives. The results of this research will yield several
estimates of the specific effects on increasing mail survey return rates
for each type of incentive approach. Although other authors have provided similar kinds of incentive-related effect size summaries, ranging
from purely qualitative (e.g., Kanuk and Berenson 1975; Linsky 1975)
to more sophisticated quantitative approaches (e.g., Armstrong and
Lusk 1987; Eichner and Habermehl 1981; Fox, Crask, and Kim 1988;
Goyder 1982; Heberlein and Baumgartner 1978; Yu and Cooper 1983),
none of these researchers has detailed the relative differential effects
of response rates for each of the four incentive types. What also separates my study from those reported in the past is the larger number of
observations, and therefore power, in the determination of the metaeffects of these four approaches to the use of incentives in mail
surveys.
The initial hypotheses concerning the following meta-analysis of the
effect of incentives on mail survey response rates were as follows:
H1. The overall effect for all incentives tested will yield significant
differences from nonincentive controls or comparison groups.
H2. All four incentive types (MI, MR, OI, OR) will yield significant and/or meaningful increases in overall response rates relative to respective controls or comparison groups.
H3. Monetary incentives (MI and MR) will yield greater overall increases (effects) than nonmonetary incentives (OI and OR).
H4. Prepaid monetary incentives (MI) will yield the greatest effect
over controls or comparison groups.
H5. These results will generalize across different populations and
years in which studies were conducted.
Method
BACKGROUND
The meta-analysis framework outlined by Hunter, Schmidt, and Jackson (1982) was implemented for hypothesis testing and literature synthesis. Meta-analysis techniques, although differing in their specifics,
often yield important findings and allow for greater generalizability and
application of the results to future research (Armstrong and Lusk 1987;
Fox, Crask, and Kim 1988; Guzzo, Jette, and Katzell 1985; Houston
and Ford 1976). Other methods of quantitative review, such as multiple
regression analysis (Armstrong 1975; Goyder 1982; Heberlein and
Baumgartner 1978) and cumulative chi-squares (Yu and Cooper 1983)
have also been applied to divergent mail survey data sets with varying
degrees of success.
PROCEDURE
Following Hunter, Schmidt, and Jackson's (1982) meta-analysis framework, the literature search consisted of locating all published studies
concerning the use of incentives in mail surveys. Several seed sources
(Psychological Abstracts, Sociological Abstracts, Public Opinion
Quarterly, and the Journal of Marketing Research) were referenced
manually, using abstracts and indexes to establish the initial set of
studies to be included in the analysis. References from each study
located were then used to determine further sources, following an iterative process until all potential sources of data were exhausted.
Although the potential error bias involved in missing the results of
nonpublished data was recognized, it was not considered critical due
to the wide range of effect sizes, nonsignificant results, and multiple
factors and methodologies employed across the final set of studies
collected. As Fox, Crask, and Kim (1988) have noted, many of these
types of incentive-related studies report greater percentages of nonsignificant results relative to other types of academic research. Thus, the
impact of this type of bias, although still unknown, was probably small
to minimal on the findings presented below.
The initial criterion for inclusion in this meta-analysis stipulated that
the study report at least one response rate in conjunction with monetary or nonmonetary incentives in a mailed survey. Monetary incentives consisted of cash or checks, while nonmonetary incentives
were defined as those studies that used any extra item as an incentive
above and beyond the normal procedure for most mail surveys. Those
studies in which only the results of the survey were offered to the
subject as an incentive to participate were not included in the analysis,
since this is often considered good research practice and would therefore introduce an uncontrolled factor (Levine and Gordon 1958).
The final criterion for inclusion in the meta-analysis was the presence of a control or comparison group against which the incentive
condition could be indexed. Although many studies reporting on the
effects of incentives were collected and entered into the data base,
only those with experimental control or quasi-experimental comparison groups were included in the final analysis for determining effect
size estimates and variance proportions.
Once collected, all studies were coded for analysis using a method
derived from prior research and theory (Armstrong and Lusk 1987;
Eichner and Habermehl 1981; Fox, Crask, and Kim 1988; Heberlein
and Baumgartner 1978; Hunter, Schmidt, and Jackson 1982; Yu and
Cooper 1983). The result of this coding process yielded a unit of analysis consisting of the record or observation made from each reported
pair of response rates. Because all observations included in the final
data set were compared on a treatment versus control/comparison
basis, the relative impact of other pertinent variables known to affect
survey returns was minimized (Armstrong and Lusk 1987; Fox, Crask,
and Kim 1988). Thus, the effects of survey length, content, return
postage type, color, number of items, and so on, were theoretically
controlled in the comparison score between control and incentive responses, since both conditions presumably had the same physical qualities and mailing procedures.
Information on these variables was still collected where possible,
however, in order to test the degree to which these elements would
serve as incentive effects moderators. Therefore, the categorical or
character (i.e., nonnumeric) variables of author(s), journal or source of
information, basic sample description, brief description of the manifest
content of the survey, identified survey sponsor, type of nonmonetary
incentive, and timing (initial or on return) of the incentive were included in each observation in the data base. Numerical variables consisted of year published, number of survey pages, total number of
questionnaires sent and number returned, level of monetary incentive,
control sample size, control group response rate, incentive group sample size, and incentive group response rate.
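For concreteness, one coded record might be represented as in the sketch below. This is a hypothetical reconstruction in Python; the article names the variables but not the actual file layout used in its analyses (which were apparently run in SAS; SAS Institute 1985), so the field names here are illustrative only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """One record: a treatment/control pair of response rates plus the
    categorical and numerical fields described in the text."""
    authors: str
    source: str                     # journal or source of information
    sample_description: str
    incentive_type: str             # "MI", "MR", "OI", or "OR"
    year_published: int
    n_control: int
    control_response_rate: float
    n_incentive: int
    incentive_response_rate: float
    monetary_value: Optional[float] = None  # cash value, when monetary
    nonmonetary_item: Optional[str] = None  # e.g., "pen", "lottery entry"
    survey_pages: Optional[int] = None      # missing for 49% of records
```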
Although efforts were made to include other potentially useful variables (e.g., number of items in the questionnaire, sampling method,
saliency of the survey topic), many of the authors were too scant in
their methodological descriptions to determine these details with any
accuracy. On analysis, these variables would have had values for only
30-40 percent of the total number of observations, rendering them
useless for overall analyses. Similarly, two of the variables initially
coded and entered (manifest content of the survey and the identified
survey sponsor) were ultimately excluded from analysis as well, due
to missing data. The variable representing the number of questionnaire
pages was also dropped from the overall analyses with a missing data
rate of 49 percent. This variable, however, was included in some simple correlation matrices to look for possible relationships between survey length and other factors in the data base.
Techniques described by Hunter, Schmidt, and Jackson (1982) and
Hunter and Schmidt (1990) were used to compute the effect sizes and
associated variance and error measures. Weighted means by sample
size, unweighted means, and medians for several of the variables were
also computed. An analysis of variance (ANOVA) approach (Cliff
1987; SAS Institute 1985; Tabachnick and Fidell 1989) was used for
determining the overall significance of effects and for testing the different incentive type means.
DEPENDENT VARIABLES
Several different but related dependent measures were initially generated from the pair of response rates (control and incentive) entered
into the data base for comparison purposes with previous articles and
reviews. Each of these dependent variables was then examined for the
extent to which it contributed any new information about the effects
in question. Most of the variables did not. Thus, values for the relative
incremental cost per individual response achieved (Berry and Kanouse
1987; Cox 1976; Hackler and Bourgette 1973; Kephart and Bressler
1958; Robinson and Agisim 1951; Zusman and Duby 1987), the percent
decrease (pdn) in nonresponse rate (Armstrong 1975; Linsky 1975),
the number of percentage points (pntir) increase in response rates (Kanuk and Berenson 1975; Zusman and Duby 1987), and the percent
increase (pir) in response rate were computed for each observation
but ultimately were dropped from further analyses.
The dependent variable indexing change actually used for analysis
was the formula effect size (d) (Hunter, Schmidt, and Jackson 1982).
Effect size, defined as the "difference between means in standard
score form, i.e., the ratio of the difference between means to the standard deviation" (Hunter, Schmidt, and Jackson 1982, p. 97), was used
because it is a standard measure of the impact of an experimental
manipulation against which comparisons can be made across other
types of research (Cohen 1977; Hedges and Olkin 1985). Thus, the
effect size for a given study or series of studies answers the question,
"How large was the treatment effect?" (Hunter and Schmidt 1990, p.
336), not just whether or not the observed effect was significantly
different from zero. Independent variables included in the analysis
consisted of study type (MI, MR, OI, OR), year of publication, sample
type, and publication source of results.
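For a single observation, d can be computed directly from the pair of response rates. The sketch below assumes one common formulation for a dichotomous outcome (returned vs. not returned), with the pooled Bernoulli standard deviation as the denominator; the article cites Hunter, Schmidt, and Jackson (1982) but does not report which variant of the denominator was used.

```python
import math

def effect_size_d(p_inc: float, n_inc: int, p_ctl: float, n_ctl: int) -> float:
    """Difference between the incentive and control response proportions
    in standard-score form. The pooled Bernoulli SD in the denominator
    is an assumption; other denominators are possible."""
    p = (n_inc * p_inc + n_ctl * p_ctl) / (n_inc + n_ctl)  # pooled proportion
    return (p_inc - p_ctl) / math.sqrt(p * (1.0 - p))

# Hypothetical observation: 53 percent returns with the incentive (n = 400)
# versus 34 percent without (n = 400).
print(round(effect_size_d(0.53, 400, 0.34, 400), 3))  # ~0.383
```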
Results
DATA BASE DEMOGRAPHICS
The literature search yielded a total of 38 studies reporting on the
effects of different incentives. From this set of studies, 74 individual
observations (records) had data for both incentive and comparison
groups enabling the computation of the dependent measure d. Thus, 74
observations were coded and entered into the data base for subsequent
analysis, representing information from a total of 38 different published
sources (see the Appendix for a listing of studies included in the meta-analysis). All reported results are based on effects from these 74 records. The mean and median sample sizes used across observations
were 664 and 329 survey respondents, respectively.
There was a wide variety of target populations for survey respondents across the 74 observations, ranging from urban household residents to out-of-state drivers. In order to test the differential effects of
incentive type on these target populations, observations were classified
into five groups based on the descriptions of the respondents. These
groups consisted of general population (55 percent), students (8 percent), technical people (10 percent), business people (administrators and executives, 16 percent), and medical personnel (11 percent).
The breakdown of data by study type was as follows: 43 records
were taken from studies in which monetary incentives were mailed
with the questionnaire (MI), 9 records were from studies using monetary incentives as contingent on the returned survey (MR), 12 records
were from studies that implemented other nonmonetary incentives sent
initially with the questionnaire (OI), and the final 10 observations were
taken from research that used other incentives and required a returned
response form (OR).
Listings of the data showed levels of monetary incentives offered
by researchers ranging from $.01 to $5.00. When standardized and
adjusted to 1989 dollars using the Consumer Price Index (CPI; 1974,
1989), the cash incentives showed considerable variation, ranging between $.036 and $9.29, with a median of $.86 and a mean of $1.38.
The nonmonetary incentives were more interesting in their diversity,
however, with such items as entry in a lottery, donations to charity,
coffee, books, pens, key rings, golf balls, tie clips, stamps, and even
a turkey (e.g., Knox 1951).
The studies contributing observations for analysis had been conducted across a time span of over 50 years, distributed across time periods as follows: 13
percent before 1960, 19 percent between 1961 and 1970, 34 percent
between 1971 and 1980, and 34 percent after 1981. Although no author
or set of authors dominated in contributions to the data set (the most
being 8 percent for any single researcher), over half the observations
did originate from two of the seed journals: the Journal of Marketing
Research (26 percent) and Public Opinion Quarterly (31 percent). The
remainder of the records were well distributed among 14 other sources.
ESTIMATES OF EFFECT SIZE
Overall, an overwhelming majority of the total 74 observations (89
percent) yielded some improvement in response rates relative to the
control or comparison group. Interestingly but not surprisingly, while
only 1 percent provided absolutely no evidence of an effect, 10 percent
of the incentive conditions actually yielded decreases in their survey
returns. Table 1 shows the unweighted and weighted means and the
maximum, minimum, and median values for the incentive and control
response conditions as well as for the dependent measure of effect size
(d) used in the subsequent analysis. The weighted formula effect size
for all 74 observations was d = .241. This represents an overall average increase in response rate of 13.2 percentage points between the
incentive and control conditions. The median effect size was slightly
lower than the weighted mean effect at d = .231. The formula variance
was computed to be s2 = .035, with a sampling error of .0061. This
yielded an estimate of the true standard deviation of the effect size at
s = .170 (formulas for these computations were taken from Hunter,
Schmidt, and Jackson [1982]). If the underlying effect was basically the same across all studies, this estimate of dispersion should have been close to zero, or very small relative to the mean effect size. Since d = .241 is greater than s = .17 by a factor of less than two, these figures provided evidence for differential effects across incentive types, moderator variables, or large, unaccounted-for error components.

[Table 1. Response rates and effect size: unweighted and weighted means, maximum, minimum, and median values for the incentive and control response conditions and for d.]
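These quantities follow from the standard Hunter-Schmidt computations, sketched below using the common large-sample approximation for the sampling error variance of d (the article does not reproduce its formulas, so this is an assumption). As a check, plugging the reported mean effect and mean sample size into the sampling-error term gives (4/664)(1 + .241²/8) ≈ .0061, matching the value reported above.

```python
def hs_decomposition(ds, ns):
    """Hunter-Schmidt decomposition of observed effect-size variance
    into a sampling-error component and a residual ("true") component.
    Sketch only; uses the large-sample sampling-error approximation."""
    total_n = sum(ns)
    d_bar = sum(n * d for d, n in zip(ds, ns)) / total_n              # weighted mean d
    s2 = sum(n * (d - d_bar) ** 2 for d, n in zip(ds, ns)) / total_n  # observed variance
    n_bar = total_n / len(ns)                                         # mean sample size
    var_e = (4.0 / n_bar) * (1.0 + d_bar ** 2 / 8.0)                  # sampling error variance
    true_sd = max(s2 - var_e, 0.0) ** 0.5                             # SD of true effects
    return d_bar, s2, var_e, true_sd
```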
A test for the homogeneity of effect sizes (Hedges and Olkin 1985; Hunter and Schmidt 1990) confirmed this finding, indicating that the 74 observations were probably not representing a common phenomenon and therefore should not be pooled into one overall estimate for analysis (i.e., some moderating variable or variables existed that were accounting for differential effects among various groups of observations). As Hedges and Olkin have also noted, however, when sample sizes are very large (ranging among these 74 observations from 20 to 5,000) and the d values do not vary greatly, "it is worth studying the variation in the values of d, since rather small differences may lead to large values of the test [homogeneity] statistic . . . [and] the investigator may elect to pool the estimates" (1985, p. 123). Thus, these data were subjected to further analyses to look for main effects, interactions, and moderating variables and their respective impact on response rates.
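A homogeneity statistic of the Hedges-Olkin form can be sketched as follows, again assuming the large-sample variance approximation for each d. Under homogeneity, Q is referred to a chi-square distribution with k - 1 degrees of freedom, so a large Q signals moderators.

```python
def homogeneity_q(ds, ns):
    """Q statistic: the spread of the observed d values relative to what
    sampling error alone would produce."""
    v = [(4.0 / n) * (1.0 + d ** 2 / 8.0) for d, n in zip(ds, ns)]  # per-study variance
    w = [1.0 / vi for vi in v]                                      # inverse-variance weights
    d_bar = sum(wi * di for wi, di in zip(w, ds)) / sum(w)
    q = sum(wi * (di - d_bar) ** 2 for wi, di in zip(w, ds))
    return q, len(ds) - 1  # compare q to chi-square with k - 1 df
```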
Simple correlations between the dependent measure (d) and the continuous independent variables of number of pages in survey, year published, and adjusted incentive value yielded no significant results at all. When examined by incentive type, however, the MI group of observations yielded significant correlations between the CPI-adjusted incentive value and the dependent measure effect size (r = .45; t = 3.23, p < .01, df = 1,41). Again, there were no significant relationships between changes in response rate and either survey length or year of publication among these separate correlations computed by incentive type.
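As a consistency check, the reported t statistic follows from the usual significance test for a correlation computed over the 43 MI observations:

\[
t = \frac{r\sqrt{N-2}}{\sqrt{1-r^{2}}} = \frac{.45\sqrt{43-2}}{\sqrt{1-.45^{2}}} \approx 3.23.
\]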
An analysis of variance (ANOVA) was conductedin orderto (1) test
the presence of any overall and/or interaction effects while including
all possible independent variables for analysis among the 74 observations collected and (2) to best control for inflated Type I error rates
(Cliff 1987; SAS Institute 1985; Tabachnick and Fidell 1989). The dependent variable used for this analysis was the formula effect size (d).
The independent variables included in the analysis consisted of
study type (MI, MR, OI, OR), a coded version of year of publication,
the type of respondents used in the mail survey, and the journal source
from which the study was drawn (all of these variables were based on
the distributions described above). The associated two-way interaction
effects between these variables were also included for exploratory purposes. Incentive value and survey length were not included in these
analyses because the data did not exist across all observations. Other
than for those specific studies using monetary rewards (types MI and
MR), very few researchers included the actual cash value for the incentive used. Survey length was simply a missing variable in many cases.
While the overall ANOVA yielded a significant F(32,41) = 3.61, p
< .001, R2 = .737, only the variable representing study type (MI, MR,
OI, OR) resulted in a significant univariate main effect, with F(3,41)
= 28.11, p < .001. Effect sizes were not significantly different across
various types of respondent populations, journal source, or year of
publication (as the simple correlational analysis suggested). Likewise, the tests for interactions among the independent variables were also nonsignificant. Thus, only study type was selected for further
exploration of differences.
Investigating the data by study type showed differential outcomes
for each incentive group. Effect sizes for the four types were computed
at d = .347, d = .085, d = .136, and d = .020, for the incentive groups
MI, MR, OI, and OR, respectively. Table 2 contains the associated
computational elements and variance estimates, as well as the
weighted mean incentive and control response rates for each of the
four incentive conditions. These effect sizes represent comparable average increases in incentive versus control response rates of 19.1, 4.5,
7.9, and 1.2 percentage points for the four respective types of rewards.
Table 2. Effect Size, Associated Variance Estimates, and Weighted Mean Response Rates for Four Types of Incentives

                                         Monetary             Nonmonetary
                                     Initial   Return     Initial   Return
                                      (MI)      (MR)       (OI)      (OR)
Effect size (d)                      .347**     .085      .136**     .020
Observed variance                     .018      .016       .010      .014
Sampling error                        .006      .015       .006      .004
Estimate of true standard deviation   .112      .032       .057      .100
Response incentive (%)                53.0      41.1       36.8      30.1
Response comparison (%)               34.2      29.1       28.9      33.1
N                                     (43)       (9)       (12)      (10)

** p < .001.

Interestingly, further analysis also revealed that the least-squares effect size means (or marginal means, as they are often called; SAS Institute [1985]) calculated for incentive types MR and OR, both offering rewards on return, were not significantly different from zero (t = 1.25 and t = 0.56, respectively). Thus, these two types of incentives had no statistically significant effect or substantive impact on response rates. Next, post hoc comparisons were conducted using a Bonferroni approach, whereby simple t-tests were controlled for inflated error rates by adjusting the p values for acceptance by the number of comparisons being made (Howell 1982). These t-test comparisons of the four incentive type means showed that the computed effect for the MI group of observations (d = .347) was significantly greater than for all three other types. The effect size (d = .136) for the nonmonetary initial mailing studies (OI) was significantly greater than the effect (d = .020) for the nonmonetary incentives (OR) provided on return as well, but not significantly greater than the mean effect (d = .085) of the monetary on return (MR) studies. Details of these comparisons can be found in table 3.
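The mechanics of the Bonferroni adjustment amount to dividing the acceptance level by the number of comparisons (.05/6 for the six pairwise tests here). The sketch below illustrates that logic with hypothetical per-observation effect sizes; the t values in table 3 were computed from the ANOVA least-squares means rather than from raw two-sample tests, so this shows the general approach only.

```python
from itertools import combinations
from scipy import stats

# Hypothetical per-observation effect sizes by incentive type.
groups = {
    "MI": [0.41, 0.30, 0.35, 0.38, 0.29],
    "MR": [0.10, 0.05, 0.12, 0.06],
    "OI": [0.15, 0.11, 0.18, 0.09],
    "OR": [0.01, 0.04, -0.02, 0.03],
}

pairs = list(combinations(groups, 2))
cutoff = 0.05 / len(pairs)  # Bonferroni-adjusted acceptance level
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs. {b}: t = {t:.2f}, significant: {p < cutoff}")
```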
Discussion
Clearly, the findings of the present meta-analysis have demonstrated
that incentives do indeed have substantial positive effects on mail survey return rates. The results of the analysis of variance indicated a
significant overall effect for the use of any incentive in increasing mail
survey responses, thus supporting H1.
An examination of the appropriate means and variances, however,
suggested that H1 is not particularly meaningful given the degree to which effect size estimates differed among the four incentive types.
Any overall effect for incentives could only be meaningful if all those
types of rewards had some degree of positive impact on response rates.
Given this criterion, the results of the more detailed study type mean comparisons indicated that it is, in fact, inappropriate and incorrect to assume that any reward or incentive used in a mail survey will result in improved response rates. Rather, there was evidence of significant effects, that is, meaningful increases in response rates, only for the two initial mailing incentive conditions (MI and OI) and not for those where the incentive was made contingent on returned responses (MR and OR). Thus, only incentives provided with the initial mailing of the survey instrument had any significant or meaningful positive impact on response rates, which served to disprove H2.

Table 3. Results of Multiple Comparisons for Effect Size (d) by Incentive Type

                  t
MI vs. MR      3.67**
MI vs. OI      4.82**
MI vs. OR      8.13**
MR vs. OI       -.64
MR vs. OR        .85
OI vs. OR      2.21*

NOTE.-df = 1,41.
* p < .05.
** p < .001.
Similarly, there was no support for H3, stating that monetary-related improvements in response rates would be greater overall than those based on nonmonetary incentives. The obtained pattern of significant results solely for MI and OI suggests that the relative timing of the incentive is more important than the nature or form of what is included. It appears that people respond more favorably to incentives that are included with the questionnaire rather than those that are offered as contingent on the completed return and good faith of the mail survey practitioner. This is, perhaps, the most important finding of this meta-analysis.
Hypothesis 4 was supported by the analysis results, replicating other findings in the literature (e.g., Linsky 1975; Yu and Cooper 1983). Those studies in the data base offering prepaid monetary incentives yielded by far the greatest benefits over comparison groups, with an average increase of 19.1 percentage points and an effect size of d = .347. This difference between incentive and control conditions represents a 65 percent mean increase in response when using a monetary incentive with the initial mailing. In comparison, Yu and Cooper (1983), using a more limited number of observations, reported an average response enhancement of 16 percentage points, or an average increase of around 58 percent.
Also, based on the strong correlation (r = .45) between effect size and cash value of the incentive, it would seem that the greater the value, the greater the increase in the response rate. Interestingly, Yu and Cooper (1983) in their analysis noted an even stronger positive correlation between incentive value and increases in returns (r = .61). While they concluded that the relationship between these variables was very strong and linear in nature, other researchers have posited a diminishing returns model to best represent this effect (e.g., Armstrong 1975; Fox, Crask, and Kim 1988). Further analysis and modeling still needs to be conducted, however, to clearly delineate and provide a more refined estimable function of the true relationship between incentive value and increased response rates.
Although the magnitude of the prepaid monetary effect of d = .347 may seem small to medium-size relative to standard qualitative conventions (Cohen 1977) and other reported meta-analysis effect coefficients in the experimental literature (e.g., Guzzo, Jette, and Katzell
1985), simple exploratory comparisons between hypothesized incentive and control return rates suggested that differences of 70 or more
percentage points would be necessary to yield effect sizes greater than
1.0. Thus, the very nature of the percentage statistic provides an upper
limit to the maximum effect size value obtainable from research using
response rates as the primary dependent variable. Furthermore, meta-analyses conducted on other mail survey response enhancers (e.g.,
first class postage, prenotification by mail, university sponsorship, and
follow-up letters) have produced effect sizes of this magnitude as well
(Armstrong and Lusk 1987; Fox, Crask, and Kim 1988; Yu and Cooper
1983). From this perspective, an effect size of d = .347 seems impressively large. And, given that it represents an average increase of 19
percentage points, it is certainly meaningful enough for most practitioners to consider adopting as a response rate enhancement methodology.
The last hypothesis, H5, proved somewhat difficult to test given the
previously cited problems with missing values and scanty documentation of methods. It was possible, however, to test the relative contribution and possible interaction effects of year of publication, study type,
and sample composition to the overall reported effects from the 74
observations. As noted in the results and originally hypothesized, only
incentive or study type yielded a significant contribution to understanding the variability in effect sizes. None of the other main or interaction effects of the independent variables in the analysis was significant. Thus, the results of this meta-analysis do seem to generalize
across different samples and time periods.
It is important to remember, however, that the relative effects of
other variables not tested in these analyses could have interacted with
incentive type to enhance or inhibit the results (Jones 1979; Jones and
Lang 1980; Wiseman 1973). The inability to test these variables or
factors is simply a problem of missing data. Those authors that have
attempted complex regression models in the past, predicting response
rates from numerous indicators, have also encountered this problem,
often using reduced sets for analysis, which result in data fragmentation and severe multicollinearity problems (Eichner and Habermehl
1981; Goyder 1982; Heberlein and Baumgartner 1978). Unfortunately,
this problem of the relative contribution of related variables will continue until there are enough studies in the literature with fully detailed
and documented methodology sections to test the specific combinations of compounded effects.
In conclusion, the results of this meta-analysis suggest that both
monetary and nonmonetary incentives mailed with the survey instrument should provide improved return rates worth the investment of
time and effort involved in their implementation. It is clear, however,
that monetary incentives included in the initial mailing (MI) should be
the method of choice for improving respondent return rates. The use of
prepaid cash rewards for completing surveys had the most significant
impact on increasing response rates among the observations in this
meta-analysis.
There is also adequate support for including nonmonetary incentives
with the initial mailing. Even though there were practically as many
kinds of incentives offered as studies reviewed, there is a sizable if
moderate effect (an additional 7.9 percentage point average increase in returns
over control conditions) when including some token of appreciation
with the survey. The decision is left to the mail survey practitioner,
however, as to whether this additional 7.9 percentage points is worth investing
in the use of a nonmonetary incentive.
It is also apparent from the results of this meta-analysis that practitioners should avoid using incentive systems that offer rewards, either monetary or otherwise, as contingent upon a returned questionnaire. These types of incentive plans are simply not worth the energy
involved. They offer neither statistical nor meaningful enhancements
to response rates with any consistency.
Appendix
Studies Included in Meta-Analysis
Biner, P. M. 1988. "Effects of Cover Letter Appeal and Monetary Incentives
on Survey Response: A Reactance Theory Application." Basic and Applied
Social Psychology 9:99-106.
Blumberg, H. H., C. Fuller, and A. P. Hare. 1974. "Response Rates in Postal
Surveys." Public Opinion Quarterly 38:113-23.
Blythe, B. J. 1986. "Increasing Mailed Survey Responses with a Lottery."
Social Work Research and Abstracts 22:18-19.
Brennan, R. D. 1958. "Trading Stamps as an Incentive in Mail Surveys."
Journal of Marketing 22:306-7.
Cook, J. R., N. Schoeps, and S. Kim. 1985. "Program Responses to Mail
Surveys as a Function of Monetary Incentives." Psychological Reports
57:366.
Denton, J. J., C. Tsai, and P. Chevrette. 1988. "Effects on Survey Responses
of Subjects, Incentives, and Multiple Mailings." Journal of Experimental
Education 56:77-82.
Erdos, P. L. 1970. Professional Mail Surveys. New York: McGraw-Hill.
Furse, D. H., and D. W. Stewart. 1982. "Monetary Incentives versus Promised Contribution to Charity: New Evidence on Mail Survey Response."
Journal of Marketing Research 19:375-80.
Furse, D. H., D. W. Stewart, and D. L. Rados. 1981. "Effects of Foot-in-the-Door, Cash Incentives, and Follow-ups on Survey Response." Journal of
Marketing Research 18:473-78.
Godwin, R. K. 1979. "The Consequences of Large Monetary Incentives in
Mail Surveys of Elites." Public Opinion Quarterly 43:378-87.
Golden, L. L., W. T. Anderson, and L. K. Sharpe. 1980. "The Effects of
Salutation, Monetary Incentive, and Degree of Urbanization on Mail Questionnaire Response Rate, Speed, and Quality." In Advances in Consumer
Research, ed. K. B. Monroe, pp. 292-98. Ann Arbor, MI: Association for
Consumer Research.
Goodstadt, M. S., L. Chung, R. Kronitz, and G. Cook. 1977. "Mail Survey
Response Rates: Their Manipulation and Impact." Journal of Marketing
Research 14:391-95.
Hackler, J. C., and P. Bourgette. 1973. "Dollars, Dissonance, and Survey
Results." Public Opinion Quarterly 37:276-81.
Hansen, R. A. 1980. "A Self-Perception Interpretation of the Effect of Monetary and Nonmonetary Incentives on Mail Survey Respondent Behavior."
Journal of Marketing Research 17:77-83.
Hopkins, K. D., B. R. Hopkins, and I. Schon. 1988. "Mail Surveys of Professional Populations: The Effects of Monetary Gratuities on Return Rates."
Journal of Experimental Education 56:173-75.
Hopkins, K. D., and J. Podolak. 1983. "Class-of-Mail and the Effects of Monetary Gratuity on the Response Rates of Mailed Questionnaires." Journal of
Experimental Education 51:169-70.
Houston, M. K., and R. W. Jefferson. 1976. "The Negative Effects of Personalization on Response Patterns in Mail Surveys." Journal of Marketing
Research 13:114-17.
Hubbard, R., and E. L. Little. 1988. "Promised Contributions to Charity and
Mail Survey Responses." Public Opinion Quarterly 52:223-30.
Huck, W. W., and E. M. Gleason. 1974. "Using Monetary Inducements to
Increase Response Rates from Mailed Surveys: A Replication and Extension
of Previous Research." Journal of Applied Psychology 59:222-25.
Kephart, W. M., and M. Bressler. 1958. "Increasing the Response to Mail
Questionnaires: A Research Study." Public Opinion Quarterly 22:122-32.
Kimball, A. E. 1961. "Increasing the Rate of Return in Mail Surveys." Journal
of Marketing 25:63-64.
McDaniel, S. W., and C. P. Rao. 1980. "The Effect of Monetary Inducement
on Mailed Questionnaire Response Quality." Journal of Marketing Research 17:265-68.
Maloney, P. W. 1954. "Comparability of Personal Attribute Scale Administration with Mail Administration with and without Incentive." Journal of Applied Psychology 38:238-39.
May, R. C. 1960. "Which Approach Gets the Best Returns in Mail Surveys?"
Industrial Marketing 45:50-51.
Mizes, J. S., E. L. Fleece, and C. Roos. 1984. "Incentives for Increasing
Return Rates: Magnitude Levels, Response Bias, and Format." Public
Opinion Quarterly 48:794-800.
Nederhof, A. J. 1983. "The Effects of Material Incentives in Mail Surveys:
Two Studies." Public Opinion Quarterly 47:103-11.
Newman, S. W. 1962. "Differences between Early and Late Respondents to
a Mailed Survey." Journal of Advertising Research 2:27-39.
Paolillo, J. G. P., and P. Lorenzi. 1984. "Monetary Incentives and Mail Questionnaire Response Rates." Journal of Advertising 13:46-48.
Pressley, M. M., and W. L. Tullar. 1977. "A Factor Interactive Investigation
of Mail Survey Response Rates from a Commercial Population." Journal of
Marketing Research 14:108-11.
Pucel, D. J., H. F. Nelson, and D. N. Wheeler. 1971. "Questionnaire Follow-up Returns as a Function of Incentives and Responder Characteristics."
Vocational Guidance Quarterly 19:188-93.
Robertson, D. H., and D. H. Bellenger. 1978. "A New Method of Increasing
Mail Survey Responses: Contributions to Charity." Journal of Marketing
Research 15:632-33.
Schewe, C. D., and N. G. Cournoyer. 1976. "Prepaid vs. Promised Monetary
Incentives to Questionnaire Response: Further Evidence." Public Opinion
Quarterly 40:105-7.
Shuttleworth, F. K. 1931. "A Study of Questionnaire Technique." Journal of
Educational Psychology 22:652-58.
Watson, J. J. 1965. "Improving the Response Rate in Mail Research." Journal
of Advertising Research 5:48-50.
Whitmore, W. J. 1976. "Mail Survey Premiums and Response Bias." Journal
of Marketing Research 13:46-50.
Wiseman, F. 1973. "Factor Interaction Effects in Mail Survey Response
Rates." Journal of Marketing Research 10:330-33.
Wotruba, T. R. 1966. "Monetary Inducements and Mail Questionnaire Response." Journal of Marketing Research 3:398-400.
Zusman, B. J., and P. Duby. 1987. "An Evaluation of the Use of Monetary
Incentives in Postsecondary Survey Research." Journal of Research and
Development 20:73-78.
References
Aiken, L. R. 1988. "The Problem of Nonresponse in Survey Research." Journal of
Experimental Education 56:116-19.
Alwin, D. F., and R. T. Campbell. 1987. "Continuity and Change in Methods of
Survey Data Analysis." Public Opinion Quarterly 51:S139-S155.
Armstrong, J. S., and E. J. Lusk. 1987. "Return Postage in Mail Surveys: A
Meta-Analysis." Public Opinion Quarterly 51:233-48.
Armstrong, J. S., and T. Overton. 1971. "Brief vs. Comprehensive Descriptions in
Measuring Intentions to Purchase." Journal of Marketing Research 8:114-17.
Armstrong, S. J. 1975. "Monetary Incentives in Mail Surveys." Public Opinion
Quarterly 39:111-16.
Berry, S. H., and D. E. Kanouse. 1987. "Physician Response to a Mailed Survey: An
Experiment in Timing of Payment." Public Opinion Quarterly 51:102-14.
Bevis, J. C. 1948. "Economical Incentive Used for Mail Questionnaire." Public
Opinion Quarterly 12:492-93.
Cliff, N. 1987. Analyzing Multivariate Data. San Diego, CA: Harcourt Brace
Jovanovich.
Cohen, J. 1977. Statistical Power Analysis for the Behavioral Sciences. 2d ed. New
York: Academic Press.
Consumer Price Index. 1974. Economic Report of the President. Transmitted to the
Congress January 1974, together with the Annual Report of the Council of
Economic Advisers. Washington, DC: Government Printing Office.
Consumer Price Index. 1989. Economic Report of the President. Transmitted to the
Congress January 1989, together with the Annual Report of the Council of
Economic Advisers. Washington, DC: Government Printing Office.
Cox, E. P. 1976. "A Cost/Benefit View of Prepaid Monetary Incentives in Mail
Questionnaires." Public Opinion Quarterly 40:101-4.
Dillman, D. A. 1978. Mail and Telephone Surveys: The Total Design Method. New
York: Wiley.
Dohrenwend, B. S. 1970. "An Experimental Study of Payments to Respondents."
Public Opinion Quarterly 34:620-24.
Eichner, K., and W. Habermehl. 1981. "Predicting Response Rates to Mailed
Questionnaires: Comment on Heberlein and Baumgartner, ASR, August, 1978."
American Sociological Review 46:361-63.
Eisinger, R. A., W. P. Janicki, R. L. Stevenson, and W. L. Thompson. 1974.
"Increasing Returns in International Mail Surveys." Public Opinion Quarterly
38:125-30.
Fox, R. J., M. R. Crask, and J. Kim. 1988. "Mail Survey Response Rate: A
Meta-analysis of Selected Techniques for Inducing Response." Public Opinion
Quarterly 52:467-91.
Gelb, B. D. 1975. "Incentives to Increase Survey Returns: Social Class
Considerations." Journal of Marketing Research 12:107-9.
Goyder, J. C. 1982. "Further Evidence on Factors Affecting Response Rates to
Mailed Questionnaires." American Sociological Review 47:550-53.
Greenberg, A., and M. N. Manfield. 1957. "On the Reliability of Mail Questionnaires
in Product Tests." Journal of Marketing 21:342-45.
Groves, R. M. 1987. "Research on Survey Data Quality." Public Opinion Quarterly
51:S156-S172.
Gunn, W. J., and I. N. Rhodes. 1981. "Physician Response Rates to a Telephone
Survey: Effects of Monetary Incentive Level." Public Opinion Quarterly
45:109-15.
Guzzo, R. A., R. D. Jette, and R. A. Katzell. 1985. "The Effects of Psychologically
Based Intervention Programs on Worker Productivity: A Meta-Analysis."
Personnel Psychology 38:275-91.
Hackler, J. C., and P. Bourgette. 1973. "Dollars, Dissonance, and Survey Results."
Public Opinion Quarterly 37:276-81.
Heberlein, T. A., and R. Baumgartner. 1978. "Factors Affecting Response Rates to
Mailed Questionnaires: A Quantitative Analysis of the Published Literature."
American Sociological Review 43:447-62.
Hedges, L. V., and I. Olkin. 1985. Statistical Methods for Meta-Analysis. Orlando,
FL: Academic Press.
Houston, M. J., and N. M. Ford. 1976. "Broadening the Scope of Methodological
Research on Mail Surveys." Journal of Marketing Research 13:397-403.
Howell, D. C. 1982. Statistical Methods for Psychology. Boston: Duxbury.
Hunter, J. E., and F. L. Schmidt. 1990. Methods of Meta-Analysis: Correcting Error
and Bias in Research Findings. Newbury Park, CA: Sage.
Hunter, J. E., F. L. Schmidt, and G. B. Jackson. 1982. Meta-Analysis: Cumulating
Research Findings across Studies. Studying Organizations: Innovations in
Methodology, vol. 4. Beverly Hills, CA: Sage.
Jones, W. H. 1979. "Generalizing Mail Survey Inducement Methods: Population
Interactions with Anonymity and Sponsorship." Public Opinion Quarterly
43:102-11.
Jones, W.H., and J. R. Lang. 1980. "Sample Composition Bias and Response Rates
in a Mail Survey: A Comparison of Inducement Methods." Journal of Marketing
Research 17:69-76.
Kanuk, L., and C. Berenson. 1975. "Mail Surveys and Response Rates: A Literature
Review." Journal of Marketing Research 12:440-53.
Kephart, W. M., and M. Bressler. 1958. "Increasing the Response to Mail
Questionnaires: A Research Study." Public Opinion Quarterly 22:122-32.
Knox, J. B. 1951. "Maximizing Responses to Mail Questionnaires: A New
Technique." Public Opinion Quarterly 51:366-67.
Levine, S., and G. Gordon. 1958. "Maximizing Returns on Mail Questionnaires."
Public Opinion Quarterly 22:568-75.
Linsky, A. S. 1975. "Stimulating Responses to Mailed Questionnaires: A Review."
Public Opinion Quarterly 39:82-101.
Lockhart, D. C. 1984. Making Effective Use of Mailed Questionnaires. New
Directions for Program Evaluation: A Publication of the Evaluation Research
Society, ed. Ernest R. House, no. 21. San Francisco: Jossey-Bass.
Peterson, R. A. 1975. "Mail-Survey Responses." Journal of Business Research
3:198-210.
Robinson, R. A., and P. Agisim. 1951. "Making Mail Surveys More Reliable."
Journal of Marketing 15:415-24.
SAS Institute. 1985. SAS User's Guide: Statistics, Version 5 Edition. Cary, NC: SAS
Institute.
Shosteck, H., and W. R. Fairweather. 1979. "Physician Response Rates to Mail and
Personal Interview Surveys." Public Opinion Quarterly 43:207-17.
Sudman, S., and R. Ferber. 1974. "A Comparison of Alternate Procedures for
Collecting Consumer Expenditure Data for Frequently Purchased Products."
Journal of Marketing Research 11:128-35.
Tabachnick, B. G., and L. S. Fidell. 1989. Using Multivariate Statistics. 2d ed. New
York: Harper & Row.
Wiseman, F. 1973. "Factor Interaction Effects in Mail Survey Response Rates."
Journal of Marketing Research 10:330-33.
Wolfe, A. C., and B. R. Treiman. 1979. "Postage Types and Response Rates in Mail
Surveys." Journal of Advertising Research 19:43-48.
Yu, J., and H. Cooper. 1983. "A Quantitative Review of Research Design Effects on
Response Rates to Questionnaires." Journal of Marketing Research 20:36-44.
Zusman, B. J., and P. Duby. 1987. "An Evaluation of the Use of Monetary
Incentives in Postsecondary Survey Research." Journal of Research and
Development 20:73-78.