OMB: 0920-1023



Colorectal Cancer Screening Survey


New



Supporting Statement

Part B: Statistical Methods



April 21, 2014



Point of Contact:

Florence Tangka, PhD

Division of Cancer Prevention and Control

Centers for Disease Control and Prevention

Atlanta, Georgia

Telephone: (770) 488-1183

E-mail: FBT9@CDC.GOV



TABLE OF CONTENTS

B. Collections of Information Employing Statistical Methods

B.1 Respondent Universe and Sampling Methods

B.2 Procedures for the Collection of Information

B.2.1 Estimation Procedure

B.2.2 Estimating Willingness-to-Pay (WTP)

B.2.3 Conditional Logit Estimation

B.2.4 Mixed Logit Estimation

B.3 Methods to Maximize Response Rates and Deal with Nonresponse

B.4 Tests of Procedures or Methods to be Undertaken

B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

References



Exhibits

Exhibit B-1. Sample Size and Response Rates for Pretest and Final Data Collection




LIST OF ATTACHMENTS

Attachment 1: Public Health Service Act

Attachment 2: Overview of Survey and Screen Shots

Attachment 3: Experimental Design and 10 Blocks of DCE Questions

Attachment 4: Federal Register Notice

Attachment 5: Summary of Public Comments and CDC Response

Attachment 6: RTI Institutional Review Board Approval

Attachment 7: KN’s Privacy Statement

Attachment 8: Invitation Email for Respondents

B. Collections of Information Employing Statistical Methods

B.1 Respondent Universe and Sampling Methods

The respondent universe for this study is men and women aged 50 through 75 living in the United States. This respondent universe consists of potentially 235 million U.S. adults. The survey sample of 2,900 (for the pretest and final survey combined) will be drawn by GfK Knowledge Networks (KN) from its national panel, the KnowledgePanel®. KN is a subcontractor to RTI International, which is the Centers for Disease Control and Prevention’s (CDC’s) contractor for this study. The current KnowledgePanel consists of more than 45,000 adults who complete a few surveys per month while they remain in the KnowledgePanel. KN’s panel is scientifically recruited and maintained to track closely to the U.S. population in terms of age, race, Hispanic ethnicity, geographical region, employment status, and other demographic elements. The KnowledgePanel has been previously approved by the Office of Management and Budget (OMB) for many public health research applications.

  • KN utilizes address-based sampling (ABS) for its panel recruitment. When KnowledgePanel® began over 10 years ago, panelists were recruited via random digit dialing (RDD) telephone surveys. At the time, RDD samples allowed access to over 90% of U.S. households. This is no longer the case due to marked declines in landline households, dramatic increases in cell-only households, and the use of caller ID devices, call screening, answering machines, and do-not-call lists. Hence, in 2009 KN began recruiting entirely from the U.S. Postal Service’s Delivery Sequence File, which provides coverage of 97% of U.S. households. Under this recruitment procedure, randomly sampled addresses are invited to join KnowledgePanel® through a series of mailings and, in some cases, telephone follow-up calls to nonresponders when a telephone number can be matched to the sampled address. Operationally, households invited to participate in the KnowledgePanel® can join the panel in one of several ways: (1) completing and returning a paper form in a postage-paid envelope; (2) calling a toll-free hotline maintained by KN; or (3) going to a dedicated website and completing an online recruitment form. Once these recruitment procedures are completed, invited participants become empaneled and are available to begin participating in specific online surveys. All KN panelists complete their surveys online.

  • Households are provided with access to the Internet and hardware if needed (a free netbook laptop and free Internet service). Thus, unlike Internet convenience panels (also known as “opt-in” panels), which include only individuals with Internet access who volunteer themselves for research, KnowledgePanel recruitment covers households both with and without Internet access.

  • Address-based sampling also provides coverage of cell-phone-only households.




KN maintains basic demographic data on this population and, for this study, will limit survey invitations to persons between the ages of 50 and 75. Panel demographic information will be used by KN to generate survey weights and design variables for statistical adjustments for nonresponse in data analysis.

Exhibit B-1 shows the expected response rates (completion rates) for the pretest and final surveys from KN panel members. The completion rate of 70% is a conservative (low) estimate based on previous experience with KN surveys. Note that the total response rate, which includes attrition during recruitment to the KnowledgePanel and attrition from the panel over time, is lower than the completion rate for an individual survey.

Exhibit B-1. Sample Size and Response Rates for Pretest and Final Data Collection

Description      Target Sample Size    Estimated Response (Completion) Rate    Sampled Units
Pretest                  30                           70%                              43
Final Survey          2,000                           70%                           2,857
Total                 2,030                                                         2,900



Our primary interest is in estimating the preference parameters of the discrete choice experiment (DCE) model, and the study is powered to estimate a separate model for each of the three subsamples that receive the three information treatments. Sample-size calculations represent a challenge in choice experiments such as this one, because there is no agreed-upon method for determining the needed sample size. Most published DCE studies in health have a sample size between 100 and 300 respondents (Marshall et al., 2010). However, the minimum sample size depends on a number of criteria, including the question format, the complexity of the choice task, the desired precision of the results, and the need to conduct subgroup analyses (Johnson et al., 2012; Louviere et al., 2000). Sample size needs also depend on how heterogeneous preferences are across the sample over the attributes and range of levels included in the survey: the more heterogeneous preferences are across attributes and levels, the larger the sample size needed.

Orme (2010) provides an equation to calculate the minimum sample size for a main-effects design based on the number of tasks (choice questions) each respondent completes (five in this survey), the number of alternatives in each task (two in this survey), and the maximum number of levels for any one attribute (five in this survey).1 Based on this equation, with five tasks per respondent, two alternatives per task, and a maximum of five levels for any one attribute, we need a sample size of approximately 250 to 500 respondents to estimate the main effects (parameters for each attribute level).
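As a concreteness check, the rule of thumb in footnote 1 can be computed directly. The following minimal sketch (Python, with illustrative function and variable names) reproduces the 250-to-500 range cited above:

```python
# Minimal sketch of the Orme (2010) rule of thumb stated in footnote 1:
# n >= c * 500 / (a * t), or more conservatively c * 1000 / (a * t),
# where c = max levels per attribute, a = alternatives per task,
# t = tasks per respondent. Names are illustrative, not from the source.

def orme_min_n(levels: int, alternatives: int, tasks: int, factor: int = 500) -> float:
    """Minimum sample size for estimating main effects."""
    return levels * factor / (alternatives * tasks)

# Values used in this survey: 5 levels, 2 alternatives, 5 tasks.
low = orme_min_n(5, 2, 5, factor=500)     # -> 250.0
high = orme_min_n(5, 2, 5, factor=1000)   # -> 500.0
print(low, high)
```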

With a sample size of 2,000 divided randomly among the three information treatments, we will have approximately 667 respondents per treatment. This sample size should allow us to estimate a separate model for each information treatment and to estimate parameters for each attribute level.

B.2 Procedures for the Collection of Information

Survey invitations will be sent by KN to a sample of U.S. adults between the ages of 50 and 75 from its KnowledgePanel (see the e-mail invitation in Attachment 8). A respondent’s initial log-in directs the respondent to an Institutional Review Board (IRB)-approved online consent form (see Attachment 6 for the IRB approval and Attachment 2, pages 4-6, for the consent form), which provides general information about the study and any possible risks. To participate in the study, respondents must click a box to indicate that they have read the information and that they voluntarily consent to participate; otherwise, they may decline by clicking a “do not consent” box or by simply closing the consent screen.

The surveys will be self-administered and accessible any time of day for a designated period. For both the pretest and the full survey, participants are permitted to complete the survey only once, since each respondent has a unique code.

KN will begin fielding with a pretest in which a small number of survey invitations are sent to obtain roughly 30 completed observations. Immediately after the pretest is completed, KN will send an encrypted, de-identified data file to RTI for preliminary analysis as an additional quality control check. Any issues identified for correction by CDC or RTI will be adjusted before KN begins the full fielding. After fielding, the final data file is generated following strict quality control procedures at KN, review by multiple supervisors at KN, and a random case-level check to ensure proper merging and formatting. Again, KN will de-identify and encrypt the data before final delivery to RTI.

We estimate that 43 people must be invited to yield 30 respondents for the pretest (assuming a conservative 70% response rate). We estimate that 2,857 people must be invited to yield 2,000 respondents for the final survey (assuming a conservative 70% response rate).

B.2.1 Estimation Procedure

The survey contains a series of stated preference (SP) DCE questions. Each DCE question asks the respondent to select which of two alternative colorectal cancer (CRC) screening tests they prefer; the questions were designed following methods in Flynn (2010), Hauber et al. (2010), Bijlenga et al. (2009), and Ratcliffe et al. (2009). The hypothetical screening tests in the DCE questions are defined by the screening test attributes introduced earlier in the survey. A D-efficient, fractional factorial, orthogonal design was created using NGene (ChoiceMetrics, 2012). The final design, contained in Attachment 3, consists of 10 blocks of 5 DCE questions, and respondents are randomly assigned to one of the 10 blocks.

After the survey data collection is completed, the data will be cleaned, coded, and edited. All data collected for this study will be weighted for analysis. Weights for the KN sample are calculated using a standard post-stratification weighting procedure that adjusts for survey nonresponse as well as noncoverage. This weighting procedure also applies a standard post-stratification adjustment based on demographic distributions from the most recent data from the Current Population Survey (CPS). Benchmark distributions for Internet access used in this weight are obtained from the most recent special CPS supplemental survey measuring Internet access.
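For illustration only, the sketch below shows a generic iterative proportional fitting (“raking”) routine of the kind typically used for post-stratification adjustments such as those described above. It is not KN’s proprietary weighting procedure, and the raking dimensions and target margins are hypothetical:

```python
# Generic raking (iterative proportional fitting) sketch; the margins
# and dimensions are hypothetical, not KN's actual procedure.
import numpy as np

def rake(weights, categories, targets, iters=50):
    """Adjust weights so weighted category shares match target margins.

    categories: list of integer-coded arrays, one per raking dimension
    targets:    list of target-share arrays (each summing to 1)
    """
    w = weights.astype(float).copy()
    for _ in range(iters):
        for cats, tgt in zip(categories, targets):
            total = w.sum()
            for k, share in enumerate(tgt):
                mask = cats == k
                cur = w[mask].sum()
                if cur > 0:
                    # Scale category k so its weighted share hits the target.
                    w[mask] *= share * total / cur
    return w / w.mean()  # normalize weights to mean 1

# Hypothetical example: rake on sex (2 categories) and age group (3 categories).
rng = np.random.default_rng(0)
sex = rng.integers(0, 2, 500)
age = rng.integers(0, 3, 500)
w = rake(np.ones(500), [sex, age],
         [np.array([0.5, 0.5]), np.array([0.3, 0.4, 0.3])])
```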

We will generate statistics to summarize responses for the sample as a whole and by individual characteristics, such as age, whether they have had a screening test for CRC in the past, and which information treatment they received (no additional information or one of the two fact sheets).

B.2.2 Estimating Willingness-to-Pay (WTP)

We will use the DCE data to estimate the preferences for screening tests and changes in screening test attributes. To analyze the data from the DCE questions, we will apply a random utility modeling (RUM) framework, which is commonly used to model discrete choice decisions in SP studies. The RUM framework assumes that survey respondents implicitly assign utility to each choice option presented to them. This utility can be expressed as

$$U_{ij} = V(X_{ij}, Z_i; \beta_i) + e_{ij},$$

where Uij is individual i’s utility for a choice option (i.e., screening test) j; V(·) is the nonstochastic part of utility, a function of Xij, which represents a vector of attribute levels for option j (including its cost) presented to the respondent; Zi is a vector of personal characteristics; and βi is a vector of attribute-specific preference parameters. eij is a stochastic term, which captures elements of the choice option that affect individuals’ utility but are not observable to the analyst. On each choice occasion, respondents are assumed to select the option that provides the highest level of utility. By presenting respondents with a series of choice tasks and options with different values of Xij, the resulting choices reveal information about the preference parameter vector.

For the initial and most basic analysis, we assume the following form for utility:

$$U_{ij} = \beta_1' X_{ij} + \beta_2 (y_i - C_{ij}) + e_{ij},$$

where yi is a measure of respondent i’s household income, and Cij is the cost of option j to respondent i (in this formulation, the cost attribute is separated from the other attributes in Xij). The parameter vector, β, is assumed to be the same for all respondents and includes two main components: β1, the vector of marginal utilities associated with each attribute in Xij, and β2, the marginal utility of income.

B.2.3 Conditional Logit Estimation

To estimate the parameters of this simple model, we will use a standard conditional logit (CL) model (McFadden, 1984), which assumes the disturbance term follows a Type I extreme-value error structure and uses maximum-likelihood methods to estimate β1 and β2. One well-recognized limitation of the CL model is its assumed property of Independence of Irrelevant Alternatives (IIA), which often implies unrealistic substitution patterns between options, particularly those that are relatively similar (McFadden, 1984); nevertheless, CL is a computationally straightforward estimation approach that can provide useful insights into the general pattern of respondents’ preferences, trade-offs, and values.
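To make the estimation approach concrete, the following is a minimal sketch (not the project’s analysis code) of conditional logit estimation by maximum likelihood for a paired-choice design like this survey’s; the simulated data, attribute count, and variable names are illustrative assumptions:

```python
# Conditional logit sketch for a paired-choice design. With Type I
# extreme-value errors, the difference of two Gumbel draws is logistic,
# so P(choose A) = logistic(beta' (x_A - x_B)).
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(beta, x_diff, choice):
    """x_diff: (n_tasks, n_attrs) attribute differences (option A - option B);
    choice: 1.0 if option A was chosen, else 0.0."""
    v = x_diff @ beta
    p_a = 1.0 / (1.0 + np.exp(-v))
    eps = 1e-12  # guard against log(0)
    return -np.sum(choice * np.log(p_a + eps) + (1 - choice) * np.log(1 - p_a + eps))

# Simulated data for illustration only (3 hypothetical attributes).
rng = np.random.default_rng(1)
true_beta = np.array([0.8, -0.5, -0.02])
x_diff = rng.normal(size=(2000, 3))
u = x_diff @ true_beta + rng.gumbel(size=2000) - rng.gumbel(size=2000)
choice = (u > 0).astype(float)

res = minimize(neg_log_lik, np.zeros(3), args=(x_diff, choice), method="BFGS")
print(res.x)  # estimates should be near true_beta
```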

The parameter estimates from the CL model will then be used to estimate the average marginal willingness to pay (MWTP) value of each noncost attribute:

$$MWTP_k = \frac{\beta_{1k}}{\beta_2},$$

where k refers to the kth element of the X and β1 vectors. The estimates will also be used to estimate the average WTP for acquiring the combination of attributes associated with one test (X1) compared with the attributes of another test (X0):

$$WTP = \frac{\beta_1'(X^1 - X^0)}{\beta_2}.$$

The standard errors and confidence intervals for these value estimates will be estimated using the delta method (Greene, 2003) or the Krinsky and Robb (1986) simulation method.
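A minimal sketch of the Krinsky and Robb (1986) procedure is shown below, assuming hypothetical point estimates and a hypothetical covariance matrix from a fitted model; because β2 is defined above as the marginal utility of income, MWTP for attribute k is β1k/β2:

```python
# Krinsky-Robb simulation sketch for a MWTP confidence interval.
# beta_hat and vcov below are hypothetical, not fitted estimates.
import numpy as np

def krinsky_robb_mwtp(beta_hat, vcov, k, income_idx, n_draws=10_000, seed=0):
    """Draw parameter vectors from N(beta_hat, vcov) and compute the
    2.5th/97.5th percentiles of MWTP_k = beta_1k / beta_2."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(beta_hat, vcov, size=n_draws)
    mwtp = draws[:, k] / draws[:, income_idx]
    return np.percentile(mwtp, [2.5, 97.5])

# Hypothetical inputs: one attribute coefficient and the income coefficient.
beta_hat = np.array([0.8, 0.02])
vcov = np.diag([0.01, 0.00001])
print(krinsky_robb_mwtp(beta_hat, vcov, k=0, income_idx=1))
```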

The analysis will also need to account for the direct preference effect associated with selecting “no test” in the follow-up question after each DCE question. To do so, the analysis will include an alternative-specific constant for the no-test alternative.

The respondents will randomly receive one of three information treatments (no additional information or one of two CRC screening fact sheets). To examine, test for, and estimate differences in preferences across the three groups, we will estimate both separate and pooled models for subsamples of the data and test the restrictions of the pooled models using log-likelihood ratio tests. We will also estimate varying parameter models by interacting the attribute vector (Xij) with dummy variables for the treatment effects. In addition, we will look at differences in preferences based on elements of the respondent characteristics vector (Zi) by interacting the respondent characteristics with the attribute vector. The parameter estimates from the interaction terms will allow us to examine whether and how the marginal values associated with test attributes vary systematically with respect to the information treatment and respondent characteristics.

Based on the findings of these pooling tests and varying parameter models, we will determine whether and how WTP for the CRC screening tests described in our survey varies across the population according to information treatment and other respondent characteristics (for example, sociodemographic characteristics, risk perceptions, or experience with CRC screening tests). We will use the model results to predict average WTP for different subgroups and to demonstrate how benefits of different screening tests are distributed across different subsectors of the population.

B.2.4 Mixed Logit Estimation

In addition to analyses using CL, we will estimate mixed logit (ML) models (Revelt & Train, 1998). Although these models are somewhat more complex, they offer several advantages. First, in contrast to CL, ML is not subject to the restrictive IIA assumption. Second, ML specifically accounts for unobserved heterogeneity in tastes across subjects. It introduces subject-specific stochastic components for each element of β1, as follows:

$$\beta_{1i} = \beta_1 + \eta_i,$$

where ηi is a stochastic component of preferences that varies across respondents according to an assumed probability distribution. Third, ML can be used to capture within-subject correlation in responses (i.e., panel-structured data), which is important for DCE surveys that involve multiple choice tasks per respondent (as in this study).

The main difference in the output of ML models compared with CL models is that ML provides the ability to characterize the unobserved heterogeneity in respondents’ preferences. This can be especially important if we believe people differ in how they trade off attributes of the tests being evaluated. The statistical model allows the model parameters (the elements of the β1 vector) to have a stochastic component, and the estimated standard deviations can be interpreted as measures of attribute-specific preference heterogeneity. As a result, the Revelt and Train methodology yields estimates of both the mean and the standard deviation of each parameter treated as random.
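As an illustrative sketch of the Revelt and Train (1998) approach (not the project’s analysis code), the function below builds the simulated log-likelihood for a paired-choice mixed logit with a single normally distributed coefficient, averaging each respondent’s likelihood across draws to respect the panel structure; in practice its negative would be minimized numerically, for example with scipy.optimize.minimize:

```python
# Simulated log-likelihood for a one-attribute, paired-choice mixed logit.
# All data structures and names are illustrative assumptions.
import numpy as np

def sim_log_lik(params, x_diff, choice, resp_id, n_draws=200, seed=0):
    """params = (mean, log_sd) of the single random coefficient.
    x_diff:  (n_obs,) attribute difference (option A - option B)
    choice:  (n_obs,) 1.0 if option A chosen, else 0.0
    resp_id: (n_obs,) respondent index for each observation."""
    mean, log_sd = params
    rng = np.random.default_rng(seed)
    n_resp = resp_id.max() + 1
    # One coefficient draw per respondent per simulation draw.
    draws = mean + np.exp(log_sd) * rng.normal(size=(n_resp, n_draws))
    v = x_diff[:, None] * draws[resp_id]          # (n_obs, n_draws)
    sign = np.where(choice[:, None] == 1, 1.0, -1.0)
    log_p = -np.log1p(np.exp(-sign * v))          # log of logistic choice prob
    # Sum log-probs within each respondent (panel structure) ...
    resp_log = np.zeros((n_resp, n_draws))
    np.add.at(resp_log, resp_id, log_p)
    # ... then average the respondent-level likelihood over draws.
    sim_prob = np.exp(resp_log).mean(axis=1)
    return np.sum(np.log(sim_prob + 1e-300))
```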

When applying ML models to estimate WTP, one must make additional judgments regarding model specification (Balcombe, Chalak, & Fraser, 2009), including the following:

  • Which coefficients should be assumed to be fixed or randomly distributed?

  • What statistical distribution(s) should be used for the random parameters?

  • Should the model be estimated in “utility space” or “WTP space”?

In addition, there are two main approaches to estimating ML models: simulation-based maximum likelihood estimation and Bayesian (i.e., Hierarchical Bayes [HB]) estimation. In general, the two methods have equivalent asymptotic properties, but they use different estimation procedures that offer advantages and disadvantages for addressing the specification issues described above. The two estimation procedures are discussed in depth in Train (2001) and Huber and Train (2001).

The analytical expressions for WTP involve ratios of coefficients, which can be problematic, leading to unstable or implausible WTP distributions when both the numerator and the denominator are assumed to be randomly distributed. One approach often used to address this issue is to assume that the income/cost parameter (β2) is fixed. An alternative approach is to estimate the model in “WTP space”:

$$U_{ij} = \lambda_i \left( \omega_i' X_{ij} + y_i - C_{ij} \right) + \varepsilon_{ij},$$

where λi = β2i/μi, ωi = β1i/β2i, and εij = eij/μi, with μi the scale parameter and ωi the vector of marginal WTP values for the attributes in X (Scarpa, Thiene, & Train, 2008). In this framework, one can begin by directly specifying the distributions of MWTP (ωi) and λi; however, the result is a model that is nonlinear in the utility parameters. One advantage of HB estimation is that this type of nonlinearity is much easier to accommodate than in classical ML estimation. For this project, we will evaluate the ML approaches and use one or more in the analysis.
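For readers unfamiliar with the WTP-space reparameterization, the following short derivation (a standard manipulation following Scarpa, Thiene, & Train, 2008, written here in this document’s notation) shows where the specification comes from: dividing utility by the respondent-specific scale leaves choices unchanged because utility is ordinal.

```latex
% Divide utility by the respondent-specific scale \mu_i; choices are
% unchanged because utility is ordinal.
U_{ij} = \beta_{1i}' X_{ij} + \beta_{2i}\,(y_i - C_{ij}) + e_{ij}
\quad\Longrightarrow\quad
\frac{U_{ij}}{\mu_i}
  = \lambda_i \left( \omega_i' X_{ij} + y_i - C_{ij} \right) + \varepsilon_{ij},
\qquad
\lambda_i = \frac{\beta_{2i}}{\mu_i}, \quad
\omega_i  = \frac{\beta_{1i}}{\beta_{2i}}, \quad
\varepsilon_{ij} = \frac{e_{ij}}{\mu_i}.
```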

B.3 Methods to Maximize Response Rates and Deal with Nonresponse

The research survey is designed to help CDC understand factors that might be preventing people from getting screened for CRC, using a systematic approach that estimates the rates at which individuals trade off different test features against one another. The results will help CDC design interventions to increase screening. The results from the survey will not be used for regulatory analysis or to draw conclusions about the preferences of adults aged 50 to 75 in the general population. We will take careful steps to limit nonresponse bias and to analyze our data for evidence of nonresponse bias, which will be reported in any write-up of the results; however, we do not intend to generalize the results beyond the sample.

The survey and the data collection methods have been designed by CDC, RTI, and KN to minimize nonresponse bias, and we fully anticipate meeting or exceeding a 70% response rate for this study, as described below. First, we describe methods to reduce nonresponse bias within the sample drawn by KN from its panel. Second, we describe our efforts to measure and detect nonresponse bias within the KN panel and explain how the extent of any nonresponse bias will be reported. Third, we provide estimates of comparable response rates drawn from a similar sampling frame on health topics. Finally, nonresponse at the panel-recruitment stage (individuals who decline to join the KnowledgePanel) is another potential source of bias, and we describe our approach to assessing it as well.

Methods to reduce nonresponse bias within the sample due to the survey design and administration. The following steps have been undertaken in the survey and sampling design to minimize nonresponse bias and to ensure high response rates:

  • Pretesting. The survey has been carefully designed and pretested to ensure the best possible respondent experience. The research team has extensively evaluated the survey to improve the questionnaire and the online survey experience. The survey was also pretested with 9 individuals from the general public in May 2013, and additional edits were made to further improve the survey and maximize response rates.

  • Limited length. The research team has scrutinized the survey’s length and content to reduce respondent burden and maximize response and completion rates.

  • Reminders. Two e-mail reminders will be sent to nonresponders a few days after the initial survey invitation.

  • Toll-free numbers. KN will provide toll-free telephone numbers in the survey invitation and welcome screen for potential or enrolled respondents to call with any questions or concerns about any aspect of the study. RTI will also provide a toll-free telephone number for participants who have any questions about the study or their rights as a study participant.

  • KN’s national panel has very high completion rates on average. The sample will be drawn by KN from its standing national panel (KnowledgePanel). As outlined above, these approximately 45,000 adults have previously been contacted regarding ongoing participation in studies performed by KN. All individuals complete a few surveys per month while they remain in the KnowledgePanel. Thus, they expect survey invitations with some frequency and respond at very high rates on average.

  • KnowledgePanel® utilizes an unbiased general topic recruitment protocol that is free of self-selection biases related to pre-existing interests in specific research topics.



Methods to detect and report on nonresponse bias based on nonresponse by invited respondents. Our first priority in dealing with nonresponse is to prevent it from occurring by ensuring high response rates using the methods described above. However, some nonresponse is likely unavoidable despite the best efforts of any survey methodologist, so we briefly describe our efforts to identify and report its potential impact on our results.

KN maintains a range of “profile” data for all of its panel members on topics including—and going beyond—CPS questions. Thus, we have much more information about nonrespondents than would be the case if the entire sampling frame were new for this study. To analyze this, KN’s deliverable to CDC and RTI will include profile data for all sampled individuals (nonrespondents and respondents). The following specific elements will be provided:

  • Date and time survey started, completed, and total duration in minutes

  • Age (integer)

  • Education (highest degree received, 14 categories)

  • Race and Hispanic ethnicity

  • Gender

  • Household head status of respondent (yes/no)

  • Household size (integer, number of members)

  • Housing type (5 categories)

  • Household income (19 categories)

  • Marital status (6 categories)

  • Metropolitan statistical area (MSA) status (2 categories: MSA/metro or non-MSA/metro)

  • Ownership status of living quarters (3 categories)

  • State

  • Current employment status (7 categories)

  • Internet access (KN provided, yes/no)

After data collection, we will conduct and report descriptive analyses of these variables and basic statistical tests of differences in proportions and means (e.g., chi-square tests, t-tests). These will be included in the final report to CDC and in appendices to academic research papers that are submitted for publication.
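These comparisons could be implemented along the following lines; this is an illustrative sketch with hypothetical column names, not the project’s analysis code:

```python
# Sketch of the planned nonresponse checks: a chi-square test for a
# categorical profile variable and a t-test for a continuous one.
# Column names ("education", "age", "responded") are hypothetical.
import pandas as pd
from scipy import stats

def nonresponse_tests(df: pd.DataFrame) -> dict:
    """df has one row per sampled panelist, with a 0/1 'responded' flag."""
    # Chi-square: does the education distribution differ between
    # respondents and nonrespondents?
    table = pd.crosstab(df["education"], df["responded"])
    chi2, p_chi, dof, expected = stats.chi2_contingency(table)
    # Welch t-test: do respondents and nonrespondents differ in mean age?
    t, p_t = stats.ttest_ind(
        df.loc[df["responded"] == 1, "age"],
        df.loc[df["responded"] == 0, "age"],
        equal_var=False,
    )
    return {"education_chi2_p": p_chi, "age_ttest_p": p_t}
```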

Methods to detect and report on nonresponse bias from recruitment of the KnowledgePanel. Another source of nonresponse bias arises when individuals recruited by KN for the KnowledgePanel do not join the panel. Therefore, to compare respondents from this study with the general population, including non-KN panel members, we will benchmark our survey estimates against measures from the National Health Interview Survey (NHIS). As described in Section A.2, this survey contains questions from the NHIS, and we will compare the responses from this survey with the responses to similar questions in the NHIS. A limitation of this approach is that the NHIS data are from an earlier time period and are collected in a different, interviewer-administered mode. As a result, differences observed between our study and the NHIS could be the result of time shifts or mode differences (Dennis, 2010; Smith & Dennis, 2005). Second, to assess the broader representativeness of the responding sample relative to U.S. population characteristics, we will compare distributions of the same variables with those in the most recent CPS.

B.4 Tests of Procedures or Methods to be Undertaken

The survey for this study was developed through several steps, as described in Supporting Statement A. We began with a literature review to identify attributes affecting individuals’ willingness to obtain CRC screening. The draft instrument and all methods were then reviewed by additional experts at RTI and CDC and by Dr. Derek Brown (Washington University in St. Louis).

In addition, revised survey materials were pretested by RTI staff in a guided cognitive interview format in May 2013 with 9 adults from the general public in Raleigh and Durham, North Carolina. During the cognitive interviews, respondents completed the survey in the presence of a trained interviewer, who used a semi-structured protocol with standardized probes and follow-up questions to guide the interview. Respondents, recruited by L&E Research, ranged in age from 50 to 75, with a mix of genders, races, and education levels. Five of the respondents had undergone a colonoscopy in the past, and 4 had never had one. Select revisions to the survey were made following the cognitive interviews, and the final instrument was reviewed again by RTI, CDC, and Dr. Brown.

RTI will conduct rigorous testing of the online survey instrument prior to its fielding. RTI researchers will have access to an online test version of the instrument that we will use to verify that instrument skip patterns are functioning properly and that all survey questions are accurately worded using the instrument approved by OMB. KN will conduct a pretest of the survey with 30 respondents to make sure the survey programming is working, and RTI will review the data to ensure that the responses seem reasonable and the questions are working as desired.

B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The data collection methodology for this exploratory study was designed by Dr. Carol Mansfield of RTI International, Dr. Derek Brown of Washington University in St. Louis, and staff from CDC, including Dr. Florence Tangka and Dr. Donatus Ekwueme. Data analysis will be performed by RTI International under the direction of Dr. Carol Mansfield and our consultant, Dr. Derek Brown.

Carol Mansfield, PhD

Senior Economist,

RTI International

3040 E. Cornwallis Road

P.O. Box 12194

Research Triangle Park, NC 27709-2194

Phone: 919-541-8053

E-mail: carolm@rti.org


Derek S. Brown, PhD

Assistant Professor, Brown School

Washington University in St. Louis

One Brookings Drive

Campus Box 1196, Brown Hall, Room 116

St. Louis, MO 63130

Phone: 314-935-8651

E-mail: dbrown@brownschool.wustl.edu


Florence Tangka, PhD
Centers for Disease Control and Prevention
DCPC/EARB
4770 Buford Highway NE, MS F-76
Atlanta, GA 30341-3717
Phone: 770-488-1183
Fax: 770-488-4286
E-mail: ftangka@cdc.gov


Donatus (Don) U. Ekwueme, PhD

Centers for Disease Control and Prevention
DCPC/EARB
4770 Buford Highway NE, MS K-55
Atlanta, GA 30341-3717
Phone: 770-488-3182
Fax: 770-488-4639
E-mail: ftangka@cdc.gov





References

Balcombe, K., Chalak, A., & Fraser, I. (2009). Model selection for the mixed logit with Bayesian estimation. Journal of Environmental Economics and Management, 57(2), 226–237.

Bijlenga, D., Birnie, E., & Bonsel, G. J. (2009). Feasibility, reliability, and validity of three health-state valuation methods using multiple-outcome vignettes on moderate-risk pregnancy at term. Value in Health, 12(5), 821–827.

Dennis, J. M. (2010, March). KnowledgePanel®: Processes & procedures contributing to sample representativeness & tests for self-selection bias. Knowledge Networks working paper. http://www.knowledgenetworks.com/ganp/docs/KnowledgePanelR-Statistical-Methods-Note.pdf

Greene, W. H. (2003). Econometric Analysis. Upper Saddle River, NJ: Prentice Hall.

Flynn, T. N. (2010). Using conjoint analysis and choice experiments to estimate QALY values: issues to consider. Pharmacoeconomics, 28(9), 711–722.

Hauber, A. B., Mohamed, A. F., Johnson, F. R., Oyelowo, O., Curtis, B. H., & Coon, C. (2010). Estimating importance weights for the IWQOL-Lite using conjoint analysis. Quality of Life Research, 19(5), 701–709.

Huber, J., & Train, K. (2001). On the similarity of classical and Bayesian estimates of individual mean partworths. Marketing Letters, 12(3), 259–269.

Johnson, F. R., Kanninen, B., Bingham, M., & Özdemir, S. (2007). Experimental design for stated-choice studies. In B. J. Kanninen (Ed.), Valuing Environmental Amenities Using Stated Choice Studies (pp. 159–202). Dordrecht: Springer.

Johnson, F. R., Yang, J.-C., & Mohamed, A. F. (2012, March). In defense of imperfect experimental designs: Statistical efficiency and measurement error in choice-format discrete-choice experiments. Proceedings of the Sawtooth Software Conference, 195–205.

Krinsky, I., & Robb, A. L. (1986). On approximating the statistical properties of elasticities. The Review of Economics and Statistics, 68(4), 715–719.

Louviere, J. J., Hensher, D. A., & Swait, J. D. (2000). Stated choice methods: Analysis and applications. New York, NY: Cambridge University Press.

Marshall, D., Bridges, J. F. P., Hauber, A. B., Cameron, R., Donnalley, L., Fyie, K., et al. (2010). Discrete-choice experiment applications in health—how are studies being designed and reported? An update on current practice in the published literature between 2005 and 2008. Patient, 3(4), 249–256.

McFadden, D. (1984). Econometric analysis of qualitative response models. Handbook of Econometrics, 2, 1395–1457.

Orme, B. (2010). Getting started with conjoint analysis: Strategies for product design and pricing research (2nd Ed.). Madison, WI: Research Publishers LLC.

Ratcliffe, J., Brazier, J., Tsuchiya, A., Symonds, T., & Brown, M. (2009). Using DCE and ranking data to estimate cardinal values for health states for deriving a preference-based single index from the sexual quality of life questionnaire. Health Economics, 18(11), 1261–1276.

Revelt, D., & Train, K. (1998). Mixed logit with repeated choices of appliance efficiency levels. Review of Economics and Statistics, 80(4), 647–657.

Scarpa, R., Thiene, M., & Train, K. (2008). Utility in willingness to pay space: A tool to address confounding random scale effects in destination choice to the Alps. American Journal of Agricultural Economics, 90(4), 994–1010.

Smith, T. W., & Dennis, J. M. (2005, December). Online versus in-person: Experiments with mode, format, and question wordings. Public Opinion Pros. Retrieved from http://www.publicopinionpros.norc.org/from_field/2005/dec/smith.asp

Train, K. (2001). A comparison of hierarchical Bayes and maximum simulated likelihood for mixed logit. Working paper. Berkeley, CA: University of California.

Ware, J. E., Jr., & Sherbourne, C. D. (1992). The MOS 36-item short-form health survey (SF-36): Conceptual framework and item selection. Medical Care, 30(6), 473–483.

1 The sample size equation from Orme (2010) for a main effects design is (number of levels)*500/(number of alternatives * number of tasks) or, more conservatively, (number of levels)*1,000/(number of alternatives * number of tasks).

_________________________________
RTI International is a trade name of Research Triangle Institute.

