Colorectal Cancer Screening Survey
New
Supporting Statement
Part A: Justification
May 1, 2014
Point of Contact:
Florence Tangka, PhD
Division of Cancer Prevention and Control
Centers for Disease Control and Prevention
Atlanta, Georgia
Telephone (770) 488-1183
TABLE OF CONTENTS
A.1 Circumstances Making the Collection of Information Necessary
A.1.1 Privacy Impact Assessment
A.2 Purposes and Use of the Information Collection
A.2.1 Privacy Impact Assessment Information
A.3 Use of Improved Information Technology and Burden Reduction
A.4 Efforts to Identify Duplication and Use of Similar Information
A.5 Impact on Small Businesses or Other Small Entities
A.6 Consequences of Collecting the Information Less Frequently
A.7 Special Circumstances Relating to the Guidelines of 5 CFR 1320
A.8 Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
A.9 Explanation of Any Payment or Gift to Respondents
A.10 Assurance of Confidentiality Provided to Respondents
A.10.1 Privacy Impact Assessment Information
A.11 Justification for Sensitive Questions
A.12 Estimates of Annualized Burden Hours and Costs
A.13 Estimates of Other Total Annual Cost Burden to Respondents or Recordkeepers
A.14 Annualized Cost to the Federal Government
A.15 Explanation for Program Changes or Adjustments
A.16 Plans for Tabulation and Publication and Project Time Schedule
A.17 Reason(s) Display of OMB Expiration Date Is Inappropriate
A.18 Exceptions to Certification for Paperwork Reduction Act Submissions
LIST OF ATTACHMENTS
Attachment 1: Public Health Service Act
Attachment 2: Overview of Survey and Screen Shots
Attachment 3: Experimental Design and 10 Blocks of DCE Questions
Attachment 4: Federal Register Notice
Attachment 5: Summary of Public Comments and CDC Response
Attachment 6: RTI Institutional Review Board Approval
Attachment 7: KN’s Privacy Statement
Attachment 8: Invitation Email for Respondents
The Centers for Disease Control and Prevention (CDC) is requesting OMB approval for a new information collection to quantitatively assess consumer preferences for colorectal cancer (CRC) screening procedures and the factors that affect those preferences. The information collection is designed as a stated-preference survey using the discrete choice experiment (DCE) approach (also known as a conjoint survey). Respondents will be asked to choose between hypothetical CRC screening tests that have different characteristics, called “attributes.” Each respondent will be asked to make 5 “CRC test A versus CRC test B” choices in which the attributes of test A and the attributes of test B vary. The choices presented to respondents are designed to elucidate the relative importance of the attributes in consumer decision making. Respondents will be men and women ages 50-75 years, the population for whom the U.S. Preventive Services Task Force recommends CRC screening. The study design will also allow CDC to examine how preferences for screening tests are affected by risk perceptions, real-life experience with CRC screening, and exposure to two information sheets on CRC screening developed as part of CDC’s Screen for Life program. Findings will help CDC improve CRC screening rates in the target population through the development, implementation, and dissemination of public health interventions and education materials that address specific consumer concerns and take their preferences into consideration. OMB approval is requested for one year.
A significant public health gap in terms of eliminating preventable deaths remains because of continued unhealthy behaviors (Kant et al., 2007; Kumanyika, 2005; IOM, 2005). This gap disproportionately affects low-income, minority, uninsured, or under-insured populations and stems in part from a failure to receive basic clinical preventive services such as cancer screening and identification of other risk factors such as obesity, physical inactivity, excessive alcohol consumption, and tobacco use (Sambamoorthi and McAlpine, 2003). Social determinants also play a role: people need to have an environment that supports healthy choices. Additionally, the CDC-NCI-ACS report (Jemal et al., 2008) articulates a real challenge: the large health gains from the elimination of key risk factors such as smoking have largely been achieved among those who are able to change behavior or who have access to quality medical care. This suggests that incremental gains in the future are likely to be harder to achieve without an improved understanding of the multi-dimensional determinants of individual decision-making and behavior.
The Centers for Disease Control and Prevention (CDC) is requesting OMB approval for a new information collection to quantitatively assess consumer preferences for colorectal cancer (CRC) screening procedures and the factors that affect those preferences. The information collection is designed as a stated-preference survey using the discrete choice experiment (DCE) approach (also known as a conjoint survey).
CDC selected CRC screening for the study because CRC is the second leading cause of cancer-related death in the U.S. (U.S. Cancer Statistics Working Group, 2013) and early screening can prevent deaths, but screening rates are low. The U.S. Preventive Services Task Force gives CRC screening for individuals ages 50 to 75 its strongest recommendation, indicating that, based on the evidence, there is high certainty that the net benefit of CRC screening is substantial. However, as of 2012, only 65% of adults ages 50–75 years reported being up to date with the CRC screening recommendation (Klabunde et al., 2013).
One of the Healthy People 2020 objectives is to increase the proportion of eligible adults who receive CRC screening to 70.5% by 2020 (Department of Health and Human Services, 2013). Effective CRC screening programs (including public health messages) are critical to bring the benefits of screening to the eligible population and to reach national cancer prevention and control goals.
Different types of interventions that are effective in increasing cancer screening have been developed, including patient and provider reminders, small media, and financial incentives (Community Preventive Services Taskforce, 2012). Unfortunately, these interventions have not been widely used (Klabunde et al., 2013). Factors such as difficulty selecting among effective tests and the cost and complexity of the interventions may explain some of the limited dissemination. The challenge for public health is to identify approaches that might be effective in reaching members of the public, particularly those who do not respond to traditional public health messages and interventions designed to support healthy behaviors. An underutilized intervention does not solve the problem for which it was intended, potentially wastes the resources that went into its development and implementation, and forgoes the benefits of alternative uses of those resources. To increase the effectiveness of a CRC screening program, the eligible population needs to be aware of the benefits of CRC screening services, and the services must be available, easily accessible, affordable, and acceptable to the target population (Pignone et al., 2014). One approach for informing decisions about potential ways to raise CRC screening rates is to elicit feedback from the target population about the features of potential CRC screening tests and the factors that influence screening preferences.
One potential method for eliciting such feedback from the target population is a stated-preference (SP) discrete choice experiment (DCE) survey, also known as a conjoint analysis survey. The DCE approach assumes that CRC screening programs can be described in terms of their attributes or features. Analyzing the data from DCE surveys allows the estimation of quantitative weights that can be used to measure the strength of preferences for different attributes of screening programs. The results can be used to rank the attributes of screening tests in order of preference and to test the impact of respondent characteristics such as risk perceptions, experience with CRC screening, and exposure to information treatments on preferences for these attributes. By understanding people’s preferences for different attributes and what affects those preferences, effective CRC screening programs--including effective communication messages--can be developed and implemented to address consumer needs and preferences (Pignone et al., 2014; Viney et al., 2002). We identified 15 DCE studies examining preferences for CRC screening tests among eligible populations. Those studies provided useful information for the design of the current survey, but they were not directly applicable to CDC’s goal of identifying public health strategies to promote the use of CRC screening tests and developing public health messages.
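To illustrate how such weights are typically estimated, one standard specification (a sketch only; the analytical techniques planned for this study are described in Section B.2 and may differ) models each choice under a random utility framework and estimates the attribute weights with a conditional logit model:

U_{ij} = \beta' x_{ij} + \varepsilon_{ij}, \qquad \Pr(i \text{ chooses test A}) = \frac{\exp(\beta' x_{iA})}{\exp(\beta' x_{iA}) + \exp(\beta' x_{iB})}

where x_{ij} is the vector of attribute levels describing test j in the choice shown to respondent i, \beta is the vector of preference weights to be estimated, and \varepsilon_{ij} is a random error term. Ratios of estimated weights (for example, an attribute coefficient divided by the absolute value of the cost coefficient) give marginal willingness-to-pay values that can be used to rank the attributes.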
CDC requests OMB approval to collect information from adults ages 50-75 years about their preferences and values regarding CRC screening tests. Information to be collected will include: measures of individuals’ preferences for different characteristics of screening tests; measures of the impact of different public health messages (two different fact sheets) on colon cancer screening test preferences; and measures of screening behavior, barriers to screening, and perceptions of CRC risk. The public health community needs more data on the factors that lead people not to get screened. Do people underestimate their risk? Do they think the tests are not accurate enough? Do people think that the screening test will be so uncomfortable that they refuse to get one? Are the costs of the tests, both in money and time, too high? Can simple public health messages affect people’s willingness to get screened? Information on these questions will help policy makers, health care professionals and public health officials better understand patient preferences and will help improve development, implementation and dissemination of effective programs (including effective education materials and communication strategies) to promote CRC screening.
Information will be collected from a sample of 2,000 eligible adults through a Web-based survey administered by GfK Knowledge Networks (KN). Respondents will be randomly selected from the KN KnowledgePanel®. CDC is authorized to conduct this information collection under Section 301 of the Public Health Service Act (42 U.S.C. 241) (Attachment 1). The Colorectal Cancer Screening Survey is based on a DCE survey. The survey instrument will consist of a fixed portion of core questions that are presented to all respondents and two variable portions. The first variable portion is an information treatment in which respondents will be randomly assigned to receive no additional information or to view one of two information sheets on CRC screening developed as part of CDC’s Screen for Life program. The second variable portion will consist of 5 DCE questions that ask respondents to choose between different hypothetical CRC screening tests. Participants in previous similar studies have completed similar tasks (for example, see Marshall, McGregor and Currie, 2010, for a summary of DCE surveys focusing on CRC screening). The hypothetical CRC screening tests are described by a set of attributes, and the levels of these attributes will be varied to create different hypothetical tests (see Exhibit A.1 for a summary of attributes and levels). The attribute levels were reviewed by clinical experts at CDC. The attribute levels for sensitivity are based on Zauber et al. (2008).
Because there are too many combinations of attribute levels to evaluate the entire set, DCE surveys use experimental design programs to create a limited set of questions that provide enough variation to estimate the weights respondents place on different attribute levels. We will use a standard experimental design program (NGene; ChoiceMetrics, 2012) to create ten sets of 5 DCE questions. The 5 DCE questions constitute a variable portion of the survey instrument. The design will balance the number of times each level appears alone and with other attribute levels, and there will be no dominated pairs (pairs in which one test has better levels for all attributes). Note that the first two attributes will be presented on separate rows to make the information easier to read; however, they will be treated as a single attribute. The first level of “What can the test find?” will always be presented with the first level of “How often do you take the test?”, and similarly for the second and third levels.
Exhibit A.1. Attributes and Levels
Attribute | Levels
What can the test find? |
How often do you take the test? |
Can the test remove cancer and polyps? |
Preparation before the test |
Discomfort and activity limitations during and after the test |
Out-of-pocket cost to you per test |
Two randomizations are built into the experimental design:
1. Each respondent will be randomly assigned to one of the ten sets of 5 DCE questions (the variable portion of the survey instrument).
2. Each respondent will be randomly assigned to one of three information treatments:
I. a control group that receives no additional information about CRC screening;
II. a treatment group that receives a “No Excuses” educational flyer designed by the CDC Screen for Life program to dispel many common reasons for not getting a colonoscopy; or
III. a treatment group that receives a two-page Fact Sheet about CRC and screening options designed by the CDC Screen for Life program.
The first randomization will allow us to determine the rate at which respondents are willing to trade off one attribute for another and to rank the importance of the various attributes and changes in attribute levels. The second randomization will allow us to assess the impact of the “No Excuses” flyer and the Fact Sheet on respondent preferences.
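The two randomizations are straightforward to implement. The Python sketch below is illustrative only and assumes equal assignment probabilities, which is consistent with the burden estimates in Exhibit A.4A; the actual assignment will be handled by KN's survey platform.

import random

N_BLOCKS = 10                                    # ten sets of 5 DCE questions
TREATMENTS = ["control", "no_excuses_flyer", "fact_sheet"]

def assign(respondent_id, rng=random):
    """Independently assign one DCE block and one information treatment."""
    return {"respondent": respondent_id,
            "block": rng.randint(1, N_BLOCKS),    # randomization 1
            "treatment": rng.choice(TREATMENTS)}  # randomization 2

for r in range(1, 4):
    print(assign(r))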
CDC has contracted with RTI International in Research Triangle Park, NC, to conduct a self-administered, Web-based survey with GfK Knowledge Networks (KN) KnowledgePanel® as the data collection partner (subcontractor to RTI International). Data will be maintained in de-identified form by RTI and CDC in accordance with IRB approvals for this study. The online survey sample of 2,000 respondents will be drawn from KnowledgePanel® members ages 50 to 75 in the general U.S. adult population. A pretest of study procedures will be conducted with approximately 30 respondents prior to fielding the final survey.
The proposed experimental design will assess 3 information treatments and 10 blocks of DCE questions. Each respondent will be randomized to one information treatment and one block of DCE questions. As a result, the survey consists of fixed portions (in which the same information or question is presented to all respondents) and variable portions that depend on the respondent’s assignment to information treatment and block. Screen shots of an example survey are provided in Attachment 2, Overview of Survey and Screen Shots. The screen shot file is annotated with comments that explain which screen shots are part of the fixed instrument and which are part of the variable sections of the instrument. For demonstration purposes, the variable portion of Attachment 2 shows the complete set of DCE questions presented to respondents in Block 1. A summary of the questions applicable to respondents in Blocks 2–10 is provided in Attachment 3, Experimental Design and 10 Blocks of DCE Questions.
This section presents a description of the Colorectal Cancer Screening Survey and its risks and benefits, and asks respondents whether they agree to take the survey.
The text introduces CRC, including key risk factors. It asks participants a set of background questions regarding their own history of colon cancer and colonoscopies and that of any relatives or acquaintances. The text on page 7 comes from http://www.cdc.gov/cancer/colorectal/basic_info/facts.htm and the image comes from http://digestive.niddk.nih.gov/ddiseases/pubs/virtualcolonoscopy/.
The questions will be used to help calculate objective and subjective measures of the respondent’s risk of CRC. The responses will be used to summarize information about the respondents and as variables in the analysis of the DCE questions.
Respondents are introduced to the attributes of the hypothetical CRC screening tests. The attributes are: what the test can find and how often it should be taken, whether the test can remove cancer and polyps, preparation before the test, discomfort and activity limitations during and after the test, and cost of the test. Each attribute has multiple possible levels. Participants are asked a series of questions related to the attributes. The questions collect information that will be used to assess respondents’ perceptions of screening attributes, identify potential barriers to screening, and break up the text to make the survey more interesting to the respondent. The responses will be used to summarize information about the respondents and as variables in the analysis of the DCE questions.
The survey will have three randomly assigned information treatments. A control group will get no additional information. Two treatment groups will get different information sheets informing people why they should get screening tests. The survey includes two different information sheets: “No Excuses” flyer and a Standard Fact Sheet. The “No Excuses” flyer dispels many common reasons why people fail to get a colonoscopy. The Standard Fact Sheet provides information on benefits associated with CRC screening, and risks of CRC.
The two information sheets were developed by CDC as part of the Screen for Life materials for public health awareness campaigns (http://www.cdc.gov/cancer/colorectal/sfl/print_materials.htm). The information treatments are included to test whether additional information about CRC and screening tests will make respondents more likely to say they will get tested in the future and whether the information will change their preferences over the attributes of CRC screening tests in the DCE questions.
The survey instrument includes 5 DCE questions (labeled Choice 1 to Choice 5 in Attachment 2). Respondents will be randomly assigned one of the ten sets of 5 DCE questions that present a choice between two hypothetical CRC screening tests. The tests are described by the attributes presented in Section 3 of the survey (see Exhibit A.1). The attribute levels will vary across respondents according to the experimental design. Attachment 3 contains the experimental design showing the attribute levels for all ten sets of 5 questions.
Before the DCE questions, we included a script emphasizing the potential for hypothetical bias, a bias that arises when people answer questions about hypothetical situations differently than if they faced an identical real choice. Such scripts, known as “cheap talk” scripts, have been found in some studies to reduce hypothetical bias in responses (for example, see Cummings and Taylor, 1999 and Landry and List, 2007).
The DCE questions ask the respondent to select their preferred test. After each question, there are 3 follow-up questions.
First, the respondent is asked whether they would get the test they preferred if it were the only test available.
Next, the respondent is asked how certain they are of their response. Past research suggests that highly certain responses are associated with lower hypothetical bias (see Champ et al., 1997, who used voluntary donations to an environmental good, and Blumenschein et al., 2008, who looked at a diabetes management program). The responses will be used to assess the sensitivity of the results to different assumptions about whether a respondent would really get the test or not based on their stated certainty.
Finally, if the preferred test is recommended for every year, respondents are reminded of the cost and asked whether they would really get the test every year. This question was added after pretesting suggested that respondents were ignoring that the recommended frequency was yearly and that the cost was high.
The responses to the DCE questions will be analyzed to provide quantitative preference weights to rank the attributes and attribute levels and to test for significant differences between attribute levels. The weights can be used to rank tests with different characteristics and to predict the percentage of the sample that would get a test with a specific set of characteristics. By interacting responses to other questions with the DCE responses, we can test whether respondents’ characteristics and opinions or the information treatments influence preferences for tests with different attributes and the probability of getting a test.
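As an illustration of how estimated weights could be used to rank tests and predict uptake, the Python sketch below applies a conditional logit choice probability with purely hypothetical coefficient values and attribute codings; the real weights will be estimated from the survey data using the methods described in Part B.

import math

# Hypothetical preference weights: positive for desirable features,
# negative for out-of-pocket cost (utility per dollar). Illustrative only.
BETA = {"finds_more": 0.8, "removes_polyps": 0.5,
        "less_prep": 0.3, "less_discomfort": 0.4, "cost": -0.01}

def utility(test):
    """Deterministic utility of a test described by coded attribute values."""
    return sum(BETA[k] * v for k, v in test.items())

def prob_choose(test_a, test_b):
    """Conditional logit probability of choosing test A over test B."""
    ua, ub = utility(test_a), utility(test_b)
    return math.exp(ua) / (math.exp(ua) + math.exp(ub))

# Example: a colonoscopy-like test versus a stool-test-like alternative
test_a = {"finds_more": 1, "removes_polyps": 1, "less_prep": 0,
          "less_discomfort": 0, "cost": 100}
test_b = {"finds_more": 0, "removes_polyps": 0, "less_prep": 1,
          "less_discomfort": 1, "cost": 20}
print(round(prob_choose(test_a, test_b), 2))  # about 0.45 with these weights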
Respondents are asked questions about past CRC screening tests that have been recommended to them, which tests they got, and the reasons they have or have not received recommended screening tests. These questions will provide information on what tests doctors are recommending, what tests respondents are getting, respondents’ reported reasons for not getting tests, and their perceptions of what it is like to get a colonoscopy. The responses will be used as variables in the analysis of the DCE questions and to help CDC understand factors affecting screening behavior.
As part of the survey, we will also collect revealed preference data on actual experiences with CRC screening to compare with the responses to the SP DCE questions. Participants who have had colonoscopies are asked to detail their experiences with a colonoscopy based on the attributes used in the DCE questions. Respondents who have not had colonoscopies are asked to use the attributes from the DCE questions to describe what they think the test will be like. The data on actual experiences and expected experiences can be analyzed jointly with the DCE data to look at how experience affects preferences.
The final section includes questions about health and health behaviors. These questions will be used to calculate an objective measure of the respondent’s risk of CRC and to provide information on overall health and access to medical care. The responses will be used to characterize the survey sample and as variables in the analysis of the DCE data.
The survey includes a number of questions that are also used in other surveys. Section A.4 contains a list of the questions that came from other surveys and the sources. The questions were included to provide external comparisons and broaden the usefulness of the survey. The questions from the CRC risk calculator were included to provide an objective measure of CRC risk for each respondent.
Target respondents are adults ages 50 to 75 years. KN’s Web-based survey will only be accessible to participants of the study. There is no Website content directed at children under 13 years of age.
The information from the Colorectal Cancer Screening Survey will help CDC, the wider public health community and health care providers better understand the preferences eligible populations have regarding CRC screening. Specifically, the survey will
Provide quantitative preference weights for the attributes of screening tests to assess how people view the attributes and tests and the rate at which they would be able to trade-off improvements in one attribute over another.
Understand how respondent’s objective CRC risk and their subjective perception of that risk affect their desire to get screened.
Understand how past screening behavior and perceptions of screening tests affect willingness to be screened and the weight they place on different attributes of screening tests.
Assess the impact of fact sheets on CRC screening developed by CDC as part of the Screen for Life program.
Identify barriers to screening in terms of test attributes, attitudes and past experience.
The information from this survey will be useful in the design, development, and implementation of public health interventions that can improve cancer screening rates. Under the Affordable Care Act, most people will have access to CRC screening at no cost, although there will still be some cases in which people will have a co-pay. Because cost will be an attribute of the screening tests included in the survey, the survey will provide information on the extent to which cost is a significant barrier to screening relative to other test attributes. Currently, CDC devotes most of its funding to population-based, evidence-driven screening promotion activities, such as group education, and less to the actual provision of screening tests. The information from the survey will help identify barriers and educational opportunities to address through screening promotion programs.
The information will be useful not just for CDC but for other researchers and agencies as well. For example, the Centers for Medicare and Medicaid Services is also interested in increasing screening among the Medicare population. While Medicare covers several CRC screening tests, the uptake of screening remains below the 2020 target of 70.5%. Understanding individuals’ preferences regarding CRC screening would be helpful in eliminating existing barriers and implementing appropriate programs to increase screening, which has been shown to reduce mortality and, in some instances, prevent cancer. The planned CDC study will produce valuable information to assist in optimizing screening for the general U.S. eligible population.
The survey instrument will not collect Information in Identifiable Form (IIF). The data collection partner, KN, maintains IIF on its Web survey panel (the KnowledgePanel®). This information is not being collected anew for this research study. Under subcontract to RTI, KN will use contact information to draw a nationally representative survey sample in the target age range and to issue unique, secure invitations to sampled individuals. IIF will not be included in the data released to RTI or CDC. The survey collects only non-sensitive information: sociodemographic characteristics, past CRC screening behavior, and prospective questions about future screening.
Participants who become part of KN’s panel, and therefore become eligible to complete surveys, are given a copy of the KN Privacy and Terms of Use Policy outlining what information is collected, how it is used, and the data security procedures in place. The document also explains how panel members can update, restrict, or remove their personal information and how they can quit the panel and ask for their data to be destroyed.
The data collection partner, KN, also allows respondents to suspend and later resume a survey if they feel uncomfortable or need to take a break in the middle of the administration. All respondents will be at least 50 years of age, and participation is voluntary. Finally, because respondents use their own computers (or computers provided by KN) to complete surveys, they are able to complete the survey at their leisure, selecting the most convenient time and comfortable setting.
This study will rely on Web surveys to be self-administered at home on personal computers. The Web panel we are using for this study is KN’s KnowledgePanel®. Attachment 2 provides screen shots from the survey to provide a sense of how the layout will look to respondents.
Utilization of this online panel provides a number of methodological advantages including increased accuracy in measurement of key variables of interest, attractive sample characteristics, and reduced burden on study participants. This approach also yields significant cost efficiencies compared to other modes of data collection such as telephone surveys. These advantages include but are not limited to:
Increased privacy (compared to telephone interviewing) reduces vulnerability to bias from socially desirable survey responses, particularly on subjects such as compliance with recommended health screening tests. Surveys are self-administered in a private setting and respondents do not speak to human interviewers as they would with telephone surveys.
Flexible and timely data collection—because KN surveys do not involve human interviewers or the ensuing requirements for interviewer training and quality control, surveys can be launched quickly and at lower cost.
Significant cost savings over traditional telephone surveys (due to lack of human interviewers and interviewer training).
Finally, KnowledgePanel® has been used for a number of similar evaluation studies, including CDC media evaluation studies led by RTI. OMB has approved at least 14 studies that have utilized KN’s KnowledgePanel®.
In a literature review conducted to design the current survey, we identified 14 SP surveys that looked at CRC screening published between 1990 and July 2013, an additional article published more recently (Pignone et al., 2014), and two review articles on CRC SP studies (Marshall, McGregor and Currie, 2010 and Ghanouni et al., 2013). Out of the 15 surveys, only three were conducted in the U.S. Because of differences in medical practice and insurance reimbursement, studies conducted in other countries are of limited use for designing U.S. programs. For example, several studies conducted in France focused on stool tests and did not include colonoscopies because they are not widely administered in France. Of the three studies conducted in the U.S., two used a forced-choice design that asks respondents to select one of the screening options and does not offer the choice of not getting a test. With the forced choice design, the researcher does not learn as much about why individuals choose not to get screened, which is one of CDC’s goals. The studies provided useful information in the design of the current survey, but they were not directly applicable to the CDC’s goal to identify public health strategies to promote the use of CRC screening tests. Unlike the surveys identified in our review of the literature, the survey described in this request incorporates CDC education materials to test the impact on choices, a comparison of actual experiences with colonoscopies (revealed preference data) with the attribute levels used in the DCE survey (SP data), and measures of subjective and objective CRC risk as an explanation for choices.
CDC has obtained OMB approval for a separate but complementary information collection (Impact Evaluation of CDC’s Colorectal Cancer Control Program, OMB No. 0920-0992, exp. 9/30/2016). In 2013 and 2015, CDC will collect information in several states to evaluate the impact of a program to promote CRC screening, the Colorectal Cancer Control Program (CRCCP). The CRCCP evaluation and the Web-based Colorectal Cancer Screening Survey proposed in this ICR have different aims; however, findings from the two surveys will complement and reinforce each other. Both surveys will help CDC understand factors that influence whether individuals get screened for CRC. Exhibit A.2 describes the differences between the two surveys.
Exhibit A.2. Comparison of the Colorectal Cancer Screening Survey and the CRCCP Evaluation
Survey | Colorectal Cancer Screening Survey | Colorectal Cancer Control Program (CRCCP) Evaluation Survey
Geographic coverage of sample | National sample | 6 states: 3 CRCCP grantee states (Alabama, Nebraska, Washington); 3 non-CRCCP states (Tennessee, Oklahoma, Wisconsin)
Sample age range and size | Age of sample: 50–75; Sample size: 2,000 | Age of sample: 50–75; Sample size: 3,200 respondents
Mode of survey administration | Web-based survey administered to a random sample from the KN KnowledgePanel® | Telephone
Number of data collections | One-time data collection | Two surveys administered pre- and post-intervention (Fall 2013 and Fall 2015)
Purpose and approach | Understand preferences for CRC screening tests and barriers to getting screened. Employs the DCE survey approach to provide a quantitative ranking of CRC screening test attributes and to test the impact of respondents’ characteristics, risk perceptions, CRC screening experience, and information treatments on preferences for CRC screening tests. | Support a rigorous impact evaluation of the CRCCP, a new public health model intended to increase population-level CRC screening rates.
Questions | Survey includes questions on individual knowledge, attitudes, and CRC screening behavior, plus: | The population survey will provide data on proximal outcomes of interest (e.g., individual-level knowledge and attitudes about CRC screening) and data on CRC screening behavior.
The Colorectal Cancer Screening Survey will incorporate some of the questions from the survey used to evaluate CDC’s CRCCP (see Exhibit A.3 below for a list of the questions from other surveys incorporated into this survey). Together, the data from the two surveys will provide a more complete picture of screening preferences and the effectiveness of public health interventions to raise screening rates.
Some of the questions in the Colorectal Cancer Screening Survey are drawn from other survey instruments. Using questions from existing surveys helps provide external comparisons for sample characteristics and survey results. Exhibit A.3 provides information on questions taken from other surveys.
Exhibit A.3. Sources for Questions
Questions or Text | Source
The text and image on Attachment 2 page 4 describing CRC | Text: http://www.cdc.gov/cancer/colorectal/basic_info/facts.htm; Image: http://digestive.niddk.nih.gov/ddiseases/pubs/virtualcolonoscopy/
Questions 1, 4, 5, 73–85 | The Colon Cancer Risk Calculator developed by the Siteman Cancer Center at Barnes Jewish Hospital and Washington University School of Medicine (http://www.yourdiseaserisk.wustl.edu/YDRDefault.aspx?ScreenControl=YDRGeneral&ScreenName=YDRcolon)
Questions 7, 40, 42, 45–47, 51 (7, 42, 47, and 51 slightly revised from NHIS wording) | 2010 National Health Interview Survey (NHIS; OMB No. 0920-0914, exp. 1/31/2013) questions about colon cancer and screening tests: ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/Survey_Questionnaires/NHIS/2010/English/qcancer.pdf
Questions 55–59 | Survey for the Impact Evaluation of CDC’s Colorectal Cancer Control Program (CRCCP), OMB No. 0920-0992
Questions 64–67 | CDC Healthy Days items from HRQOL-4
Questions 68–72 | 2010 NHIS questions about where the respondent gets medical treatment (OMB No. 0920-0914, exp. 1/31/2013): ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/Survey_Questionnaires/NHIS/2011/English/qadult.pdf
The survey will only be sent to households, and will not impact small businesses or other small entities.
This is not a periodic data collection. If the Agency did not conduct the survey, CDC would lack quantitative information on public preferences for CRC screening that could be used to design more effective screening programs and more effective public and provider education materials. Improving CDC’s understanding of screening preferences will help public health officials respond to the large, preventable burden of disease from cancer that remains despite decades of improved cancer screening and therapies. The survey will examine how changes in messaging and in the types of tests offered can influence people’s willingness to be screened. As outlined in Section A.1, this has large impacts on health care spending at both the private and public levels.
All information collection and recordkeeping activities in this submission are consistent with the guidelines in 5 CFR 1320.6.
A. A 60-day Federal Register Notice was published in the Federal Register on September 5, 2013, vol. 78, no. 172, page 54653 (see Attachment 4). One public comment was received and acknowledged. See Attachment 5 for a summary of public comments and CDC’s response.
B. Efforts to Consult Outside the Agency
The survey was reviewed by several other federal agencies. Dr. Louis B. Jacques (Director, Coverage & Analysis Group, Center for Clinical Standards and Quality, Centers for Medicare & Medicaid Services) provided the following comment:
“Since incidence and prevalence of colorectal cancer increase with age, screening for CRC is important for the Medicare population. While Medicare covers several CRC screening tests, the uptake of screening remains below the 2020 target of 70.5%. Understanding why individuals do not undergo screening would be helpful to eliminating existing barriers and implementing appropriate programs to increase screening which has been shown to reduce mortality and, in some instances, prevent cancer. The planned CDC study would produce valuable information to assist in optimizing screening for the general U.S. population and particularly for the Medicare population.”
Dr. Ann M. Geiger (Chief, Health Services and Economics Branch, Applied Research Program, Division of Cancer Control and Population Sciences, National Cancer Institute) provided a letter of support stating:
“Thank you for the opportunity to review the Colorectal Cancer Screening Survey proposal. The study you propose will help address a critical question in the literature: what characteristics of the different colorectal cancer screening modalities do patients prefer? National Health Interview Survey data (2010) show that 58.6% of adults 50 and older were not screened (well below the 2020 target of 70.5%). The USPSTF recommends a range of screening modalities, including blood stool testing and colonoscopy, but the screening modality used most frequently and increasing most rapidly is colonoscopy. Colonoscopy is expensive and invasive, and may not be preferred by all adults in the age-eligible range for screening. The literature provides little guidance about why some individuals do not undergo CRC screening or what test they would prefer. The proposed study would produce valuable information to help the scientific community better understand patient preference in the general U.S. population and deserves strong support.”
All respondents will be recruited from an online research panel (the “KnowledgePanel®”) maintained by the data collection partner, KN. Research participants will be offered a small non-cash incentive by KN to complete the proposed research survey. KN requires that any survey to its panelists provide such “points,” which can be redeemed for raffle entries, various gifts, or cash at regular intervals. The total monetary equivalent of the points for this one-time survey will not exceed $10. This honorarium is intended to recognize the time burden placed on the participants, encourage their cooperation, and to convey appreciation for contributing to this important study. Numerous empirical studies have shown that honoraria can significantly increase response rates (Abreu & Winters, 1999; Shettle & Mooney, 1999). The decision to use honoraria for this study is based on findings reported in current research publications and several projects conducted by KN and RTI, which found that use of an honorarium increases response rates among adults. All participant remuneration will be approved by the RTI International Institutional Review Board (IRB, see Attachment 6 for RTI IRB approval).
All procedures have been developed, in accordance with federal, state, and local guidelines, to ensure that the rights and privacy of participants are protected and maintained. The RTI IRB reviewed and approved all instruments, informed consent materials, and data collection and management procedures (Attachment 6 for RTI IRB approval).
Privacy Act Determination
This submission has been reviewed by CDC’s National Center for Chronic Disease Prevention and Health Promotion and CDC’s Information Collection Review Office, which have determined that the Privacy Act does not apply. Although identifiable information about respondents will be used to facilitate initial contact and follow-up, the identifying information is maintained in a secure, pre-existing records system owned by KN. The response data transmitted from KN to RTI International, the data analysis contractor, will be de-identified prior to transmission and analysis.
All data files on multi-user systems will be under the control of a database manager, with access limited to project staff on a “need-to-know” basis only. KN has developed a secure transmission and collection protocol, including the use of system passwords and two separate sets of firewalls, to prevent unauthorized access to the system. Neither questionnaires nor survey responses are stored on the KN-provided laptops; questionnaires are administered dynamically over the Internet. Survey responses are written in real time directly to KN’s server and are then stored in a local Oracle database. The database is protected primarily through firewall restrictions, password protection, and 128-bit encryption technology. Individual identifying information will be maintained separately from completed questionnaires and from computerized data files used for analysis. A detailed description of KN’s privacy safeguards is provided with this submission (Attachment 7). No respondent identifiers will be contained in reports to CDC, and results will only present data in aggregate.
All respondents will be assured that the information they provide will be maintained in a secure manner and will be used only for the purpose of this research. Please refer to the assurances and study descriptions that are included in the Consent Process (Attachment 2 pages 4-6). Respondents will be told that the information obtained from all of the surveys will be combined into a summary report so that details of individual questionnaires cannot be linked to a specific participant.
Respondents will participate on a voluntary basis. The voluntary nature of the information collection is described in the introductory section of the Consent Process (see Attachment 2 pages 4-6) and the initial contact email (Attachment 8).
The majority of questions asked will not be of a sensitive nature. However, it will be necessary to ask some questions that may be considered to be of a sensitive nature in order to assess specific health behaviors related to cancer screening tests and cancer history in the respondent’s family. These questions are essential to the objectives of this information collection. To address any concerns about inadvertent disclosure of sensitive information, respondents will be fully informed of the applicable privacy safeguards.
Participants will be provided with a specific toll-free phone number (linking directly to the RTI IRB Office) to call in case they have a question or concern about sensitive issues.
Web surveys are entirely self-administered and maximize respondent privacy without the need to verbalize responses.
Finally, KN will only provide data to RTI and CDC with all identifiers removed.
Information collection will include a web-based survey instrument (see Colorectal Cancer Screening Survey, Attachment 2, which also provides an overview of the instrument). The survey instrument will consist of a fixed portion of core questions that are presented to all respondents, and two variable portions. The first variable portion is an information treatment where respondents will be randomly assigned to get no additional information or to view one of two information sheets on CRC screening developed as part of CDC’s Screen for Life program. The second variable portion will consist of 5 DCE questions that ask respondents to choose between different hypothetical CRC screening tests. Attachment 3 provides a summary of the experimental design and 10 blocks of DCE questions that relate to the major variable portion of the survey instrument.
Exhibit A.4A summarizes the burden estimates for data collection. For the pretest, we assume that 30 respondents (approximately 70% of those sent an email invitation) will take the pretest, which will take approximately 22 minutes to complete for respondents who do not receive one of the two information treatments and 25 minutes for those who receive one of the two information treatments. For the final survey, we assume that approximately 70% of those invited (2,000 respondents) will complete the survey, which will take approximately 22 minutes for respondents who do not receive one of the two information treatments and 25 minutes for those who receive one of the two information treatments.
Exhibit A.4A. Estimated Annualized Burden to Respondents
Type of Respondent | Form Name | No. of Respondents | No. of Responses per Respondent | Average Burden per Response (in hr) | Total Burden (in hr)
Pre-Test Participants | Colorectal Cancer Screening Survey – control group (no information treatment) | 10 | 1 | 22/60 | 4
Pre-Test Participants | Colorectal Cancer Screening Survey – information treatment groups | 20 | 1 | 25/60 | 8
Study Participants | Colorectal Cancer Screening Survey – control group (no information treatment) | 667 | 1 | 22/60 | 245
Study Participants | Colorectal Cancer Screening Survey – information treatment groups | 1,333 | 1 | 25/60 | 555
Total | | | | | 812
Total Cost: Exhibit A.4B summarizes the estimated value of the time that respondents dedicate to participating in the survey. The estimate is based on an average hourly wage of $21.77 (see BLS Employer Costs for Employee Compensation—December 2013 at http://www.bls.gov/news.release/ecec.t01.htm). Respondents will spend an annual total of 812 hours at a cost of $17,677.
Exhibit A.4B. Estimated Annualized Cost to Respondents
Type of Respondent | Form Name | No. of Respondents | Total Burden (in hr) | Hourly Wage Rate | Total Cost
Pre-Test Participants | Colorectal Cancer Screening Survey – control group (no information treatment) | 10 | 4 | $21.77 | $87
Pre-Test Participants | Colorectal Cancer Screening Survey – information treatment groups | 20 | 8 | $21.77 | $174
Study Participants | Colorectal Cancer Screening Survey – control group (no information treatment) | 667 | 245 | $21.77 | $5,334
Study Participants | Colorectal Cancer Screening Survey – information treatment groups | 1,333 | 555 | $21.77 | $12,082
Total | | | | | $17,677
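The totals in Exhibits A.4A and A.4B follow from simple arithmetic; the short Python check below reproduces them using the figures stated in the exhibits.

# Reproduce the burden-hour and cost totals in Exhibits A.4A and A.4B.
WAGE = 21.77  # average hourly wage (BLS, December 2013)

rows = [(10, 22),     # pretest, control group (respondents, minutes)
        (20, 25),     # pretest, information treatment groups
        (667, 22),    # study, control group
        (1333, 25)]   # study, information treatment groups

hours = [round(n * minutes / 60) for n, minutes in rows]
print(hours)                                   # [4, 8, 245, 555]
print(sum(hours))                              # 812 total burden hours
print(round(sum(h * WAGE for h in hours)))     # 17677 total respondent cost ($)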
None.
This information collection will occur in 2014; thus, the annualized cost to the Federal government is estimated to be $208,641 (Exhibit A.5). This information collection is funded through a contract with RTI International. The total estimated costs attributable to this data collection under the contract with RTI are $200,019. This includes the full cost of data collection, data analysis, survey development, and project management (literature review, coordination with CDC, instrument development, reporting, RTI IRB and progress reporting, and project management). Two CDC health economists are responsible for overseeing the content of this information collection, overall project management, and coordination with other CDC activities at a cost of $8,622.
Exhibit A.5. Cost to Government
Itemized Cost to the Federal Government
CDC Staff Member | Annual Salary | % Allocation (Annualized) | Cost
GS-14 | $107,770 | 8% | $8,622
Subtotal, CDC Personnel | $8,622
Contractual Costs for Data Collection and Management (RTI) |
Subtotal, Contractual Costs | $200,019
Total | $208,641
This is a new data collection.
After data collection is complete, RTI will prepare a final report presenting the results from the survey. Section B.2 describes the analytical techniques that will be used to summarize the results. It should be noted that while the KN panel’s recruitment procedures are designed to approximate a nationally representative sample, the limitations associated with the panel decrease our capacity to draw nationally representative conclusions. Although KN panelists must be invited to participate and cannot volunteer on their own, there may be systematic differences between individuals who choose to join an ongoing Internet panel and those who do not. In addition, while KN goes to great lengths to ensure that persons who are not Web users are included in the sample, the Web medium may introduce some bias toward panelists who are more comfortable with Web-based communication. Therefore, evaluation results must be interpreted with appropriate caution regarding our ability to generalize the findings to the national population.
The time schedule for the project is presented in Exhibit A.6. The reporting and dissemination mechanism will consist of three primary components: (1) summary statistics (in the form of PowerPoint presentations and other briefings); (2) a report summarizing findings from this information collection; and (3) at least one peer-reviewed journal article that documents the preferences for CRC screening test attributes. In recognition of the aforementioned data limitations, all communications about the evaluation results via these publication formats will carefully enumerate and describe those data limitations and ensure that evaluation results are interpreted with appropriate care and caution.
Exhibit A.6. Project Time Schedule
Date | Project Activity
June 2014 | Survey to KN for programming
June 2014 | Pretest of survey on KN
July 2014 | Field final survey
August 2014 | Data analysis
September 2014 | Draft report and manuscript
September 2014 | Final report and manuscript submitted to peer-reviewed journal
Not Applicable.
There are no exceptions to the certification.
Abreu, D. A., & Winters, F. G. (1999). Using monetary incentives to reduce attrition in the survey of income and program participation. Proceedings of the Survey Research Methods Section, American Statistical Association, Washington, DC, 17, 533–538.
Blumenschein, K., Blomquist, G. C., Johannesson, M., Horn, N., & Freeman, P. (2008). Eliciting willingness to pay without bias: evidence from a field experiment. The Economic Journal, 118(January), 114–137.
Champ, P. A., Bishop, R. C., Brown, T. C., & McCollum, D. W. (1997). Using donation mechanisms to value nonuse benefits from public goods. Journal of Environmental Economics and Management, 33(2), 151–162.
ChoiceMetrics. Ngene 1.1.1 User Manual & Reference Guide. Australia, 2012.
Community Preventive Services Taskforce. Cancer Prevention and Control 2012; Available from: http://www.thecommunityguide.org/cancer/index.html. Accessed on April 11, 2014.
Cummings, R. G., & Taylor, L. O. (1999). Unbiased value estimates for environmental goods: cheap talk design for the contingent valuation method. American Economic Review, 89(3), 649–665.
Department of Health and Human Services. Healthy People 2020 Disparities in Clinical Preventive Services. 2013. Available from: http://www.healthypeople.gov/2020/LHI/clinicalPreventive.aspx?tab=data. Accessed on April 11, 2014.
Klabunde, C. N., Joseph, D. A., King, J. B., White, A., & Plescia, M. (2013). Vital signs: colorectal cancer screening test use - United States, 2012. MMWR Morbidity and Mortality Weekly Report, 62(44), 881–888.
Landry, C., & List, J. A. (2007). Using ex ante approaches to obtain credible signals for value in contingent markets: evidence from the field. American Journal of Agricultural Economics, 89(2), 420–429.
Marshall, D., McGregor, S. E., & Currie, G. (2010). Measuring preferences for colorectal cancer screening: what are the implications for moving forward? Patient, 3(2), 79–89. doi: 10.2165/11532250-000000000-00000. PubMed PMID: 22273359.
Pignone, M. P., Crutchfield, T. M., Brown, P., Hawley, S. T., Laping, J., Lewis, C. L., Lich, K. H., Richardson, L., Tangka, F., & Wheeler, S. B. (2014). Using a discrete choice experiment to inform the design of programs to promote colon cancer screening for vulnerable populations in North Carolina. In press.
Planning Committee on Estimating the Contributions of Lifestyle-Related Factors to Preventable Death, Board on Population Health and Public Health Practice (BPH), Institute of Medicine (IOM). (2005). Estimating the contributions of lifestyle-related factors to preventable death: a workshop summary planning committee on estimating the contributions of lifestyle-related factors to preventable death. Washington, DC: IOM.
Shettle, C., & Mooney, G. (1999). Monetary incentives in U.S. government surveys. Journal of Official Statistics, 15(2), 231–250.
U.S. Cancer Statistics Working Group. (2013). United States Cancer Statistics: 1999–2009 Incidence and Mortality Web-based Report. Atlanta (GA): Department of Health and Human Services, Centers for Disease Control and Prevention, and National Cancer Institute. Retrieved from http://www.cdc.gov/uscs.
Viney, R., Lancsar, E., & Louviere, J. (2002). Discrete choice experiments to measure consumer preferences for health and health care. Expert Review of Pharmacoeconomics & Outcomes Research, 2(4), 319–326.
Zauber, A. G., Lansdorp-Vogelaar, I., Knudsen, A. B., Wilschut, J., van Ballegooijen, M., & Kuntz, K. M. (2008). Evaluating test strategies for colorectal cancer screening: a decision analysis for the U.S. Preventive Services Task Force. Annals of Internal Medicine, 149(9), 659–669.