Quantitative Information in Direct-to-Consumer Television Advertisements
0910-NEW
SUPPORTING STATEMENT
Terms of Clearance:
None.
A. Justification
1. Circumstances Making the Collection of Information Necessary
Section 1701(a)(4) of the Public Health Service Act (42 U.S.C. 300u(a)(4)) authorizes the FDA to conduct research relating to health information. Section 1003(d)(2)(C) of the Federal Food, Drug, and Cosmetic Act (the FD&C Act) (21 U.S.C. 393(b)(2)(C)) authorizes FDA to conduct research relating to drugs and other FDA-regulated products in carrying out the provisions of the FD&C Act.
A previous FDA study found that simple quantitative information could be conveyed in direct-to-consumer (DTC) television ads in ways that increased consumers’ knowledge about the drug (OMB Control Number 0910-0663; “Experimental Study: Presentation of Quantitative Effectiveness Information to Consumers in Direct-to-Consumer (DTC) Television and Print Advertisements for Prescription Drugs”).1 However, this research tested only simple information (e.g., one clinical trial, comparison to placebo). Drug information can be much more complicated (e.g., complicated endpoints, multiple study arms). The following studies are designed to address the question of whether consumers can use more complicated information when assessing prescription drug information in DTC television ads. These studies will build on previous research by (1) examining more complicated quantitative information, (2) examining quantitative information for both benefits and risks, and (3) examining how visuals designed to represent efficacy interact with quantitative information.
2. Purpose and Use of the Information Collection
The purpose of this project is to gather data for the FDA to address issues surrounding the presentation of risk and benefit information in DTC television ads. Part of FDA’s public health mission is to ensure the safe use of prescription drugs; therefore, it is important to communicate the risks and benefits of prescription drugs to consumers as clearly and usefully as possible.
The objective of this project is to test consumers’ understanding of quantitative information about prescription drugs in DTC television ads. In Study 1, we plan to examine experimentally the presence and complexity of quantitative benefit and risk information in DTC television ads. We hypothesize that, replicating past studies, adding simple quantitative information about benefits and risks will lead to increased understanding among consumers. We will test whether adding complex quantitative information results in the same outcomes as simple quantitative information or whether adding complex information results in worse outcomes. In Study 2, we plan to examine experimentally the presence of quantitative benefit information and how the ad visually represents efficacy (by having no images, images that accurately reflect the improvement in health that could be expected with treatment, or images that overstate the improvement in health that could be expected with treatment). We hypothesize that overstated images of improvement will lead consumers to overestimate the drug’s efficacy; however, adding a quantitative claim may moderate this effect.
3. Use of Improved Information Technology and Burden Reduction
Automated information technology will be used in the collection of information for this study. One hundred percent (100%) of participants will self-administer the survey via the Internet, which will record responses and provide appropriate probes when needed. In addition to its use in data collection, automated technology will be used in data reduction and analysis. Burden will be reduced by recording data on a one-time basis for each participant, and by keeping surveys to less than 20 minutes.
4. Efforts to Identify Duplication and Use of Similar Information
We conducted a literature search to identify duplication and use of similar information. As noted above, there is a previous FDA study on simple quantitative efficacy information in DTC television ads. We are also aware of studies examining the inclusion of simple efficacy and risk quantitative information in DTC print ads. However, to our knowledge there is no research on the inclusion of more complex quantitative information, nor is there research on how quantitative efficacy information may interact with images of improvement used in DTC television ads.
5. Impact on Small Businesses or Other Small Entities
No small businesses will be involved in this data collection.
6. Consequences of Collecting the Information Less Frequently
The proposed data collection is one-time only. There are no plans for successive data collections.
7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
There are no special circumstances for this collection of information.
8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
In accordance with 5 CFR 1320.8(d), FDA published a 60-day notice for public comment in the FEDERAL REGISTER of October 13, 2015 (80 FR 61433). Four submissions to the docket were received. Two submissions called for direct-to-consumer prescription drug advertising to be banned; these submissions are outside the scope of the current project. The comments from the other two submissions, and our responses, follow.
Comment 1. The first suggestion was that FDA should research the health literacy of approved patient labeling before conducting research on DTC television advertising.
Response: FDA has a program of research that includes studies on both patient labeling2,3 and DTC television advertising. This study extends previous research and addresses issues unique to DTC television advertising (e.g., visual representations of efficacy). The public is exposed to information about prescription drugs via DTC television advertising,4 and this advertising has a public health impact.5 We disagree that research on approved patient labeling needs to be conducted before we study issues unique to DTC television advertising.
Comment 2. The second suggestion was that, because low-numeracy individuals are not well represented in online panels, we should implement mechanisms to help validate results across populations with varying levels of health literacy.
Response: We agree that numeracy may be a crucial variable in this study. We have added a second measure of numeracy (subjective numeracy) and a question on health literacy. We will use these measures to determine whether and how numeracy and health literacy affect our results. If our sample includes few individuals with low numeracy, we will note this as a limitation.
Comment 3. The third suggestion is to use a mixed-method approach, recruiting limited-literacy and low socio-economic participants for in-person administration of the study and using the Internet panel to gather a broad sample.
Response. We acknowledge that Internet administration is not perfect; we have chosen this method to maximize our budget. We will permit the survey to be taken on a variety of devices, but we are excluding phones because the stimuli cannot be fully viewed on a very small screen.
Comment 4. The fourth suggestion is to use frequencies rather than percentages in the questionnaire.
Response. A recent review of the literature did not support the view that frequencies are more widely understood than percentages.6 This review included two studies conducted in the context of DTC advertising.1,7 Given these findings, we plan to use percentages in the questionnaire.
Comment 5. The fifth suggestion is to add a single-item health literacy question to the screener.
Response. We agree this is an important measure and have added it to the questionnaire.
Comment 6. This comment requests further rationale for the selection of an older patient population and its impact on the generalizability of study findings to advertisements targeted for younger patient populations.
Response. Advertising studies often recruit participants who have or who are at risk for the medical condition being advertised to increase interest in the ad and motivation to pay attention to the ad. Older participants are more likely to be at risk for cataracts. In addition, older adults use more prescription drugs8 and watch more television than younger adults do.9 We will note that the study is not broadly generalizable when we report our findings.
Comment 7. This comment suggests including a video compatibility test to verify that participants can view the videos and precluding participants from taking the survey on a smartphone.
Response. We have added a video compatibility test to the study and will preclude participants from using phones.
Comment 8. This comment also sought clarification on which stimuli from Study 1 will be used in Study 2.
Response. The benefit information in Study 2 will be the “simple” claim from Study 1. Study 2 will not include quantitative risk information. This means that the same ad will be used in the “simple quantitative benefit claim/no quantitative risk claim” condition in Study 1 and the “quantitative benefit claim/no images of improvement” condition in Study 2.
Comment 9. This comment expresses concern that adding complex benefit information in Study 1 may cause the content to become unmanageable and suggests adding study arms with more or fewer risks and benefits to assess this.
Response. Based on this comment and peer reviewer feedback, we will manipulate the complexity of the quantitative efficacy claim by adding a second benefit outcome. We have revised the study design tables to reflect this (see Tables 1 and 2). The number of risks will be constant, but we will manipulate whether and how the frequencies of the risks are presented.
Comment 10. This comment recommended that all aspects other than the variable being tested be held constant across the different treatments.
Response. We agree with this recommendation. We will create one ad that will be the basis of all the stimuli. We will manipulate this base ad by adding quantitative benefit information, quantitative risk information, and/or images of improvement to create the different experimental conditions, while leaving other factors constant.
Comment 11. This comment recommends using scales with a neutral midpoint.
Response. There are advantages and disadvantages to including midpoints in scales.10,11 Based on responses from similar studies, we have decided to use scales without a midpoint. Instead, we have included a “don’t know” option for some items which may make participants’ responses easier to interpret than a neutral midpoint would.
Comment 12. This comment noted that without the stimuli it was difficult to tell whether the battery of questions measuring efficacy accuracy was redundant or inapplicable.
Response. We did not create the stimuli before the public notice so that the public and peer review comments, along with cognitive interviews and pretesting, could inform the creation of the stimuli. Based on peer review we refined our efficacy claims. We tailored the efficacy accuracy items to reflect the new claims. Some of these questions are designed to measure participants’ gist understanding of the drugs’ efficacy likelihood and magnitude.12 They are not redundant with the questions designed to measure participants’ verbatim understanding of the drugs’ efficacy likelihood and magnitude. As in previous research,1 participants in the control condition will not have the information to answer all the accuracy questions. Instead, this condition serves as a baseline with which to compare the experimental conditions. We added a “don’t know” option so that these participants can report that they do not know the answer.
Comment 13. This comment suggested re-ordering questions so that the perception and intention questions appeared before the questions about efficacy and risk information.
Response. Based on peer review, we moved the gist questions before the accuracy questions, but we did not move intentions and perceptions before gist and accuracy. We understand the value in obtaining intentions and perceptions unbiased by the other measures. However, we put the gist and accuracy measures first because they are our primary measures; therefore, we want to decrease potential memory decay and ensure the gist and accuracy measures are not biased.
Comment 14. This comment questioned whether three risk claim accuracy questions in Study 1 were redundant with each other and how the stimulus will list frequencies for the risks.
Response. We updated Table 1 to show how risks will be described in each condition. The terms “least common” and “most common” will not be used in the ads. The questions are not redundant. One question (previously Q17) asks participants to report the frequency for each risk. The other two questions (previously Q20 and Q21) ask participants whether they got the “gist” of how common the risks are. If participants are able to understand the gist of the information, then those in the two quantitative risk information conditions should be able to report that the most common risks had a frequency of roughly 10%, and participants in the specific quantitative risk information condition should be able to report that the least common risks had a frequency of roughly 1%. We will cognitively test and pretest these items.
Comment 15. This comment suggests adding “don’t know” options to the perceived efficacy and risk questions.
Response. We added a “don’t know” option to the questions that ask participants to compare the advertised drug’s risks and benefits to other treatments.
External Reviewers
In addition to receiving public comment, OPDP sent materials to three individuals for external peer review in 2015 and received comments from each. These individuals are:
1. D.K. Theo Raynor, Ph.D., Professor, University of Leeds, d.k.raynor@leeds.ac.uk
2. Peter Ubel, M.D., Professor, Duke University, peter.ubel@duke.edu
3. Brian Zikmund-Fisher, Ph.D., Associate Professor, University of Michigan, bzikmund@umich.edu
9. Explanation of Any Payment or Gift to Respondents
For completing a survey, participants will receive approximately $5.00 in e-Rewards currency which can be exchanged in the Research Now marketplace for a variety of items (airline miles, hotel points, magazines, movie tickets, etc.).
Following OMB’s “Guidance on Agency Survey and Statistical Information Collections,” we offer the following justification for our use of these incentives.
Data quality: Because providing a market-rate incentive should increase response rates, it should also improve the validity and reliability of the results beyond what is possible through other means. Previous research suggests that incentives may help reduce sampling bias by increasing response rates among individuals who are typically less likely to participate in research (such as those with lower education).13 Furthermore, there is some evidence that incentives can reduce nonresponse bias in some situations by bringing in a more representative set of respondents.14 This may be particularly effective in reducing nonresponse bias due to topic saliency.15
Past experience: Research Now, the contractor for this study, has conducted hundreds of health-related surveys in the past year. Research Now offers incentives to its panel members for completing surveys, with the amount of the incentive for consumer surveys determined by the length of the survey. Their experience indicates that the requested amount is reasonable for a 20-minute survey.
Reduced survey costs: Recruiting with market-rate incentives is cost-effective. Without such incentives, lower participation rates would likely lengthen the project timeline because participant recruitment would take longer, making data collection slower and more costly.
10. Assurance of Confidentiality Provided to Respondents
All participants will be provided with an assurance of privacy to the extent allowable by law (see Appendix A for the consent form).
Participants will access the survey questionnaires by using a unique, secure Web URL that is e-mailed to them. Individuals then access the survey by entering an appropriate e-mail address and password. Panel members’ passwords are stored in a secured state within Research Now’s panel management database software. Throughout the survey, questionnaire data are copied to a secured, centralized database for data processing. All data transfers of survey responses from participants’ personal computers to the main servers pass through redundant firewalls. Research Now uses strong encryption for transmitting and receiving data, including combinations of secret-key and public-key cryptography to protect sensitive information. The privacy of the information submitted is protected from disclosure under the Freedom of Information Act (FOIA) (5 U.S.C. 552(a) and (b)) and by part 20 of the agency’s regulations (21 CFR part 20).
No personally identifiable information will be sent to FDA. All information that can identify individual participants will be maintained by the independent contractor in a form that is separate from the data provided to FDA. For all data, alphanumeric codes will be used instead of names as identifiers. These identification codes (rather than names) are used on any documents or files that contain study data or participant responses.
All data will also be maintained consistent with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products). These methods will all be approved by FDA’s Institutional Review Board (Research Involving Human Subjects Committee, RIHSC) and RTI’s Institutional Review Board prior to collecting any information.
11. Justification for Sensitive Questions
This data collection will not include sensitive questions. The complete list of questions is available in Appendix B (Study 1) and Appendix C (Study 2).
12. Estimates of Annualized Burden Hours and Costs
12a. Annualized Hour Burden Estimate
FDA estimates the burden of this collection of information as follows:
Table 1.--Estimated Annual Reporting Burden – Study 1

| Activity | No. of Respondents | No. of Responses per Respondent | Total Annual Responses | Average Burden per Response | Total Hours |
| Sample outgo | 15,130 | -- | -- | -- | -- |
| Number to complete the screener (10%) | 1,513 | 1 | 1,513 | .05 (3 min.) | 76 |
| Number eligible for survey (70%) | 1,059 | -- | -- | -- | -- |
| Number to complete the survey (85%) | 900 | 1 | 900 | .33 (20 min.) | 297 |
| Total | | | 2,413 | | 373 |
Table 2.--Estimated Annual Reporting Burden – Study 2

| Activity | No. of Respondents | No. of Responses per Respondent | Total Annual Responses | Average Burden per Response | Total Hours |
| Sample outgo | 15,130 | -- | -- | -- | -- |
| Number to complete the screener (10%) | 1,513 | 1 | 1,513 | .05 (3 min.) | 76 |
| Number eligible for survey (70%) | 1,059 | -- | -- | -- | -- |
| Number to complete the survey (85%) | 900 | 1 | 900 | .33 (20 min.) | 297 |
| Total | | | 2,413 | | 373 |
These estimates are based on FDA’s and the contractor’s experience with previous consumer studies.
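The burden figures above follow from simple arithmetic (number of responses multiplied by the average burden per response, with screener burden rounded up to whole hours). A minimal sketch verifying the totals reported in Tables 1 and 2:

```python
# Verify the burden-hour arithmetic behind Tables 1 and 2.
# All input figures are taken directly from the tables above.
screener_responses = 1_513       # 10% of the 15,130-person sample outgo
screener_hours_each = 0.05       # 3 minutes, reported as .05 hours
survey_responses = 900           # 85% of the 1,059 eligible respondents
survey_hours_each = 0.33         # 20 minutes, reported as .33 hours

screener_hours = round(screener_responses * screener_hours_each)  # 76
survey_hours = round(survey_responses * survey_hours_each)        # 297

total_responses = screener_responses + survey_responses           # 2,413
total_hours = screener_hours + survey_hours                       # 373

print(total_responses, total_hours)
```

The same figures apply to each study, so the combined burden across both studies is twice these totals.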
13. Estimates of Other Total Annual Costs to Respondents and/or Recordkeepers/Capital Costs
There are no capital, start-up, operating or maintenance costs associated with this information collection.
14. Annualized Cost to the Federal Government
The total estimated cost to the Federal Government for the collection of data is $321,746 ($160,873 per year for two years). This includes the costs paid to the contractors to program the study, draw the sample, collect the data, and create a database of the results ($271,826). The contract was awarded as a result of competition. Specific cost information other than the award amount is proprietary to the contractor and is not public information. The cost also includes FDA staff time to design and manage the study, to analyze the data, and to draft a report ($49,920; 8 hours per week for 2 years).
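The stated figures reconcile as follows; note that the $60/hour staff rate below is an inference from the stated figures, not a number given in this document:

```python
# Reconcile the federal cost figures quoted above.
contractor_cost = 271_826   # programming the study, drawing the sample,
                            # collecting the data, creating the database
fda_staff_cost = 49_920     # FDA staff time to design/manage, analyze, draft report

staff_hours = 8 * 52 * 2    # 8 hours per week for 2 years = 832 hours
# Implied hourly staff rate (an inference, not stated in the source): $60/hour
implied_rate = fda_staff_cost / staff_hours

total = contractor_cost + fda_staff_cost   # 321,746
per_year = total / 2                       # 160,873 per year for two years

print(f"${total:,} total; ${per_year:,.0f} per year; ${implied_rate:.0f}/hour implied rate")
```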
15. Explanation for Program Changes or Adjustments
This is a new data collection.
16. Plans for Tabulation and Publication and Project Time Schedule
Conventional statistical techniques for experimental data, such as descriptive statistics, analysis of variance, and regression models, will be used to analyze the data. See Section B below for detailed information on the design, hypotheses, and analysis plan. The Agency anticipates disseminating the results of the study after the final analyses of the data are completed, reviewed, and cleared. The exact timing and nature of any such dissemination has not been determined, but may include presentations at trade and academic conferences, publications, articles, and Internet posting.
Table 3. – Project Time Schedule

| Task | Estimated Number of Weeks after OMB Approval |
| Main study data collected | 45 weeks |
| Final methods report completed | 58 weeks |
| Final results report completed | 70 weeks |
| Manuscript submitted for internal review | 88 weeks |
| Manuscript submitted for peer-review journal publication | 98 weeks |
17. Reason(s) Display of OMB Expiration Date is Inappropriate
No exemption is requested.
18. Exceptions to Certification for Paperwork Reduction Act Submissions
There are no exceptions to the certification.
1 O’Donoghue, A.C., Sullivan, H.W., Aikin, K.J., Chowdhury, D., Moultrie, R.R., & Rupert, D.J. (2014). Presenting efficacy information in direct-to-consumer prescription drug advertisements. Patient Education and Counseling, 95(2), 271-280.
2 Boudewyns, V., O’Donoghue, A.C., Kelly, B., West, S.L., Oguntimein, O., Bann, C.M., & McCormack, L.A. (2015). Influence of patient medication information format on comprehension and application of medication information: A randomized, controlled experiment. Patient Education and Counseling, 98(12), 1592-1599.
3 Kish-Doto, J., Scales, M., Equino-Medina, P., Fitzgerald, T., Tzeng, J.P., McCormack, L.A., O’Donoghue, A., Oguntimein, O., & West, S.L. (2014). Preferences for Patient Medication Information: What do patients want? Journal of Health Communication, 19(Suppl 2), 77-88.
4 Brownfield, E.D., Bernhardt, J.M., Phan, J.L., Williams, M.V., & Parker, R.M. (2004). Direct-to-consumer drug advertisements on network television: An exploration of quantity, frequency, and placement. Journal of Health Communication, 9(6), 491-497.
5 Niederdeppe, J., Byrne, S., Avery, R.J., & Cantor, J. (2013). Direct-to-consumer television advertising exposure, diagnosis with high cholesterol, and statin use. Journal of General Internal Medicine, 28(7), 886-893.
6 Zipkin, D.A., Umscheid, C.A., Keating, N.L., Allen, E., Aung, K., Beyth, R., ... & Feldstein, D.A. (2014). Evidence-based risk communication: A systematic review. Annals of Internal Medicine, 161, 270-280.
7 Woloshin, S., & Schwartz, L.M. (2011). Communicating data about the benefits and harms of treatment: a randomized trial. Annals of Internal Medicine, 155, 87-96.
8 Zhong, W., Maradit-Kremers, H., St. Sauver, J.L., Yawn, B.P., Ebbert, J.O., Roger, V.L., Jacobson, D.J., McGree, M.E., Brue, S.M., & Rocca, W.A. (2013). Age and sex patterns of drug prescribing in a defined American population. Mayo Clinic Proceedings, 88(7), 697-707.
9 Depp, C.A., Schkade, D.A., Thompson, W.K., & Jeste, D.V. (2010). Age, affective experience, and television use. American Journal of Preventive Medicine, 39, 173-178.
10 Moors, G. (2008). Exploring the effect of a middle response category on response style in attitude measurement. Quality & Quantity, 42(6), 779-794.
11 Sturgis, P., Roberts, C., & Smith, P. (2014). Middle alternatives revisited: How the neither/nor response acts as a way of saying “I don’t know?” Sociological Methods & Research, 43(1), 15-38.
12 Reyna, V.F. (2004). How people make decisions that involve risk: A dual-process approach. Current Directions in Psychological Science, 13, 60-66.
13 Guyll, M., Spoth, R., & Redmond, C. (2003). The effects of incentives and research requirements on participation rates for a community-based preventive intervention research study. Journal of Primary Prevention, 24(1), 25-41.
14 Castiglioni, L., & Pforr, K. (2007). The effect of incentives in reducing non-response bias in a multi-actor survey. Presented at the 2nd annual European Survey Research Association Conference, Prague, Czech Republic, June, 2007; Singer, E. (2002). The Use of Incentives to Reduce Nonresponse in Household Surveys. (R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little, Eds.) Survey nonresponse, (051), 163-178. University of Michigan Institute for Social Research. Retrieved from http://www.isr.umich.edu/src/smp/Electronic; Singer, E. (2006). Nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 637-645.
15 Groves, R., Couper, M., Presser, S., Singer, E., Tourangeau, R., Acosta, G., & Nelson, L. (2006). Experiments in producing nonresponse bias. Public Opinion Quarterly, 70(5), 720-736.