SUPPORTING STATEMENT: PART A
OMB No. #
DOP Cross-Site Program Implementation Evaluation of Overdose Data to Action Program
September 18, 2020
Centers for Disease Control and Prevention
National Center for Injury Prevention and Control
4770 Buford Highway NE MS F-64
Atlanta, GA 30341-3724
phone: 404-718-5902
fax: 770-488-8305
email: Nunderwood@cdc.gov
LIST OF ATTACHMENTS
A. Justification
Summary Table
A.1. Circumstances Making the Collection of Information Necessary
A.2. Purpose and Use of the Information Collection
A.3. Use of Improved Information Technology and Burden Reduction
A.4. Efforts to Identify Duplication and Use of Similar Information
A.5. Impact on Small Businesses or Other Small Entities
A.6. Consequences of Collecting the Information Less Frequently
A.7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
A.8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
A.9. Explanation of Any Payment or Gift to Respondents
A.10. Protection of the Privacy and Confidentiality of Information Provided by Respondents
A.11. Institutional Review Board (IRB) and Justification for Sensitive Questions
A.12. Estimates of Annualized Burden Hours and Costs
A.13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers
A.14. Annualized Cost to the Federal Government
A.15. Explanation for Program Changes or Adjustments
A.16. Plans for Tabulation and Publication and Project Time Schedule
A.17. Reason(s) Display of OMB Expiration Date is Inappropriate
References
A. Public Health Service Act
B. Federal Register Notice
C. List of OD2A Recipients
D. Key Informant Interview (KII) Guides
E. Focus Group (FG) Guides
F. Permission to be Recorded
G. Interview Recruitment Email
H. Focus Group Recruitment Email
I. Interview Recruitment Reminder Email
J. Focus Group Recruitment Reminder Email
K. Post-Information Collection Follow Up Email
L. Program Manager Focus Group Recruitment Request Email
M. Program Manager Interview Recruitment Request Email
N. Privacy Impact Assessment (PIA)
Purpose of the information collection: To assess the implementation of the Overdose Data to Action (OD2A) program (CDC-RFA-CE19-1904) activities, identify implementation-related factors that may influence the effectiveness of these activities across jurisdictions, explore innovative approaches implemented by different jurisdictions, and provide context to findings of evaluation metrics and outcomes data.
Intended use of the resulting data: Information to be collected will provide crucial data highlighting the value of different programmatic components of OD2A (e.g., strategies and activities) in preventing drug overdose; support a valid program implementation evaluation of the OD2A program; and improve implementation of OD2A components. This will also provide CDC with the capacity to respond in a timely manner to requests for information about the program from the Department of Health and Human Services (HHS), the White House, Congress, and other sources.
Methods to be used to collect data: Telephone interviews, in-person focus groups, and virtual focus groups will be conducted to collect information from respondents.
The subpopulation to be studied: Information will be collected from respondents from jurisdictions implementing the OD2A program (Program Managers (PMs), Principal Investigators (PIs), Surveillance Strategy Leads (SSLs), Prevention Strategy Leads (PSLs), and other knowledgeable staff members from the jurisdiction).
How data will be analyzed: Data collected from this evaluation will be analyzed using a content/thematic analysis process and descriptive statistics. Qualitative analysis of narrative responses will be conducted.
The Centers for Disease Control and Prevention (CDC) National Center for Injury Prevention and Control (NCIPC) requests Office of Management and Budget (OMB) approval for two years for a new information collection to evaluate the implementation of CDC’s Overdose Data to Action (OD2A) program in state and local health departments. The information collection is unique and will be the first evaluation of the implementation of the OD2A program. There are no other efforts that CDC knows of to obtain the program information required to improve implementation and enhance the interpretation of the outcome evaluation of OD2A that will be completed at the end of the OD2A cooperative agreement funding period.
The goal of this evaluation is to assess OD2A program implementation. This evaluation aims to identify implementation-related factors that may influence the effectiveness of program activities across jurisdictions, explore innovative approaches implemented by different jurisdictions, and provide context to findings of evaluation metrics and outcomes data.
Deaths involving opioids, including synthetic opioids, increased during 2016-2017. In 2018, opioids were involved in approximately 70% of all drug overdose deaths.1 Complicating matters further is an increase in the combined use of opioids with other illicit drugs, benzodiazepines, and alcohol.2 Such increases in magnitude and complexity of the epidemic highlight the need to generate information necessary to implement an interdisciplinary, comprehensive, and cohesive public health approach to address its trajectory. This new knowledge will inform the strategic deployment and scale up of context-appropriate, data-driven prevention activities.
OD2A is a comprehensive, national overdose prevention program developed by CDC’s NCIPC. The purpose of the OD2A program is to support funded jurisdictions in obtaining high quality, complete, and timely data on opioid prescribing and overdoses, and to use those data to inform prevention and response efforts. The OD2A (CDC-RFA-CE19-1904) mechanism funds a total of 66 recipients (state and local health departments) to implement surveillance and prevention strategies (Exhibit 1) through a three-year cooperative agreement. OD2A funded recipients consist of 47 state-, 16 city/county-, and three district/territory-level jurisdictions. A complete listing of these recipients can be found in Attachment C.
Exhibit 1. OD2A Program’s Ten Funded Strategies
Component | Strategy
Surveillance | 1) Collect and disseminate (to audiences as defined in OD2A-funded jurisdictions’ data dissemination plans) timely emergency department (ED) data on suspected all drug, all opioid, and heroin overdoses
| 2) Collect and disseminate (to audiences as defined in OD2A-funded jurisdictions’ data dissemination plans) descriptions of drug overdose death circumstances using death certificates and medical examiner/coroner data
| 3) Implement innovative surveillance to support NOFO interventions
Prevention | 4) Increase use of prescription drug monitoring programs (PDMPs)
| 5) Integrate state and local prevention and response efforts
| 6) Establish linkages to care (for recipients as defined in the funded jurisdictions’ work plans)
| 7) Provide support to providers and health support systems
| 9) Empower individuals to make safer choices
| 10) Prevention Innovation Projects
This information collection request (ICR) will use key informant interviews (KIIs) and focus groups (FGs) to evaluate the implementation of the OD2A program. In particular, these tools will gather data to better understand how jurisdictions are implementing and operationalizing program strategies, explore factors contributing to the successes and challenges of program activities, and assess the perceived impact of the program on the trajectory of opioid and other drug overdose prevention across geographically and sociologically disparate regions of the U.S. New information will only be requested when secondary data sources such as recipients’ annual progress reports and work plans cannot be used to address the needs of the program implementation evaluation.
Authority for CDC’s National Center for Injury Prevention and Control (NCIPC) to collect these data is granted by Section 301 of the Public Health Service Act (42 U.S.C. 241). This act gives federal health agencies, such as CDC, broad authority to collect data and participate in other public health activities, including this type of program implementation evaluation (Attachment A).
The purpose of this information collection is to assess the implementation of OD2A program activities. In addition, we aim to identify factors that may influence the effectiveness of the implementation of these activities across jurisdictions, explore innovative approaches implemented by different jurisdictions, and provide context to observations and findings of evaluation metrics and outcomes data. The program implementation evaluation will also identify factors contributing to the success of program activities intended for specific high-risk populations and/or high-burden communities.
CDC will use this information collection to:
Provide crucial data highlighting the value of different programmatic components of OD2A (e.g., strategies and activities) in preventing drug overdose;
Complete a valid program implementation evaluation of the OD2A program; and,
Improve implementation of OD2A components.
To ensure that we obtain a thorough understanding of challenges, successes, and perceived outcomes of the full range of prevention strategies states and communities are implementing, and because of the large number of OD2A award recipients and the broad range of prevention strategies being implemented, information collection activities will include Key Informant Interviews (KIIs) and Focus Groups (FGs) with respondents that span the various prevention activities. Interview and focus group guides were developed to collect information that is not sufficiently captured in any of the existing OD2A programmatic documents, such as recipients’ work plans, evaluation plans, and annual progress reports. KII and FG guides were reviewed by several federal public health professionals within CDC. Feedback from these individuals was used to refine questions as needed and establish the estimated time required to complete each information collection instrument.
In the subsections below, we provide details about the purpose and aim of each type of information collection approach. Exhibit 2 shows the linkage between the evaluation’s key research questions and data collection items included in each set of KIIs and FGs.
Exhibit 2: Evaluation Questions Addressed by Key Informant Interviews and Focus Groups
Data Collection Methods.
Evaluators will conduct KIIs via telephone or a web platform, such as Cisco WebEx, using a standard interview guide (Attachment D). Evaluators will conduct FGs both in-person and virtually using a standard focus group guide (Attachment E). Following COVID-19 guidance in effect at the time of the interview, social distancing and other public health safety measures will be implemented, including consideration of phone/virtual meetings instead of in-person sessions. KII and FG audio will be recorded to capture conversations accurately. Permission to be recorded will be obtained from all participants before the beginning of the session via email (Attachment F). If a KII participant declines to be recorded, information will be captured by a notetaker, or the participant will be allowed to delegate the responsibility to another knowledgeable individual from their jurisdiction. If an FG participant declines to be recorded, they will be allowed to delegate the responsibility to another knowledgeable individual from their jurisdiction.
Key Informant Interviews
KIIs are a well-established methodology allowing the collection of in-depth information from knowledgeable sources. The purpose of these in-depth interviews is as follows:
To gain an in-depth understanding of how jurisdictions implement and operationalize program activities for their selected strategies and facilitators and barriers of successful implementation;
To explore how recipients implement and operationalize unique, innovative activities and emerging practices, and to understand determinants of successful implementation; and,
To help explain why jurisdictions achieved a specific programmatic output or perceived outcome and understand determinants of those achievements.
Sample Selection for Key Informant Interviews
KII participants will be purposively sampled allowing for information rich cases to be studied in-depth to facilitate an increased understanding of a phenomenon.3 For the purpose of OD2A, these KIIs will include individuals identified by jurisdiction Program Managers (PMs) based on the individual’s unique first-hand knowledge of the topic to be discussed. Emails with interview guides will be sent to the jurisdiction PM to help with this selection (Attachment L and Attachment M). Exhibit 3 further details the sampling strategy for KIIs.
Exhibit 3: Key Informant Interview Sampling Strategy
KII Set | Aim | Information Collection Timing and Sample Size | Sampling Method
1 | To gain an in-depth understanding of how jurisdictions implement and operationalize program activities for their selected strategies, and the facilitators and barriers of successful implementation | Year 1: N=50 (75% of jurisdictions); Year 2: N=50 (75% of jurisdictions*). *Jurisdictions may be interviewed in both years for different strategies. | Jurisdictions will be sampled based on jurisdiction type to ensure representativeness within all three category types: states; counties/cities; and districts/territories. Nearly all counties/cities and districts/territories will be included since this is the first year they are funded under an opioid program within CDC.
2 | To explore how recipients implement and operationalize unique, innovative activities and emerging practices, and understand determinants of successful implementation | Year 1: N=24; Year 2: N=24 | Criterion sampling will be used to identify jurisdictions that are implementing a unique or innovative activity for a strategy. Jurisdictions will also be stratified by program strategy. For each program strategy, jurisdictions will either be randomly selected or prioritized by stakeholders based on interest in a particular activity that could also be an “emerging or promising practice” in the field. Only three jurisdictions will be interviewed per program strategy.
3 | To help explain why jurisdictions achieved a specific programmatic output or perceived outcome and understand determinants of those achievements | Year 2: N=33 (50% of jurisdictions) | Criterion sampling will be used to identify jurisdictions based on evaluation metrics and outcomes data.
Unlike quantitative research, qualitative research does not utilize formulae to determine the minimum sample size required to observe a predefined difference in outcome as statistically significant. Instead, qualitative methodology utilizes theoretical saturation.4 The origin of saturation lies in grounded theory, and the approach commands acceptance across a range of approaches to qualitative research.5 In this context, saturation means that no additional data are being found from which one can develop properties of a category. From the perspective of a qualitative analysis, it is the point at which no new themes can be generated from the data. Because of the complexity of the OD2A program, reaching saturation will require information collection from relatively large samples. The sample sizes in Exhibit 3 were developed with the aim of providing information representative of the diversity of the 66 jurisdictions’ approaches to addressing each strategy. Programs like OD2A, which are national in scope and include multiple jurisdictions implementing a wide array of interventions tailored by strategy and by local context, are heterogeneous in nature. Even the same evidence-based intervention implemented under the same strategy will differ in how it is implemented across jurisdictions because of differences in local policies and regulations, population characteristics, geography, etc.
In determining the sample sizes for this evaluation we considered the following sources of heterogeneity within our target sample (the 66 jurisdictions): jurisdiction type (state, county, city), census region, program strategy, high-risk population or high-burden community being addressed, program role (e.g., surveillance strategy lead, prevention strategy lead, program manager), Overdose Prevention Capacity Assessment Tool scores, and evaluation outcomes. Other variables considered include the likely heterogeneity of participants in focus groups, the complexity of the interviews and focus group tools, resource availability, and nature of the sampling technique.6,7,8,9,10,11
Focus Groups
The evaluation team will conduct both in-person and virtual FGs with a maximum of 12 people in each FG. To minimize burden on participants, in-person FGs will be held during OD2A annual conferences. Following COVID-19 guidance in effect at the time of the focus group, social distancing and other public health safety measures will be implemented, including consideration of virtual meetings instead of in-person sessions.
The purpose of the FGs is as follows:
To understand jurisdictions’ approaches to using and translating surveillance data to inform prevention activities;
To understand the perceived impact of OD2A on overdose surveillance and prevention efforts;
To understand how OD2A jurisdictions addressed the needs of identified high-burden communities and high-risk populations.
Sample Selection for Focus Groups
FG participants will be purposively sampled, similar to the sample selection for the KIIs. The method for identifying FG participants will be facilitated by jurisdiction PMs based on their review of the focus group guides sent to them via email (Attachment L). Exhibit 4 details the sampling strategy for FGs.
Exhibit 4: Focus Group Sampling Strategy
FG Set | Aim | Information Collection Timing, Number of FGs, Number of Individuals, and Respondent Type | Sampling Method
1 | To understand jurisdictions’ approaches to using and translating surveillance data to inform prevention activities | Year 1: *33 total jurisdictions represented (SSL and PSL pairs for each jurisdiction). | Jurisdictions will be sampled based on jurisdiction type to ensure representativeness within all three category types: states; counties/cities; and districts/territories. Nearly all counties/cities and districts/territories will be included since this is the first year they are funded under an opioid program within CDC. Jurisdictions will also be sampled to ensure representation from each census region.
2 | To understand the perceived impact of OD2A on overdose surveillance and prevention efforts | Year 2: | All jurisdictions will be invited to participate in this FG set.
3 | To understand how OD2A jurisdictions addressed the needs of identified high-burden communities and high-risk populations | Year 1: | Jurisdictions will be sampled based on whether they have activities that address a high-burden or high-risk group and by jurisdiction type to ensure representativeness within all three category types: states; counties/cities; and districts/territories. Nearly all counties/cities and districts/territories will be included since this is the first year they are funded under an opioid program within CDC. Jurisdictions will also be sampled to ensure representation from each census region.
Embedded within the interview guides are skip patterns that will customize the interview to respondent answers and help minimize overall burden on the respondent. KII data will be collected via telephone or a web platform, such as Cisco WebEx. A portion of FGs will be conducted virtually while others will be conducted in-person. Collecting data via telephone interviews and virtual FGs will help reduce burden on participants by eliminating travel and minimizing preparation time. Evaluators can verify responses and request clarification in real time as needed during the information collection process.
This information collection request (ICR) is focused on collecting complementary data through KIIs and FGs needed to evaluate the implementation of the OD2A program. The information collection is unique and will be the first implementation evaluation of the OD2A program. There are no other efforts that CDC knows of to obtain valid information regarding how recipients operationalized and implemented their chosen prevention activities and how to improve implementation of OD2A. In addition, CDC is not duplicating the efforts of other federal agencies such as SAMHSA; SAMHSA’s programs are primarily treatment-based, while CDC’s efforts are focused on prevention.
Information collected for this request through the KIIs and FGs is not available from other data sources such as SAMHSA’s State Targeted Response to the Opioid Crisis Grants (STR) (OMB Control No. 0930-0378). CDC is aware of the SAMHSA STR program details, recognizes the unique differences between its OMB package and this information collection request, and can confirm that SAMHSA programs do not collect duplicative information. Nor does this request duplicate any information currently being collected on the OD2A program, such as Monitoring and Reporting for the Overdose Data to Action Cooperative Agreement (OMB Control No. 0920-1283).
The OD2A program is adapted from strategies and lessons learned from the following previous CDC funding opportunities: 1) Prescription Drug Overdose Prevention for States (PfS) (OMB Control No. 0920-1155, Monitoring and reporting system for the prescription drug overdose prevention for states cooperative agreement); 2) Data Driven Prevention Initiative; and 3) Enhanced State Surveillance of Opioid-Involved Morbidity and Mortality. These programs were merged into one comprehensive, national program called OD2A.
This evaluation will be the first of its kind to collect primary data regarding OD2A strategies. It will focus on assessing the implementation of OD2A program activities. This evaluation also aims to identify implementation-related factors that may influence the effectiveness of these activities across jurisdictions, explore innovative approaches implemented by different jurisdictions, and provide context to observations and findings of evaluation metrics and outcomes data.
KIIs will capture information regarding how jurisdictions implemented and operationalized program strategies and the overall perceived influence or impact of the program while FGs will capture the following:
Collection, use, and translation of surveillance data to inform prevention activities;
Adaptations to address high-burden communities and high-risk populations; and,
Perceived impact of OD2A on surveillance and prevention outcomes (e.g., jurisdictions’ perceptions, experiences, satisfaction with OD2A, and stakeholder insight regarding interpretation of evaluation findings).
No small businesses will be involved or impacted in this information collection.
CDC will collect information annually; both KIIs and FGs will be conducted in year one and two. The present collection will provide the information needed to fully assess the implementation of OD2A program activities. Information will also allow CDC to identify implementation-related factors that may influence the effectiveness of these activities across jurisdictions, explore innovative approaches implemented by different jurisdictions, and provide context to observations and findings of evaluation metrics and outcomes data. The program implementation evaluation will also identify factors contributing to the success of program activities intended for specific high-risk populations and/or high-burden communities.
If no information is collected, CDC will be unable to:
Demonstrate perceived impact of OD2A and different components of OD2A on drug overdose prevention; and,
Inform program improvement efforts.
There are no special circumstances with this information collection package. This request fully complies with the regulation 5 CFR 1320.5.
Federal Register Notice – A 60-day Federal Register Notice was published in the Federal Register on June 15, 2020 (Vol. 85, No. 115, p. 36206; Attachment B). There were no public comments.
Efforts to Consult Outside the Agency – The information collection instruments were designed collaboratively by CDC staff and contractors from Booz Allen Hamilton and Abt Associates. KII and FG guides were reviewed by several federal public health professionals within CDC. Feedback from these individuals was used to refine questions as needed and establish the estimated time required to complete each information collection instrument. Many components of this ICR are based on existing tools and on feedback from both internal and external partners.
CDC will not provide payments or gifts to respondents.
The Office of the Chief Information Officer at the CDC has determined that the Privacy Act does not apply to this information collection request. No system of records will be created under the Privacy Act. The Privacy Impact Assessment (PIA) for this evaluation is attached (Attachment N). Some personally identifiable information (PII) will be collected including the respondents’ name, official role, organization, state, and date of interview. All information will be kept on secure, encrypted, password protected servers accessible only to specific project team members. All procedures have been developed, in accordance with federal, state, and local guidelines to ensure that the rights and privacy of respondents will be protected and maintained.
IRB Approval – The CDC National Center for Injury Prevention and Control’s OMB and human subject research officer has determined that IRB approval is not needed for this non-research project (Attachment O).
Sensitive Questions – No information of a personal or sensitive nature will be collected.
The duration of KIIs will be 60 minutes. This estimate is based on the estimated time to review instructions and address each topic outlined in the KII guides. The duration of FGs will be 90 minutes. Following COVID-19 guidance in effect at the time of the interview, social distancing and other public health safety measures will be implemented, including consideration of phone/virtual meetings instead of in-person sessions. Exhibit 5 provides estimates of total burden hours.
Exhibit 5: Estimated Annualized Burden Hours
Type of Respondent | Form Name | Number of Respondents | Number of Responses per Respondent | Average Burden per Response (Hours) | Total Burden (Hours)
Jurisdictions implementing OD2A program (e.g., PMs, PIs, SSLs, PSLs, Partners, or Stakeholders) | Key Informant Interview Guides (Att. D) | 181 | 1 | 1 | 181
| Focus Group Guides (Att. E) | 165 | 1 | 90/60 | 248
| Permission to be Recorded (Att. F) | 346 | 1 | 5/60 | 29
| Interview Recruitment Email (Att. G) | 181 | 1 | 5/60 | 15
| Focus Group Recruitment Email (Att. H) | 165 | 1 | 5/60 | 14
| Interview Recruitment Reminder Email (Att. I) | 181 | 1 | 5/60 | 15
| Focus Group Recruitment Reminder Email (Att. J) | 165 | 1 | 5/60 | 14
| Post-Information Collection Follow Up Email (Att. K) | 346 | 1 | 5/60 | 29
| Program Manager Focus Group Recruitment Request Email (Att. L) | 165 | 1 | 5/60 | 14
| Program Manager Interview Recruitment Request Email (Att. M) | 181 | 1 | 5/60 | 15
Total | | | | | 574
Exhibit 6 shows the estimated annualized cost burden based on the respondents’ time to complete the information collection forms. Average hourly wage rates were calculated using mean wages from the U.S. Department of Labor, Bureau of Labor Statistics (https://www.bls.gov/oes/current/oes_stru.htm).
Exhibit 6: Estimated Annualized Cost Burden
Type of Respondent | Form Name | Number of Respondents | Number of Responses per Respondent | Total Burden (Hours) | Average Hourly Wage Rate | Total Cost Burden
Jurisdictions implementing OD2A program (e.g., PMs, PIs, SSLs, PSLs, Partners, or Stakeholders) | Key Informant Interview Guides (Att. D) | 181 | 1 | 181 | $24.34* | $4,405.54
| Focus Group Guides (Att. E) | 165 | 1 | 248 | $24.34* | $6,036.32
| Permission to be Recorded (Att. F) | 346 | 1 | 29 | $24.34* | $705.86
| Interview Recruitment Email (Att. G) | 181 | 1 | 15 | $24.34* | $365.10
| Focus Group Recruitment Email (Att. H) | 165 | 1 | 14 | $24.34* | $340.76
| Interview Recruitment Reminder Email (Att. I) | 181 | 1 | 15 | $24.34* | $365.10
| Focus Group Recruitment Reminder Email (Att. J) | 165 | 1 | 14 | $24.34* | $340.76
| Post-Information Collection Follow Up Email (Att. K) | 346 | 1 | 29 | $24.34* | $705.86
| Program Manager Focus Group Recruitment Request Email (Att. L) | 165 | 1 | 14 | $24.34* | $340.76
| Program Manager Interview Recruitment Request Email (Att. M) | 181 | 1 | 15 | $24.34* | $365.10
Total | | | | | | $13,971.16
* Average hourly wage rate calculated using mean wages for 00-0000 All Occupations from the National Compensation Survey: Occupational Wages in the United States, May 2017, U.S. Department of Labor, Bureau of Labor Statistics.
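The figures in Exhibits 5 and 6 follow a simple arithmetic rule: total burden hours for each form equal respondents × responses per respondent × (minutes per response / 60), rounded to whole hours, and each cost equals those hours × the $24.34 mean wage. The following sketch reproduces the exhibit totals; the row values are taken from the exhibits, while the variable and label names are illustrative only and not part of the ICR:

```python
# Sketch reproducing the burden-hour and cost arithmetic of Exhibits 5 and 6.
# Row values (respondents, minutes per response) come from the exhibits;
# each respondent submits one response per form.
WAGE = 24.34  # mean hourly wage, 00-0000 All Occupations (BLS)

rows = [
    ("KII Guides (Att. D)",                 181, 60),
    ("FG Guides (Att. E)",                  165, 90),
    ("Permission to be Recorded (Att. F)",  346, 5),
    ("Interview Recruitment (Att. G)",      181, 5),
    ("FG Recruitment (Att. H)",             165, 5),
    ("Interview Reminder (Att. I)",         181, 5),
    ("FG Reminder (Att. J)",                165, 5),
    ("Follow Up Email (Att. K)",            346, 5),
    ("PM FG Request (Att. L)",              165, 5),
    ("PM Interview Request (Att. M)",       181, 5),
]

total_hours = 0
total_cost = 0.0
for form, respondents, minutes in rows:
    hours = round(respondents * minutes / 60)  # burden rounded to whole hours
    total_hours += hours
    total_cost += hours * WAGE                 # cost = rounded hours x wage

print(total_hours)            # 574, matching Exhibit 5
print(round(total_cost, 2))   # 13971.16, matching Exhibit 6
```

Applying the same rounding rule per row (e.g., 165 × 90/60 = 247.5 → 248 hours) reproduces every row entry as well as both totals.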
No capital or maintenance costs are expected. Additionally, there are no start-up, hardware, or software costs.
The total estimated cost to the federal government is $1,143,541.50 (Contract 200-2019-F-06952) including salary, fringe, travel, and supply expenses related to the involvement of four federal employees to devote 5-10% FTE to the project. There are no equipment or overhead costs. Exhibit 7 describes how this cost estimate was calculated.
Exhibit 7: Estimated Annualized Cost to the Federal Government
Type of Cost | Description of Services | Annual Cost
Contractor | Information collection, data analysis, project management | $1,117,604.00
Five technical monitors at 5% FTE each (CDC) | Study planning and contractor oversight | $25,775.00
Total Annual Estimated Costs | | $1,143,541.50
This is a new information collection.
The exact start date for information collection activities is contingent on the OMB clearance date. Data from the audio recordings of KIIs and FGs will be transcribed using a web platform, such as Cisco WebEx. Some PII will be collected, and all information will be kept on secure, encrypted, password-protected external servers accessible only to specific project team members. The contractor will remove all potential identifiers and share only the de-identified data with CDC. No PII will be distributed. Once analysis is completed, all audio files will be deleted.
A codebook will be developed consisting of deductive and inductive codes, their definitions, and inclusion and exclusion criteria for applying the codes. The research team will develop the preliminary coding structure using a deductive approach, meaning it will be grounded in the literature, including conceptual, participant, relationship, and setting codes.12 Deductive content analysis will be used as the primary research method to condense words into fewer content-related categories and provide knowledge, new insights, and a guide for action.13 Inductive codes, representing key concepts discussed by participants, will be identified as “emergent” codes through a secondary analysis of the text.
The qualitative data will then be coded and analyzed thematically to identify key themes that emerged across groups of interviews using NVivo 12 software. The team will pilot code several transcripts independently and compare coding decisions among experienced qualitative researchers. The group of coders will discuss discrepancies and build consistency accordingly. Coding will be iterative and include deductive codes (those that are established a priori from the evaluation questions’ indicators and domains) and inductive codes (those that emerge from the data). The group of coders will meet weekly during the coding process to review interpretations, resolve discrepancies, and add or collapse codes as needed. After all transcripts and documents are coded, the team will analyze the data to identify the range of opinions and topics, common themes across and within groups, and themes unique to each group. Quality assurance procedures include the training of coders, checking inter-rater reliability, and frequent debriefs on findings and coding questions.
The research team will use two indices to assess inter-rater reliability: Cohen’s kappa and percent intercoder agreement. These checks will flag specific nodes on which agreement between coders is low. Cohen’s kappa was selected based on its wide acceptance across the social sciences as an appropriate measure of agreement between two coders. The kappa coefficient reflects the degree of similarity between coders in assigning the same code to the same piece of text; because it accounts for agreement that might occur by chance, it is a more conservative measure than percent agreement.14 A Cohen’s kappa value above 0.75 can be interpreted as excellent agreement; we suggest reaching reliability above 0.80 to confirm consistent use of codes.
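For illustration only (not part of the evaluation protocol), the two reliability indices described above can be computed from two coders’ code assignments as in the following Python sketch; the coder labels and code names are hypothetical examples, not actual OD2A codebook entries.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Fraction of text segments to which both coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    # Expected chance agreement, from each coder's marginal code frequencies
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes applied by two coders to the same five text segments
a = ["barrier", "facilitator", "barrier", "context", "barrier"]
b = ["barrier", "facilitator", "context", "context", "barrier"]
print(percent_agreement(a, b))  # 0.8
print(cohens_kappa(a, b))       # approximately 0.69
```

Note that kappa (≈0.69 here) is lower than raw percent agreement (0.80) because it discounts matches expected by chance, which is why it is the more conservative of the two thresholds cited above.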
Findings will be published in a peer-reviewed journal. Once published, a link to the publication will be posted on CDC’s webpage. Findings will also be disseminated to OD2A-funded recipients through annual reports and other communication channels.
The project time schedule for the OD2A evaluation is presented in Exhibit 8.
Exhibit 8. Project Time Schedule for the OD2A Evaluation Information Collection
Activities | Time Schedule
Participant recruitment | 2 weeks post OMB approval
Facilitate focus group sessions | Complete 2 months post OMB approval
Conduct interview sessions | Complete 2 months post OMB approval
Data analysis | Complete 3 months post OMB approval
Outline of interim implementation evaluation/outcome evaluation | Complete 4 months post OMB approval
Create draft of interim evaluation/outcome report | Complete 5 months post OMB approval
Create final interim evaluation/outcome report | Complete 6 months post OMB approval
Create final white paper | Complete 7 months post OMB approval
Revise data collection tools | Complete 10 months post OMB approval
Participant recruitment | Complete 10 months post OMB approval
Facilitate focus group sessions | Complete 12 months post OMB approval
Conduct interview sessions | Complete 13 months post OMB approval
Data analysis | Complete 14 months post OMB approval
Outline of interim implementation evaluation/outcome evaluation | Complete 15 months post OMB approval
Create draft of interim evaluation/outcome report | Complete 16 months post OMB approval
Create final interim evaluation/outcome report | Complete 17 months post OMB approval
Create final manuscript | Complete 18 months post OMB approval
A.17. Reason(s) Display of OMB Expiration Date is Inappropriate
The display of the OMB expiration date is appropriate.
A.18. Exceptions to Certification for Paperwork Reduction Act Submissions
There are no exceptions to the certification.
i Following COVID-19 guidance, at the time of the focus groups, social distancing and public health safety measures will be implemented, including considerations for virtual meetings instead of in-person sessions.
1 Wilson N, Kariisa M, Seth P, Smith H IV, Davis NL. Drug and Opioid-Involved Overdose Deaths — United States, 2017–2018. MMWR Morb Mortal Wkly Rep 2020;69:290–297. DOI: http://dx.doi.org/10.15585/mmwr.mm6911a4
2 Other Drugs. Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control; 2019. Available at https://www.cdc.gov/drugoverdose/data/otherdrugs.html
3 Patton MQ. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: Sage Publications; 2002
4 Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (2015). Handbook of practical program evaluation. John Wiley & Sons.
5 Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine; 1967.
6 Coenen, M., Stamm, T. A., Stucki, G., & Cieza, A. (2012). Individual interviews and focus groups in patients with rheumatoid arthritis: a comparison of two qualitative methods. Quality of Life Research, 21(2), 359-370. doi:10.1007/s11136-011-9943-2
7 Francis, J. J., Johnston, M., Robertson, C., Glidewell, L., Entwistle, V., Eccles, M. P., & Grimshaw, J. M. (2010). What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychology & Health, 25(10), 1229-1245. doi:10.1080/08870440903194015
8 Guest, G., Bunce, A., & Johnson, L. (2006). How Many Interviews Are Enough?: An Experiment with Data Saturation and Variability. Field Methods, 18(1), 59–82. https://doi.org/10.1177/1525822X05279903
9 Hagaman, A. K., & Wutich, A. (2017). How Many Interviews Are Enough to Identify Metathemes in Multisited and Cross-cultural Research? Another Perspective on Guest, Bunce, and Johnson’s (2006) Landmark Study. Field Methods, 29(1), 23–41.
10 Namey, E., Guest, G., McKenna, K., & Chen, M. (2016). Evaluating Bang for the Buck: A Cost-Effectiveness Comparison Between Individual Interviews and Focus Groups Based on Thematic Saturation Levels. American Journal of Evaluation, 37(3), 425–440.
11 Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (2015). Handbook of practical program evaluation. John Wiley & Sons.
12 Bradley, E., Curry, L., & Devers, K. (2007). Qualitative data analysis for health services research: Developing taxonomy, themes, and theory. Health Services Research, 42(4), 1758-1772.
13 Elo, S., & Kyngas, H. (2008). The qualitative content analysis process. Journal of Advanced Nursing, 62(1), 107-115.
14 McHugh, M. L. (2012). Interrater reliability: the kappa statistic. Biochemia medica, 22(3), 276-282.
Author: Emily Wharton
File created: 2021-01-13