July 29, 2015
Mr. Leroy A. Richardson
Information Collection Review Office
Centers for Disease Control and Prevention
1600 Clifton Road, N.E., MS-D74
Atlanta, GA 30329
Re: Docket No. CDC-2015-0044
Dear Mr. Richardson:
The proposed data collection appears to pertain to a data instrument already being used by the grantees and sub-grantees of the Centers for Disease Control and Prevention (CDC) National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) program, Preventing Type 2 Diabetes Among People at High Risk Primary Prevention. This evidence-based lifestyle change program for preventing type 2 diabetes enrolls people with prediabetes in a year-long program (16 weekly sessions followed by monthly sessions for 6 to 8 months). As a grantee under the cooperative agreement (# CDC-RFA-DP12-121204) since 2012, AHIP is pleased to provide our perspective and experience in utilizing the collection instrument, "Formative and Summative Evaluation of the National Diabetes Prevention Program," and to offer the following comments about the CDC's proposed data collection instrument. It is not clear from the published information whether the proposed data collection instrument is the same one that has been used in the past, as the questions are not listed in this Notice.
AHIP agrees that diabetes continues to take a significant toll on the health of the nation, and is supportive of the Administration's efforts to intervene with a number of methods to prevent the onset of diabetes and reduce complications related to this disease, including the National DPP. AHIP has participated in the cooperative agreement with the CDC since its inception in 2012, and currently has four member health plans actively participating as sub-grantees implementing and offering the program. Throughout the year-long National DPP intervention at the four health plans, lifestyle coaches use the CDC-approved curriculum to teach lifestyle changes aimed at increasing participants' exercise to at least 150 minutes a week, with the goal of a 5% to 7% loss of body weight.
Necessity for the proposed collection of information and its practical utility
Considering the four stated reasons to collect the information with the instrument: (1) to expand the reach and sustainability of the NDPP, (2) to ensure the quality of the program as it is offered within communities, (3) to increase referrals, and (4) to secure sustained commitment among insurance providers to reimburse organizations for the program, the proposed data collection approach neither clearly defines nor consistently supports the stated factors. The proposed instrument fails to collect information on clinical outcomes related to the incidence of prediabetes.
In question #4, the instrument asks about purposeful targeting of NEW disparate or vulnerable populations without clarifying what is meant by "purposeful targeting" or "NEW disparate or vulnerable populations." The concern about the lack of clarity around the terms "disparate" and "vulnerable" was raised to the CDC/RTI with previous survey instruments, and remains unaddressed. Some respondents have viewed members of racial or ethnic groups as falling into these categories, while other respondents have not considered individuals with the same racial and ethnic demographics as being encompassed by those terms.
In question #12, the total number of marketing materials distributed is requested, but the question fails to account for digital reach to potential insurers, employers, and participants. It is not clear whether this question and the term "distributed" would include the number of website visits, participants on webinars, and followers of electronic and social media messages via Twitter. Furthermore, the instrument does not ask for the total number of marketing materials distributed in relation to enrollment or participation, and does not ask which marketing materials or tactics were most successful for their intended audience.
Four questions potentially call for the collection of competitively sensitive information, and AHIP and its members are mindful of the guidance that the Federal Trade Commission (FTC) and the Department of Justice (DOJ) have provided with respect to the collection and dissemination of such information. We believe that the issues raised by three of the questions can be mitigated by clarification, while it might be best to omit the fourth question.
Question 22 (how much do you charge participants for the program) could involve the collection of competitively sensitive information if that information is not already public. We suggest revising the second sentence in the question to read as follows: "If so, and the amount is publicly available, how much do you charge?" Questions 31 (the average full-time salary of a lifestyle coach) and 33 (the average full-time salary of a program coordinator) would involve competitively sensitive information if reflecting company-specific information, but would not if the figures are derived from general sources such as geographic wage indices. Therefore, we suggest adding the following language to the end of both of these questions: "in the region in which you operate, based on geographic wage indices and other public information."
There is no similar clarification that would mitigate the competitively sensitive nature of Question 22 (average cost per participant amongst your program sites). Therefore, we recommend omitting this question from the instrument. Alongside price information, non-public, company-specific cost information has been recognized by the FTC and the DOJ as an area of sensitivity, and the safest course would be to avoid its collection.
Accuracy of the Agency's estimate of the burden of the proposed collection of information and Minimizing the burden of the collection of information on respondents
Having the data assessment tool as a spreadsheet that must be disseminated to sub-grantees is cumbersome. A web-based collection tool would allow the grantee and sub-grantees to navigate the tool, save their responses, and then transmit the information once completed.
The process of gathering this information and entering it into a tool takes more than 30 minutes per site. The previous Excel sheet for sites included 20 questions, and the new collection instrument includes 36 questions. Therefore, the estimate of the time burden should also be increased from the previous estimate. The collection of demographic information, aspects of implementation, tactics that were successful, and remaining challenges cannot be accomplished in such a short timeframe. Gathering participant recruitment methods and metrics often involves requesting information from other departments in a sub-grantee's organization.
As it now stands, grantee staff must collate data for submission as well as review the sub-grantee data assessments. The estimate that the process would require on average 8 hours of the grantee's time also underestimates the time actually required to complete the instrument. A review of previously submitted project plans and progress reports may be required to accurately complete the data collection tool. Also, the time a grantee organization spends may be affected by the number of sites and sub-grantees. It is not clear that these factors were considered when determining the time burden to complete the instrument.
Ways to enhance the quality, utility, and clarity of the information to be collected
The proposed project plan stated that the "CDC plans to collect information needed to evaluate the role of program-level factors on the effectiveness of National DPP efforts and to identify best practices." It is not clear that best practices will be identified from the information collected. As seen from the Year 01 and Year 02 data collections, the correlation between recruitment strategies and delivery models is not always accurately portrayed from merely a collection of data. The data collection instruments do not tie successful recruitment mechanisms to other variables, such as gender, age, race, and socioeconomic factors, and do not characterize the intensity of a strategy relative to an outcome.
For instance, the instrument lists more than 10 recruitment methods that can be used for participant enrollment and retention. The instrument does not examine the frequency, reach, and success of the recruitment methods used.
The following questions should be reviewed to enhance the quality, utility, and clarity of the information received:
Question #9 "Where or at what level are you targeting participant recruitment efforts?" It is not clear what is meant by "level": individual, group, employer, or decision maker for a group (such as a CEO, a human resources or benefits specialist).
Question #11 "What materials, if any, were used to help drive participants to the site lifestyle change programs?" It is not clear what is meant by "help drive participants to the site": does this refer to physically transporting participants, encouraging them to visit a website, or encouraging them to visit a location where the National DPP is being held?
Question 26 "For the current grant year, please provide the number of employers who were educated about the benefits and cost-savings of the evidence-based lifestyle change program as a covered health benefit for employees." Question 26 fails to account for the potential number of covered lives by asking only about the number of employers. One employer may be a state or county office or a large company, and thus may have a greater impact than can be measured by just asking for the number of employers that a grantee or sub-grantee has educated.
Question 27 "For the current grant year, please provide the number of employers who offer the National DPP program on-site." Question 27 fails to account for the potential number of covered lives by asking only about the number of employers. One employer may be a state or county office or a large company, and thus may have a greater impact than can be measured by just asking for the number of employers that a grantee or sub-grantee has educated.
The information collected does not seek data on whether participants, on average, have moved from a prediabetes diagnosis to eliminating their risk for the disease. An A1C test (also known as HbA1c or glycated hemoglobin) provides a general indication of diabetes control. While A1C levels are one stated criterion for an individual being able to participate in a National DPP, there is no analysis of whether A1C levels have changed or been impacted at the conclusion of the intervention. This instrument does not seek to understand weight loss percentage, physical activity minutes, A1C levels, or anything related to clinical measures and clinical quality. The instrument's emphasis is on the implementation process instead of clinical patient outcomes. The reduction of blood glucose levels is critical information for insurers and employers to know when evaluating the merits of an evidence-based intervention. The paucity of clinical outcome measures in this instrument presents a potential obstacle to greater uptake and coverage for the National DPP.
Thank you for the opportunity to comment on this instrument.
Sincerely,
Kate Berry
Senior Vice President, Clinical Affairs and Strategic Partnerships
CDC Response
___________________________________________________________________
Kate Berry
Senior Vice President, Clinical Affairs and Strategic Partnerships
America’s Health Insurance Plans
601 Pennsylvania Avenue, NW
South Building, Suite 500
Washington, DC 20004
Dear Ms. Berry,
Thank you for taking the time to review and comment on the "Formative and Summative Evaluation of the National Diabetes Prevention Program." The Division of Diabetes Translation appreciates your commitment to improving the delivery of diabetes prevention programs and the assessment of the six national organizations' work under Funding Opportunity Number DP12-1212PPHF12. All of your comments were carefully considered. Included with this letter are specific responses to the suggestions and remarks in your letter dated July 29, 2015.
With the growing number of cases of type 2 diabetes, it is vital that we continue to implement evidence-based programs that scale and sustain the National Diabetes Prevention Program. Your work is important to this process. Again, thank you for your interest in the National Diabetes Prevention Program.
Sincerely,
Division of Diabetes Translation
Centers for Disease Control and Prevention
Note: (C) indicates a comment, suggestion, or request for clarification from your organization; (R) indicates a response from CDC.
AHIP comments and CDC’s responses:
(C): Considering the four stated reasons to collect the information with the instrument: (1) to expand the reach and sustainability of the NDPP, (2) to ensure the quality of the program as it is offered within communities, (3) to increase referrals, and (4) to secure sustained commitment among insurance providers to reimburse organizations for the program, the proposed data collection approach neither clearly defines nor consistently supports the stated factors. The proposed instrument fails to collect information on clinical outcomes related to the incidence of prediabetes.
(R): The primary objective of the “Formative and Summative Evaluation of the National DPP Cooperative Agreement DP12-1212” is to discern lessons learned and effective strategies around scaling and sustaining the National DPP so it is accessible to individuals most in need of this intervention. The data collection instrument is intended to collect information about program-level factors that affect implementation outcomes such as participant recruitment and insurance provider/employer engagement strategies. The information gained will inform technical assistance to help programs increase their reach, sustainability, referrals, and program support. Linking clinical outcomes to the incidence of prediabetes is outside the scope of this assessment, as this assessment is not intended to collect participant-level clinical data.
(R): Please note that we have made changes to the statement of the goals of data collection which may address some of your concerns: To clarify this point, we will use the following language: CDC will use the information gained from the assessment to discern lessons learned and effective strategies around 1) expanding the reach and sustainability of the National DPP lifestyle change programs, 2) improving recruitment and retention efforts, 3) increasing referrals, and 4) securing sustained commitment among insurance providers and employers to either reimburse organizations providing the program or providing an employee benefit option for the program so it is accessible to individuals most in need of this intervention.
(C): In question #4, the instrument asks about purposeful targeting of NEW disparate or vulnerable populations without clarifying what is meant by "purposeful targeting" or "NEW disparate or vulnerable populations." The concern about the lack of clarity around the terms "disparate" and "vulnerable" was raised to the CDC/RTI with previous survey instruments, and remains unaddressed. Some respondents have viewed members of racial or ethnic groups as falling into these categories, while other respondents have not considered individuals with the same racial and ethnic demographics as being encompassed by those terms.
(R): Following AHIP's previous feedback on this issue via conference calls, we have created a glossary of terms, accompanying the data collection spreadsheet, that better defines "disparate" and "vulnerable." In addition, it is the intent of this assessment that grantees define which populations they consider disparate or vulnerable if they feel the glossary does not accurately reflect their audience(s).
(C): In question #12, the total number of marketing materials distributed is requested, but the question fails to account for digital reach to potential insurers, employers, and participants. It is not clear whether this question and the term "distributed" would include the number of website visits, participants on webinars, and followers of electronic and social media messages via Twitter. Furthermore, the instrument does not ask for the total number of marketing materials distributed in relation to enrollment or participation, and does not ask which marketing materials or tactics were most successful for their intended audience.
(R): CDC has reworded Question #12 of the grantee-level spreadsheet to allow respondents to list the number(s) of marketing materials used/distributed, including social media/web postings.
(R): CDC purposefully does not ask "which marketing materials or tactics were most successful," as this information is best determined through data analyses that CDC will conduct and include in each grantee's annual reports.
(C): Four questions potentially call for the collection of competitively sensitive information, and AHIP and its members are mindful of the guidance that the Federal Trade Commission (FTC) and the Department of Justice (DOJ) have provided with respect to the collection and dissemination of such information. We believe that the issues raised by three of the questions can be mitigated by clarification, while it might be best to omit the fourth question.
(C): Question 22 (how much do you charge participants for the program) could involve the collection of competitively sensitive information if that information is not already public. We suggest revising the second sentence in the question to read as follows: "If so, and the amount is publicly available, how much do you charge?"
(R): The intent of Question #21 of the site-level spreadsheet (Do you charge participants for the program? If so, how much do you charge?) is to average the costs across grantee sites for CDC to have an idea of cost burden. Resulting data will not be publicly available by named grantee. Many grantees freely report this information to CDC for assessment purposes under this funding opportunity announcement. We will change the question to read: "Do you charge participants for the program? If so, and if you are able to report this data, how much do you charge on average per participant?" We will include "not able to report" as a response option.
(C): Questions 31 (the average full-time salary of a lifestyle coach) and 33 (the average full-time salary of a program coordinator) would involve competitively sensitive information if reflecting company-specific information, but would not if the figures are derived from general sources such as geographic wage indices. Therefore, we suggest adding the following language to the end of both of these questions: "in the region in which you operate, based on geographic wage indices and other public information."
(R): The intent of Questions #29 (the average full-time salary of a lifestyle coach) and #31 (the average full-time salary of a program coordinator) of the site-level spreadsheet is to average the costs across grantee sites for CDC to have an idea of cost burden. Resulting data will not be publicly available by named grantee. Many grantees freely report this information to CDC for assessment purposes under this funding opportunity announcement.
(R): We will change Question #29 of the site-level spreadsheet to read: “What is the average salary of a lifestyle coach, if you are able to report this data?” We will include “not able to report” as a response option.
(R): We will change Question #31 of the site-level spreadsheet to read: “What is the average salary of a program coordinator, if you are able to report this data?” We will include “not able to report” as a response option.
(C): There is no similar clarification that would mitigate the competitively sensitive nature of Question 22 (average cost per participant amongst your program sites). Therefore, we recommend omitting this question from the instrument. Alongside price information, non-public, company-specific cost information has been recognized by the FTC and the DOJ as an area of sensitivity, and the safest course would be to avoid its collection.
(R): Question #21 of the site-level spreadsheet (Do you charge participants for the program? If so, how much do you charge?) is intended to discern lessons learned that relate to program sustainability and scalability. This information is important for planning purposes. Without the ability to estimate the costs of implementing a program, new sites have limited ability to plan. CDC uses these data in two ways: First, for grantee-level reports as a means of technical assistance to individual grantees like yourself. Secondly, in an aggregated manner (i.e., average costs/salaries) across all six grantees. The cross-grantee annual reports will not identify individual grantee charges for program participation. Thus, no other grantee will know how much AHIP, or its affiliate members, charges under this funding opportunity announcement. CDC has already agreed to change the wording of this question (please see comment above related to Question #21). Thus, we do not feel it is necessary to “avoid” collecting this data.
(C): Having the data assessment tool as a spreadsheet that must be disseminated to sub-grantees is cumbersome. A web-based collection tool would allow the grantee and sub-grantees to navigate the tool, save their responses, and then transmit the information once completed.
(R): CDC made the decision to use an Excel spreadsheet because it is in common use and requires little, if any, additional training. Additionally, the format of the data collection instrument was discussed during a monthly grantee call in early 2015, and it was unanimously decided that an Excel spreadsheet would require the least orientation and training. We do understand that having multiple tabs for each grantee site within one Excel spreadsheet means that each site needs to complete data entry separately, save it, and send it back to the grantee. Thus, CDC will be happy to provide individual Excel spreadsheets with prepopulated information (i.e., grantee name, site name, site code, grant year, and fiscal year) for dissemination to each grantee's existing intervention sites.
(C): The process of gathering this information and entering it into a tool takes more than 30 minutes per site. The previous Excel sheet for sites included 20 questions, and the new collection instrument includes 36 questions. Therefore, the estimate of the time burden should also be increased from the previous estimate. The collection of demographic information, aspects of implementation, tactics that were successful, and remaining challenges cannot be accomplished in such a short timeframe. Gathering participant recruitment methods and metrics often involves requesting information from other departments in a sub-grantee's organization.
(R): Please bear in mind that this is intended to be an estimate and may vary; we acknowledge that a range of response times from 30 to 60 minutes, with an average of 45 minutes, is appropriate, and we have added that to the burden statement. Please note that there are a total of 34 questions for existing sites within the site-level spreadsheet, and only 26 questions for new sites that started in Year 3 of the funding opportunity announcement. In addition, three questions will be pre-populated by CDC with grantee and site information for ease of reporting.
(C): As it now stands, grantee staff must collate data for submission as well as review the sub-grantee data assessments. The estimate that the process would require on average 8 hours of the grantee's time also underestimates the time actually required to complete the instrument. A review of previously submitted project plans and progress reports may be required to accurately complete the data collection tool. Also, the time a grantee organization spends may be affected by the number of sites and sub-grantees. It is not clear that these factors were considered when determining the time burden to complete the instrument.
(R): Please bear in mind that this is intended to be an estimate and may vary; we acknowledge that a range of response times is appropriate and that it may take up to 12 hours to complete. We have added that to the burden statement.
(C): The proposed project plan stated that the "CDC plans to collect information needed to evaluate the role of program-level factors on the effectiveness of National DPP efforts and to identify best practices." It is not clear that best practices will be identified from the information collected. As seen from the Year 01 and Year 02 data collections, the correlation between recruitment strategies and delivery models is not always accurately portrayed from merely a collection of data. The data collection instruments do not tie successful recruitment mechanisms to other variables, such as gender, age, race, and socioeconomic factors, and do not characterize the intensity of a strategy relative to an outcome.
(R): CDC recognizes that Years 1 and 2 of Funding Opportunity Number DP12-1212PPHF12 were initial start-up years. Data collected during those years informed new variables and new methods of determining program strategies. We are now in Year 3 and approaching Year 4 of the funding opportunity; thus, CDC can assure AHIP that the level of statistical analyses (including correlations, multiple regression, hierarchical modeling, etc.) has changed. The analyses have changed due to the increase in the number of sites reporting data, the number of data points, and the number of program participants, as well as the fact that the data will be longitudinal across multiple years. Thus, in subsequent grantee-level annual reports, AHIP will see improved characterization of the intensity and correlation of strategies.
(R): Participant characteristics including age, gender, and race/ethnicity are being included in these models. These “variables” come from the site-level data already reported to the Diabetes Prevention Recognition Program (DPRP). We do not ask that grantee sites repeat these data when we are able to gather them from another data source. This is to reduce the burden on grantees and their sites.
(R): Please note that when AHIP submitted these comments, they did not have access to the narrative statement that accompanied the Federal Register Notice (FRN) explaining that CDC collects different information about the program through different clearance processes. Participant-level data are collected via the DPRP. Grantee and site-level data are collected separately via this mechanism.
(C): For instance, the instrument lists more than 10 recruitment methods that can be used for participant enrollment and retention. The instrument does not examine the frequency, reach, and success of the recruitment methods used.
(R): CDC agrees that information on the frequency and reach of recruitment methods would be very desirable to collect; however, CDC feels that the burden required to collect these data across all grantee sites is not justified in this program assessment.
(C): The following questions should be reviewed to enhance the quality, utility, and clarity of the information received:
(C): Question #9 "Where or at what level are you targeting participant recruitment efforts?" It is not clear what is meant by "level": individual, group, employer, or decision maker for a group (such as a CEO, a human resources or benefits specialist).
(R): Based on previous grantee feedback, the response options to this question have been better clarified in the most recent spreadsheet along with the glossary of terms accompanying the data collection instrument. CDC assures AHIP that these are updated.
(C): Question #11 "What materials, if any, were used to help drive participants to the site's lifestyle change programs?" It is not clear what is meant by "help drive participants to the site": does this refer to physically transporting participants, encouraging them to visit a website, or encouraging them to visit a location where the National DPP is being held?
(R): This question refers to use of participant recruitment materials by the grantee to direct potential participants to the program. This could be directing them to a website, a contact person/phone number, or a flyer about where class locations are listed. The response options for Question #11 are clarified in the glossary.
(C): Question 26 "For the current grant year, please provide the number of employers who were educated about the benefits and cost-savings of the evidence-based lifestyle change program as a covered health benefit for employees." Question 26 fails to account for the potential number of covered lives by asking only about the number of employers. One employer may be a state or county office or a large company, and thus may have a greater impact than can be measured by just asking for the number of employers that a grantee or sub-grantee has educated.
(R): CDC agrees that number of covered lives is important and, therefore, asks that this data be reported in the grantees’ annual progress reports that go to the Procurement and Grants Office. We do not ask that grantees repeat these data when we are able to gather them from another data source. This is to reduce the burden on grantees and their sites.
(C): Question 27 "For the current grant year, please provide the number of employers who offer the National DPP program on-site." Question 27 fails to account for the potential number of covered lives by asking only about the number of employers. One employer may be a state or county office or a large company, and thus may have a greater impact than can be measured by just asking for the number of employers that a grantee or sub-grantee has educated.
(R): CDC agrees that the number of covered lives is important and, therefore, asks that this data be reported in the grantees' annual progress reports that go to the Procurement and Grants Office. We do not ask that grantees repeat these data when we are able to gather them from another data source. This is to reduce the burden on grantees and their sites.
(C): The information collected does not seek data on whether participants, on average, have moved from a prediabetes diagnosis to eliminating their risk for the disease. An A1C test (also known as HbA1c or glycated hemoglobin) provides a general indication of diabetes control. While A1C levels are one stated criterion for an individual being able to participate in a National DPP, there is no analysis of whether A1C levels have changed or been impacted at the conclusion of the intervention. This instrument does not seek to understand weight loss percentage, physical activity minutes, A1C levels, or anything related to clinical measures and clinical quality. The instrument's emphasis is on the implementation process instead of clinical patient outcomes. The reduction of blood glucose levels is critical information for insurers and employers to know when evaluating the merits of an evidence-based intervention. The paucity of clinical outcome measures in this instrument presents a potential obstacle to greater uptake and coverage for the National DPP.
(R): It is beyond the scope of this grantee program assessment to determine a change "from a prediabetes diagnosis to eliminating their risk for the disease." The one-year lifestyle change program was proven effective in the original 2002 Diabetes Prevention Program randomized controlled trial and years of follow-up efficacy and effectiveness studies (see the recent Community Guide Task Force findings for a review of the findings and evidence: http://www.thecommunityguide.org/diabetes/combineddietandpa.html). Thus, the purpose of the community-based implementation of the National DPP is to deliver the program with fidelity to the 2015 DPRP Standards and Operating Procedures, which are based on the aforementioned evidence (found here: http://www.cdc.gov/diabetes/prevention/recognition/standards.htm). If the program is delivered with fidelity, we know that the ultimate outcomes of a 5-7% weight loss and improvement in physical activity minutes will result in preventing or delaying type 2 diabetes for a significant number of participants. The purpose of this grantee program assessment is not to repeat the trials and studies, but to implement effective community programs.
(R): The focus of this data collection is formative and summative evaluation of the grantee implementation. CDC does look at certain de-identified biometric outcomes. The grantee assessment is not designed to collect information on “A1c levels” or other “blood glucose levels” of participants. Such lab tests are beyond the scope of this program assessment and not necessary to assess program fidelity.
(R): CDC does look at certain de-identified biometric outcomes. Regarding “weight loss percentage, physical activity minutes,” CDC does collect these “clinical measures” via the DPRP. We do not ask that grantee sites repeat these data when we are able to gather them from another data source. This is to reduce the burden on grantees and their sites. Please note that these data are also correlated with grantee program assessment data and provided in grantee annual reports. Such information was previously provided to all grantees, including AHIP, for Years 1 and 2. We will do the same in Years 3 and 4.
(R): Please note that when AHIP submitted these comments, they did not have access to the narrative statement that accompanied the Federal Register Notice (FRN) explaining that CDC collects different information about the program through different clearance processes. Participant-level data are collected via the DPRP. Grantee and site-level data are collected separately via this mechanism.