SUPPORTING STATEMENT
PART A
FOR OMB CLEARANCE PACKAGE
Submitted by:
Ingrid J. Hall, Ph.D., MPH
iah9@cdc.gov
(770) 488-3035
Supported by:
Division of Cancer Prevention and Control
National Center for Chronic Disease Prevention and Health Promotion
Centers for Disease Control and Prevention
Atlanta, Georgia
TABLE OF CONTENTS
1. Circumstances Making the Collection of Information Necessary
2. Purpose and Use of the Information Collection
3. Use of Information Technology and Burden Reduction
4. Efforts to Identify Duplication and Use of Similar Information
5. Impact on Small Businesses or Other Small Entities
6. Consequences of Collecting the Information Less Frequently
7. Special Circumstances Relating to Guidelines of 5 CFR 1320.5
8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
9. Explanation of Any Payment or Gift to Respondents
10. Assurance of Confidentiality Provided to Respondents
11. Justification for Sensitive Questions
12. Estimates of Annualized Burden Hours and Costs
13. Estimates of Other Total Annual Cost Burden to Respondents or Record Keepers
14. Annualized Cost to the Federal Government
15. Explanation for Program Changes or Adjustments
16. Plans for Tabulation and Publication and Project Time Schedule
17. Reason(s) Display of OMB Expiration Date is Inappropriate
18. Exceptions to Certification for Paperwork Reduction Act Submissions
List of Tables
Table A.12-2 Annualized Cost to Respondents
Table A.14-1 Annualized Cost to the Federal Government
Table A.16-1 Project Time Schedule
Table A.16-2 Univariate Analysis
Table A.16-3 Bivariate Analysis
List of Attachments
Attachment 1: Authorizing Legislation
Attachment 2: 60-Day Federal Register Notice
Attachment 3: Notification of Exemption from CDC IRB Review
Attachment 4: Data Collection Instrument
Attachment 5: Survey Cover Letters
Attachment 6: Signature Postcard
1. Circumstances Making the Collection of Information Necessary
The Centers for Disease Control and Prevention (CDC) requests approval from the Office of Management and Budget (OMB) for a one-time collection of information to assess primary care physicians’ attitudes and practices regarding prostate cancer screening. The survey will be conducted with a nationally representative sample of primary care physicians. The data collection for which approval is sought is in accordance with CDC's mission to conduct, support, and promote efforts to prevent cancer and to increase early detection of cancer, authorized by Section 301 of the Public Health Service Act [42 USC 241] (Attachment 1). Moreover, data collection is central to the prevention research agenda of the Division of Cancer Prevention and Control, CDC.
Prostate cancer is the most commonly diagnosed cancer among men in the United States. In 2006, it was estimated that approximately 234,460 new cases of prostate cancer would be diagnosed and 27,350 men would die from the disease—accounting for 10% of cancer-related deaths in men (American Cancer Society, 2006). The risk of developing prostate cancer increases gradually with advancing age, with more than 70% of all cancers diagnosed after age 65 (Jemal, et al, 2005). Furthermore, prostate cancer is more likely to be diagnosed among African American men and among men with a family history of the disease (American Cancer Society, 2004).
The prostate specific antigen (PSA) test and digital rectal examination (DRE) are typically used together to screen for prostate cancer. The PSA test is a blood test that measures a protein produced by the prostate gland. The conventional cut-point of 4.0 ng/ml detects the large majority of prostate cancers; however, a significant percentage of early prostate cancers (10-20%) will be missed by PSA testing alone. Although DRE is less effective than the PSA test in detecting prostate cancer, it has been shown to be useful in finding cancers in men with normal PSA levels and has been recommended as a test for early detection (Mettlin et al, 1993); for this reason, it is important to perform the two tests together. National data indicate that prostate cancer screening rates in the United States are high, with between 34% and 75% of males 40 years of age and older reporting ever having had a PSA test (Sirovich, et al, 2003).
There is considerable debate among physicians regarding whether to screen men for prostate cancer. This is due in part to the lack of evidence showing a significant survival benefit associated with screening (Mistry & Cable, 2003; McDougall et al, 2000), the risks and complications often associated with screening and treatment (Boyden, 1996), and the low sensitivity of the tests (Thompson, et al, 2003). As a result, the use of PSA and DRE screening tests is viewed as controversial by many primary care physicians (Montaño & Phillips, 1995).
Major medical organizations are divided on whether men should be routinely screened for prostate cancer. For example, the American Cancer Society and the American Urological Association recommend that physicians offer screening to their patients. They suggest that men begin routine screening at age 50, or earlier if they are African American or have a first-degree relative who has been diagnosed with prostate cancer (American Cancer Society, 2005). These groups believe that finding and treating cancer early may cure it before it spreads to other areas of the body, where the chance for cure is lower.
Other medical organizations, such as the United States Preventive Services Task Force, the American Academy of Family Physicians, the American College of Physicians, and the American College of Preventive Medicine, conclude that because screening for prostate cancer has not been proven in clinical studies to save lives, routine screening either should not be done or the evidence is insufficient to recommend for or against routine screening using prostate specific antigen (PSA) testing or digital rectal examination (National Guideline Clearinghouse, 2003). This latter group feels that the tests may do more harm than good, given their potential side effects and other quality-of-life issues. All of the above organizations recommend discussing the benefits and limitations of screening with patients so that each patient can make an informed decision about PSA testing. Through informed decision-making (IDM), the patient obtains full information about prostate cancer, including risk for the disease and the associated screening tests with their potential benefits and limitations.
In the limited qualitative research on physician discussions with patients, findings show that physicians vary in both whether and to what extent they discuss the advantages and disadvantages of PSA testing (Cooper, et al, 2004). Physicians, especially primary care physicians, should be knowledgeable enough to offer their patients information on personal risk and on the potential benefits and limitations of prostate cancer testing. They should also stay informed of current data on race/ethnicity and other factors related to prostate cancer so that they can communicate this critical information to the patients most at risk for the disease.
In one study, about one-third of physicians said that they did not discuss the PSA screening tests with their patients (Dunn, et al, 2001). The top five reasons that these physicians gave for not discussing the tests were lack of time, the complexity of the topic, language barriers between physician and patient, the belief that such a discussion would not influence ordering the test, and physician personal lack of knowledge about the benefits and risks of PSA screening.
2. Purpose and Use of Information Collection
To date, no comprehensive national survey has been conducted to assess primary care physician attitudes and practices related to prostate cancer screening. A recent formative evaluation (qualitative focus groups) of physician practices regarding prostate cancer screening found that most physicians routinely recommend screening; however, many physicians do not discuss the risks and benefits of screening with their patients (Cooper et al, 2004). Few studies have examined the relationship between physician characteristics and screening for prostate cancer, and the research that exists has been limited in scope (Moran, et al, 2000; Edlefsen, et al, 1999; Fowler, et al, 1998). Information gathered from a national survey will allow CDC, other researchers, and clinicians to: (1) examine how PSA testing, DRE, and prostate cancer screening follow-up are conducted in community practice across the United States, and (2) provide a valuable knowledge base to guide the development and implementation of interventions to improve primary care physician adherence to established prostate cancer screening guidelines in the United States.
The objective of the proposed study is to conduct a national survey of primary care physicians to examine physician attitudes and behaviors related to prostate cancer screening. The survey will be administered to a random sample of primary care physicians, thereby (a) providing national estimates of physician use of PSA screening, (b) examining beliefs regarding the efficacy of prostate cancer screening and treatment, and (c) assessing the situations in which physicians recommend screening using PSA and/or DRE as well as which guidelines they follow. CDC will use the results from this survey to examine the demographic, behavioral, attitudinal, and situational characteristics that influence physicians’ screening behaviors. Specifically, the survey will provide information about primary care physicians’:
Attitudes and perceptions about the benefits and risks of screening; attitudes toward prostate cancer screening guidelines (e.g., United States Preventive Services Task Force (USPSTF) guidelines); opinions about PSA testing; attitudes toward screening elderly men and men with co-morbidities; and contraindications to screening (e.g., patient characteristics or health status).
Practices regarding prostate cancer screening methods; criteria for ordering screening tests; use of and adherence to clinical guidelines for prostate cancer screening; referral patterns; type and frequency of screening tests ordered; barriers and facilitators to patient and practice-based screening; practices regarding abnormal results; use of informed or shared decision making in practice; use of patient education tools during physician-patient discussions; and counseling and educational messages for patients with abnormal PSA test results. The survey also allows the evaluation of physician-patient discussions regarding prostate cancer by examining (a) the extent and nature of physician-patient discussions related to prostate cancer and prostate cancer screening; (b) the circumstances and context of the discussion; (c) the frequency of the discussion; and (d) the patients’ role and level of participation in the discussion.
The survey will be conducted with a nationally representative sample of primary care physicians in selected specialties who are likely to have a patient population of males aged 40 and older. The sample will be stratified by race in order to allow race comparisons on attitudes and practices. Weighting the survey participants by probability of selection will allow us to describe and generalize findings to the primary care population. Thus the approach to this study is primarily descriptive.
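The weighting step described above can be illustrated with a short sketch. This is background only, not part of the study protocol: design weights for a stratified sample are the inverse of each stratum's selection probability. The sampling-frame counts below are hypothetical; the per-stratum sample sizes are those planned for this survey (see Section A.12).

```python
# Illustrative sketch only: design weights for a stratified sample are the
# inverse of each stratum's selection probability. Frame counts below are
# hypothetical; the sample sizes are those planned for this survey.
frame = {"african_american": 24000, "other": 180000}   # hypothetical frame counts
sample = {"african_american": 1200, "other": 1800}     # planned sample sizes

# weight = frame size / sample size = 1 / (selection probability)
weights = {s: frame[s] / sample[s] for s in sample}

# Weighting each respondent up by its stratum weight recovers the frame total,
# which is what permits generalization to the primary care population.
assert sum(weights[s] * sample[s] for s in sample) == sum(frame.values())
```

Each completed survey in a stratum then represents `weights[s]` physicians in the frame, so estimates computed with these weights generalize to the full population rather than to the (deliberately race-oversampled) respondent pool.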
Without this study, CDC would have limited knowledge of the variables that influence primary care physician practices regarding prostate cancer. It would be difficult for CDC to advise and inform states and other organizations that develop clinical training materials, physician decision support tools (such as informed decision making aids), and the materials physicians use to manage, counsel, and educate patients regarding prostate cancer and PSA screening.
Survey findings will provide national estimates of physician use of clinical guidelines regarding PSA screening and of beliefs regarding the efficacy of prostate cancer screening and treatment, and will describe situations in which physicians recommend screening using PSA and/or DRE. Survey findings will identify where attitudes and practices are most inconsistent with the most current scientific information, guidelines, and policies, as well as common misinformation and misperceptions about prostate cancer and prostate cancer screening. Finally, conducting the nationally representative survey will allow CDC to develop evidence-based materials likely to be more effective in supporting optimal clinical practice and helping providers to counsel patients regarding PSA testing.
3. Use of Information Technology and Burden Reduction
Data collection will involve a mailed survey to physicians. Electronic or web-based completion of the survey was considered, but was dismissed for the following reasons. First, although the National Ambulatory Medical Care Survey (NAMCS) obtained a high response rate using a web-based option, that survey is not comparable to the proposed survey. Physicians, aided by office staff, must first agree to participate in NAMCS and receive training in data collection. NAMCS then requires the clinician to review patient records over a week and record information on patient record forms. This information is conducive to web-based entry because it can be entered all at once, it does not involve measurement of physician opinions, and it can be entered by office staff.
In contrast, the proposed survey takes 30 minutes for the physician to provide opinions and attitudes, which can be done in multiple short time periods (i.e., pick up, put down, pick up later mode), and so is more conducive to print format.
Montaño, et al (1998) found that most physicians prefer a mailed survey, which they can put down when they are busy and pick up again to resume at convenient times (e.g., between patients). Physicians in a busy practice cannot complete the survey in this manner with a web-based survey or with a computer assisted telephone interview (CATI). Although a web site could be designed for physicians to log in repeatedly to sequentially complete the survey, the process of accessing a computer and logging in multiple times is far more difficult and time consuming than picking up and putting down a print survey. Second, although many physicians have some access to the web, many practices do not provide physicians with ready access to individual computers or to the internet. Web-based participation may result in use of a shared computer (e.g., at a nursing station), which would not be conducive to confidentiality. Thus, web-based completion may require completing the survey after hours or at home, increasing respondent burden.
Rural physicians who lack high-speed web access such as DSL or cable may find response times longer, and this may introduce response bias. Third, some physicians lack confidence in the confidentiality of computer-based surveys given recent increases in computer “hacking,” viruses, and use of identifiable computer information such as “cookies.” Older physicians and those who typically defer computer functions to office staff may be less comfortable with computer-based technology. Physicians who lack comfort in computer-based technology may anticipate a longer response time. This may reduce the response rate or, for persons who proceed with the survey, may increase response burden. Finally, a mixed-mode survey (print and web-based) is not recommended for logistic and statistical response reasons. Therefore, we propose to use a printed survey delivered via express mail. Federal Express is our preferred express mail provider.
The survey will be sent by Federal Express since it is usually given directly to the physician rather than being filtered by office staff, and can result in over 80% response (Kasprzyk, et al, 2001). The initial questionnaire mailing will be followed by a reminder postcard after one week, a second mailing to non-respondents at four weeks, a third mailing at week seven, and a fourth and final mailing at week ten. A study-specific computerized tracking and reporting system has been designed to monitor all phases of the study. The database will hold all respondent information and track the study’s progress through all phases. The data management system will track the mailing dates for the questionnaires and postcards, and flags will be set to initiate follow-up mailings and reminder postcards. The receipt of a completed questionnaire, or a refusal, will be logged into this computerized control system. From this system, electronic progress reports will be generated on a weekly basis. This system will reduce respondent burden by ensuring physicians are contacted at appropriate time points and are not sent mailings too many times. In addition, this system will track respondents to ensure that those who have responded are not contacted with reminders. Bar codes containing participant ID numbers will be printed on surveys and signature postcards. Reading of these barcodes upon receipt of signature postcards and surveys will be used to record participants’ final dispositions (complete, ineligible, letter undeliverable, refusal, etc.).
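The follow-up schedule above (reminder postcard at week one; repeat mailings at weeks four, seven, and ten, suppressed once a response is logged) can be sketched as simple flag logic. This is an illustrative sketch only; the function and field names are hypothetical, not part of the contractor's actual tracking system.

```python
from datetime import date

# Hypothetical sketch of the follow-up flag logic described above:
# reminder postcard at week 1; repeat mailings at weeks 4, 7, and 10.
FOLLOW_UPS = [
    ("reminder_postcard", 1),
    ("second_mailing", 4),
    ("third_mailing", 7),
    ("fourth_mailing", 10),
]

def due_follow_ups(initial_mailing: date, today: date, responded: bool):
    """Return the follow-up contacts due to date for one participant.

    Participants with a logged response (or refusal) are excluded, so
    completed cases receive no further reminders."""
    if responded:
        return []
    weeks_elapsed = (today - initial_mailing).days // 7
    return [name for name, week in FOLLOW_UPS if weeks_elapsed >= week]
```

For example, a non-respondent whose initial packet was mailed four weeks ago would be flagged for both the reminder postcard and the second mailing, while a physician whose completed survey has been logged is flagged for nothing.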
Finally, we have taken particular care to design the survey instruments to collect only the minimum information necessary to achieve the goals of the project. This was accomplished by consulting with CDC researchers to identify the most important factors to measure with respect to the project goals and to design questions to measure those factors. Practicing physicians also provided advice about the questions that are most relevant to them and questions that could be deleted from earlier drafts of the survey.
4. Efforts to Identify Duplication and Use of Similar Information
As part of the design of this survey, a literature review was conducted to identify studies of prostate cancer screening attitudes, opinions, and practices among physicians in the US. In addition, CDC staff who are responsible for this activity are knowledgeable of other related surveys by means of systematic searches of the medical and psychological literature, consultation with prostate cancer experts in the US, attendance at professional meetings, national conferences, and informal contacts with staff at other agencies.
A formative evaluation of physician prostate screening practices, which involved conducting 14 telephone focus group discussions with 75 primary care providers practicing in 35 states, was conducted in 2001 (Cooper, et al, 2004). The purpose of the study was to better understand physician behavior related to prostate cancer screening practices, the factors affecting these practices, the feasibility of using prostate cancer educational materials in clinical practice, and physicians’ familiarity with clinical guidelines on PSA screening, and to provide background for a physicians’ survey (Cooper, et al, 2004). However, the findings of this research are limited. Most of the physicians who participated in the telephone focus groups were male (79%), white (84%), and practiced in urban areas with populations of 1 million or more. Thus, this study did not adequately capture the screening practices of minority and rural physicians. Furthermore, the data were collected on a small sample of physicians (N=75) using a semi-structured discussion guide that encompassed only 3 topic areas.
A second formative evaluation explored prostate cancer screening and counseling practices among African American physicians. CDC researchers conducted eight telephone focus groups among a sample of 41 African American primary care physicians representing 22 states. Results from this study indicate that there may be differences in screening practices, factors that influence screening, and patient recommendations among African American physicians compared to non-African American physicians, particularly for African American patients. However, because this study used only African American physicians in its sample, no direct comparison of screening practices or the factors that influence screening among African American and non-African American physicians could be made. Furthermore, neither study examined physician attitudes or beliefs related to prostate cancer screening.
A few national and regional physician surveys of prostate cancer screening have been conducted in recent years. However, these studies focused on: (a) treatment recommendations and management (Kim, et al, 2002; Fowler, et al, 1998), (b) physicians’ opinions about or adherence to screening guidelines (Tudiver, et al, 2002), or (c) countries outside of the United States (Durham et al., 2003; Bunting, et al, 1999). Physician surveys conducted within the United States have surveyed non-primary care physicians (McNaughton-Collins, et al, 2002, 2000; Fowler, et al, 1998) or have been limited to only measuring screening rates (Lawson, et al, 1998). Furthermore, no survey has examined primary care physicians’ practices regarding informed decision making. Based on this information, it was concluded that no similar data collection effort has been conducted or is currently being conducted.
5. Impact on Small Businesses or Other Small Entities
The information being requested or required has been held to the minimum required for the intended use.
6. Consequences of Collecting the Information Less Frequently
This request is for a one-time study. The data are needed to inform CDC initiatives and recommendations for prostate cancer prevention and control. This information is essential to guide future CDC prostate cancer prevention efforts. Without this study, CDC would have difficulty keeping states and other organizations up to date in the area of physician practices regarding prostate cancer screening, since CDC has developed a number of prostate cancer training materials and physician decision support tools that physicians use for and with their patients.
There are no legal obstacles to reduce the burden.
7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
This project fully complies with all guidelines of 5 CFR 1320.5.
8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
A. A notice of this data collection was published in the Federal Register, Volume 71, No. 180, page 54659, on September 18, 2006, as provided in Attachment 2. No comments have been received in response to that notice.
B. The study protocol, including the survey instruments, sampling plan, and data collection procedures, was designed in collaboration with researchers at Battelle, Centers for Public Health Research and Evaluation, through Contract No. 200-2002-00573, Task Order No. 0004. Battelle researchers worked with formative data derived from interviews and focus groups of primary care physicians conducted by CDC to draft the survey instruments.
The CDC also has an internal work group of staff from the Division of Cancer Prevention and Control. This CDC work group provided oversight and guidance to Battelle researchers on issues concerning the design of the survey instrument, the sampling design, and data collection procedures. The work group members include:
Name                         Title                 Affiliation/Year   Phone Number
Donald Blackman, PhD         Epidemiologist        CDC/2005           (770) 488-3023
Steven Leadbetter, MS        Math. Statistician    CDC/2005           (770) 488-4143
Louie Ross, PhD              Behavioral Scientist  CDC/2003-06        (770) 488-3087
Lisa Richardson, MD, MPH     Medical Officer       CDC/2005-06        (770) 488-4351
Ingrid Hall, PhD, MPH        Epidemiologist        CDC/2005-06        (770) 488-3035
The survey was pre-tested by five primary care physicians, who participated in confidential pre-testing of the draft instrument. The pretests were conducted sequentially so that the instrument could be revised and improved after each one. The physicians provided advice regarding the content, relevance, and clarity of the survey and its instructions, as well as estimates of the time it takes to complete the survey. The instrument was modified based on their comments.
There were no major problems that could not be resolved.
9. Explanation of Any Payment or Gift to Respondents
Obtaining high survey response rates is particularly difficult for busy professionals like physicians. However, there is clear and consistent evidence that monetary incentives significantly increase response rates in most surveys, and experts on survey methods such as Kasprzyk, et al (2001) and Dillman (2000; 1978) recommend their use.
Several studies specifically designed to test the effects of incentives on physician survey response rates have confirmed the importance of monetary incentives. One study by Everett, et al (1997) found that response rates were 18 percentage points higher among physicians receiving incentives (63% vs. 45%). Another study by Tambor et al (1993) found that significantly more physicians responded when a $25 incentive was provided compared with a no-incentive control group (62.0% vs. 18.3%). A third study by Berk, et al (1993) divided physicians into three groups: Group 1 received a monetary incentive on the initial mailing, Group 2 received a monetary incentive on a second mailing to non-responders, and Group 3 received no incentive. Response rates for the 3 groups were 63%, 50%, and 40%, respectively. Gunn and Rhodes (1981) and Weber, et al (1982) tested incentives of $0, $25 and $50, and found increased physician response rates for higher incentives. Similarly, Kasprzyk, et al (2001) tested incentives of $0, $15 and $25 and found increased response with higher incentives (27%, 75% and 81%, respectively).
The above studies clearly indicate that physician incentives should be used to maximize the response rate to the survey. Berry and Kanouse (1987) also tested the timing of incentive payments and found that a $20 payment with the survey mailing resulted in significantly higher response (78%) than a promise of payment after return of the completed survey (66%). Also, a recent study of a large sample of physicians found a lower response rate among the promised-incentive group (56%) and a higher response rate (71.5%) among the up-front-incentive group (Denevo, et al, 2004). Clearly it is best to provide incentives at the time the survey is sent rather than upon return of the completed survey. Therefore, the monetary incentives for this study will be included in the initial survey packet sent to physicians.
An incentive amount of $40 will be provided to physicians. This amount was selected for two reasons. First, studies have found that as the amount of incentive increases, response rates also increase. Too small an amount is easy to ignore and may be insignificant particularly to physicians whose salaries tend to be higher than many other professionals who are surveyed. A $40 incentive is not likely to be ignored by a physician. Smaller amounts may be sufficient for a short survey, while a larger amount is necessary to obtain high response on longer surveys. Montaño, et al, (2003) provided $50 incentives and obtained 81% response on a 43-page survey of physicians and non-physician clinicians of STD risk assessment and prevention practices. Second, this amount is near the optimal amount used by studies that have found positive associations between incentive amount and response rate. Although two studies found greater response when doubling the amount of the incentive (Weber, et al, 1982; Gunn and Rhodes, 1981), there is evidence that response rates drop when the incentive approaches the cost of replacing the respondent’s revenue in their practice (i.e., what their time is worth to the practice).
In sum, the above-cited studies, which drew respondents from the same population we propose to survey, clearly support providing incentives with the survey mailing. The selected incentive amount of $40 is large enough not to be ignored by the physician. It will also avoid being viewed as reimbursement for the physician’s time, since it is far enough below the typical amount that the physician’s time is worth to the practice ($150 to $250 per hour; Guglielmo, 2003). Thus, an optimal response rate from physicians is expected using this level of incentive.
10. Assurance of Confidentiality Provided to Respondents
The CDC Privacy Act Coordinator has reviewed this study and has determined that the Privacy Act is not applicable. Although surveys will be mailed to named respondents, respondent names will not be collected on the completed survey forms. Response data will be retrieved by a unique identification code assigned to each respondent.
The data collection contractor, Battelle Centers for Public Health Research and Evaluation, will assign a unique identification code to each potential respondent and will maintain a tracking file that links respondent ID codes to respondent names. The tracking file is the only place where ID numbers will be linked with respondent names, and will only be used to track survey mailing status and to facilitate follow-up reminders. The tracking file will be stored separately from survey response data, and staff responsible for tracking will be different from those who work with the response data (i.e., coders, keyers, analysts). Once data quality assurance measures (e.g., checking that all mailings have been sent and accounted for, checking for coding errors) are completed, the tracking file information that would allow linking of individuals to their survey response data will be destroyed.
To further protect the respondents’ identity and to ensure that response data are not linked to names, the signature postcard will be returned to the contractor separately from the envelope containing the completed survey. The postcard will have a mini-label on it with the participant’s survey ID, enabling the contractor to enter into the tracking system the results shown on the postcard. Since the returned postcards will have signatures, they will be stored in locked file cabinets separately from the completed surveys. These file cabinets will not be accessible to staff responsible for data analysis or report writing. To further prevent the possibility of linking signature postcards with surveys, the postcards will not have ID numbers printed on them but will have the ID encrypted in the barcodes, which can only be read by a barcode reader attached to a tracking computer.
CDC will receive a de-identified file of response data. All results will be reported in an aggregate manner. Hard copies of surveys will be kept in locked file cabinets when not being edited or keyed. Prior to filing and to being sent for data keying, each survey will be carefully checked for identifying information; any identifying information found will be removed or blacked out. Data files will be backed up and accessed only by Battelle employees. Statements describing the procedures to maintain respondent privacy are included on the survey instrument Introduction page (see Attachment 4).
The study protocol was submitted for CDC IRB review and was determined to be exempt from the requirement for IRB approval (Attachment 3).
11. Justification for Sensitive Questions
The survey instrument, with introductory information on the cover page, is found in Attachment 4; the cover letters are found in Attachment 5. Although race and ethnicity data will be collected, the survey contains no other personal questions generally considered sensitive, such as questions about sexual behavior, religious beliefs, or alcohol or drug use. Some questions relating to physicians’ professional practices are potentially sensitive, in that some physicians could feel anxious about being asked about attitudes and practices that are inconsistent with the most recently released medical information, national clinical practice guidelines, or approved screening test indications. These questions, however, are essential to the purposes of the data collection. In addition, it has been shown that most physicians view national clinical practice guidelines as recommendations rather than as mandated practice standards (Cabana et al., 1999). This was also confirmed by physicians in the qualitative and pretest phases used to design the survey. To reduce potential anxiety about acknowledging practice inconsistent with national guidelines, respondents are reminded on the survey cover page that CDC is seeking information on a variety of practice styles. These issues are addressed in the cover letter (Attachment 5) that will accompany the survey and in the survey instrument Introduction (Attachment 4).
12. Estimates of Annualized Burden Hours and Costs
Estimated burden and cost to respondents and government are based on the estimated time it will take for eligible responding physicians to complete the survey.
A total of 3,000 physicians (1,200 African-American physicians and 1,800 physicians who are not African-American) will be sent the survey. Based on experience with previous clinician surveys, we have made the following estimates of undeliverable, ineligibility, and response rates (St. Lawrence et al., 2002; Montaño et al., 2003; Irwin et al., 2006). We expect 120 (4%) of the sample mailing to be undeliverable. Of the 2,880 surveys delivered, 384 (13%) of the sample are expected to be ineligible, deceased, or moved; in these cases, packets are returned with the reimbursement included, and responses indicating this status are expected to be made by the office manager. Of the remaining 2,496 physicians, we expect 80 percent to complete the survey. Of the 20 percent of non-respondents, some will return the packet and state that they do not wish to participate; others will not return anything (including the reimbursement). Thus it is expected that approximately 2,000 completed surveys will be returned. Of these, it is expected that 1,180 family practice/general practice physicians and 820 general internists will complete the instrument (see Section B.1).
Table A.12-1 lists the numbers of respondents and the burden for primary care physicians to complete the instrument. The instrument pre-test indicated an average time of 30 minutes for eligible physicians to complete the survey; ineligible physicians are estimated to spend 5 minutes returning the screening information. Thus, the total burden for the survey and for screening is expected to be 1,032.5 hours.
Table A. 12-1 Estimates of Annualized Burden Hours and Costs
Type of Respondents | Form Name | Number of Respondents | Number of Responses per Respondent | Average Burden per Response (in hours) | Total Burden (in Hours)
Primary Care Physicians (eligible) | Survey of Physicians’ Practices | 2,000 | 1 | 30/60 | 1,000
Primary Care Physicians (ineligible) | Survey of Physicians’ Practices | 390 | 1 | 5/60 | 32.5
Total | | | | | 1,032.5
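The burden totals above follow from simple arithmetic; as a sanity check, the figures can be reproduced with a short calculation using only the numbers stated in the text and table:

```python
# Sanity check of the sample flow and burden estimates described above.
mailed = 3000
undeliverable = 120                  # 4% of the mailing
delivered = mailed - undeliverable   # 2,880 surveys delivered
ineligible = 384                     # expected ineligible, deceased, or moved
eligible = delivered - ineligible    # 2,496 eligible physicians
completes = eligible * 0.80          # ~1,997, rounded to 2,000 in the text

# Burden hours, using the table's counts: 30/60 hr per eligible complete,
# 5/60 hr per ineligible return (the table budgets 390 ineligible returns).
burden_eligible = 2000 * 30 / 60     # 1,000 hours
burden_ineligible = 390 * 5 / 60     # 32.5 hours
total_burden = burden_eligible + burden_ineligible
print(delivered, eligible, total_burden)
```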
Table A.12-2 shows the estimated cost to respondents. Based on an average physician pay rate of $71/hour (Dictionary of Occupational Titles, US Department of Labor, 2006), the estimated cost burden for the 2,000 eligible physician respondents to complete the survey is $71,000, and the estimated cost burden for ineligible respondents is $2,308. Thus, the total cost burden for the data collection effort is estimated to be $73,308.
Table A. 12-2 Annualized Cost to Respondents
Type of Respondents | Form Name | Number of Respondents | Number of Responses per Respondent | Average Burden per Response (in hours) | Average Hourly Wage Rate | Total Respondent Cost
Primary Care Physicians (eligible) | Survey of Physicians’ Practices | 2,000 | 1 | 30/60 | $71 | $71,000
Primary Care Physicians (ineligible) | Survey of Physicians’ Practices | 390 | 1 | 5/60 | $71 | $2,308
Total | | | | | | $73,308
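The respondent cost estimates are simply the burden hours multiplied by the hourly wage; as a quick check:

```python
# Respondent cost = burden hours x average hourly wage ($71/hour).
wage = 71
cost_eligible = 2000 * 30 / 60 * wage    # 1,000 hours x $71 = $71,000
cost_ineligible = 390 * 5 / 60 * wage    # 32.5 hours x $71 = $2,307.50
total_cost = cost_eligible + round(cost_ineligible)   # rounded to $2,308
print(cost_eligible, round(cost_ineligible), total_cost)
```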
13. Estimates of Other Total Annual Cost Burden to Respondents or Record Keepers
There is no direct cost to respondents.
14. Annualized Cost to the Federal Government
This project has been fully funded by CDC. These costs are the annualized costs, since the study will be carried out in 12 months. Total costs include (1) contract costs for Battelle to conduct the survey, clean and analyze the data, and write a report on the results, and (2) the cost of CDC staff to provide oversight of the study. The total contract cost for carrying out the project is $301,457 over a period of 12 months. The CDC oversight costs include personnel costs of Federal employees involved in oversight, analysis, report preparation, and administrative support, estimated at $23,500 (20% of an FTE at GS-13 plus 5% of an FTE at GS-14) over the entire 12-month project period. Thus, the annualized CDC oversight cost is $23,500. There are no travel, equipment, or printing costs for the government. Thus, the total annualized cost to the government is $324,957.
Table A. 14-1 Annualized Cost to the Federal Government
Item | Annualized Cost
Contractor | $301,457
Technical Monitor @ 20 percent time | $16,000
Co-Technical Monitor @ 5 percent time | $7,500
Total | $324,957
15. Explanation for Program Changes or Adjustments
This is a new study.
16. Plans for Tabulation and Publication and Project Time Schedule
The study will proceed in three phases: survey data collection, data analysis, and report writing. The sample will be drawn and preparation for survey data collection will occur during the first 2 months after OMB clearance. Survey data collection will then begin and will take 4 months, so data collection will be completed within 6 months of OMB clearance. Data cleaning will be completed within 8 months of OMB clearance, and data analysis within 9 months. Report writing will be ongoing during the data analysis, and a Final Report from the contractor will be completed within 10 months of OMB clearance. Dissemination of results through CDC websites and publications will be carried out in months 11-12 after OMB clearance. Table A.16-1 summarizes the study activities and the months following OMB clearance during which they will be performed.
Table A. 16-1 Project Time Schedule
Activity | Time Period (Months after OMB Clearance)
Sample preparation | Months 1-2
Main data collection | Months 3-6
Data cleaning and analysis | Months 7-9
Report writing | Months 9-10
Final Report from contractor | Month 10
Disseminate results | Months 11-12
Battelle will submit a draft report on the survey methods and findings to CDC and will then prepare a Final Report that incorporates CDC comments and recommendations. The Final Report will be used by CDC as a basis for publishing the findings in peer-reviewed journals; for preparing a summary of the findings for CDC’s Division of Cancer Prevention and Control website, which sampled physicians and members of the public can access; and for preparing summaries that will serve as the basis for examining the demographic, behavioral, attitudinal, and situational characteristics that influence physicians’ prostate cancer screening practices.
The analysis of the survey data will include univariate, bivariate, and multivariate analyses. Below we first describe the plan for data preparation and weighting, followed by descriptions of the analyses to be conducted.
Data Preparation and Weighting
After the survey data are entered and cleaned, the data file will be prepared for analysis. Case weights will be assigned in order to adjust for disproportionate sampling and for non-response bias. This is necessary in order to obtain unbiased population estimates of the survey measures by race. The weight assigned to each respondent (the case weight) will be the product of the reciprocal of the physician’s probability of selection (referred to as the disproportionate sampling or base weight) and an adjustment for non-response (described in Section B.2).
It is anticipated that approximately 20 percent of eligible physicians will not respond (see Section B.3). Non-response bias will be addressed through the construction of weighting class adjustments. To construct the weighting classes, physician characteristics that are available for both non-respondents and respondents will be identified. The universe files from which the sample will be selected (see Section B.1) contain several physician characteristics (e.g., sex, practice setting) which may be associated with the propensity to respond and can be used to construct the weighting classes. The weighting class adjustment for the ith cell, denoted Ai, can be calculated as:

Ai = ni / nir

where
ni = number of sampled eligible physicians in the ith cell
nir = number of responding physicians in the ith cell
The base weight of each responding physician falling in the ith weighting class will be multiplied by Ai to produce the weight used for estimation, denoted Wia. The non-response weighting adjustment will be applied in all analyses described below. The disproportionate sampling (base) weight will be applied in all analyses where the two race strata are combined; it is unnecessary in analyses where the strata are compared or analyzed separately.
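The weight construction above can be sketched as follows; the selection probability and cell counts here are hypothetical, for illustration only (in practice they come from the sampling frame and tracking data):

```python
# Case weight = base weight (1 / selection probability)
#               x non-response adjustment A_i = n_i / n_i^r.
def case_weight(selection_prob, sampled_in_cell, responded_in_cell):
    base_weight = 1.0 / selection_prob
    a_i = sampled_in_cell / responded_in_cell   # weighting class adjustment
    return base_weight * a_i

# Hypothetical cell: 50 eligible sampled physicians, 40 of whom responded,
# each selected with probability 0.01.
w = case_weight(0.01, 50, 40)
print(w)   # 125.0
```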
Univariate distributions and descriptive statistics will first be obtained for all variables in the survey. Weighted total and percentage distributions will be generated for categorical variables, and weighted means will be generated for continuous variables.
Weighted total and percentage distributions for categorical variables will be generated as follows. If Xi denotes the reported value for the ith responding physician for the characteristic X, then a weighted total can be expressed as:

X-hat = sum over responding physicians of Wia Xi

where, for a categorical characteristic,
Xi = 1 if the ith responding physician possesses the characteristic of interest
   = 0 otherwise

The corresponding weighted percentage is 100 times X-hat divided by the sum of the weights Wia.
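With hypothetical case weights and indicator values, the weighted total and percentage for a binary characteristic would be computed as:

```python
# Weighted total X-hat = sum of W_i^a * X_i over responding physicians;
# the weights and indicators below are hypothetical.
weights = [125.0, 125.0, 80.0, 80.0, 80.0]   # case weights W_i^a
x = [1, 0, 1, 1, 0]                          # X_i = 1 if characteristic present

weighted_total = sum(w * xi for w, xi in zip(weights, x))
weighted_pct = 100 * weighted_total / sum(weights)
print(weighted_total, round(weighted_pct, 1))
```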
Using these weighting methods, univariate distributions and summary statistics will first be generated to describe physician characteristics and their practice and patient characteristics, measured in Sections I and II of the survey instrument. This is an essential first step in describing the sample and generalizing the findings to the respondent universe.
Univariate analyses using the above estimation procedures will be conducted on items in the remaining sections of the questionnaire in order to describe physicians’ prostate cancer screening practices, screening efficacy and beliefs, social influences and social support, physician perceptions and behaviors, and patient scenarios (Physician Survey, Section III).
Table A.16-2 presents an example of a table that will be produced to report the percentage of physicians who provide prostate specific antigen (PSA) testing for asymptomatic patients as part of their health maintenance examination (HME) (Physician Survey, Section III). Similar tables will be produced for all sections of the questionnaire.
Table A. 16-2 Univariate Analysis
Clinical Action | Yes (95% CI) | No (95% CI)
Provide prostate specific antigen (PSA) testing for asymptomatic patients as part of their HME | |
Bivariate Analyses
Bivariate analyses will next be conducted to: 1) obtain physician subgroup percentages or means on survey measures, 2) test for subgroup differences on those measures, and 3) test for associations between physician characteristics and practice measures. In planning and conducting these analyses, physician characteristics (e.g., specialty, sex, race) and practice characteristics (e.g., type of practice, patient volume) measured in Sections A-B of the survey can be referred to as independent variables. Prostate cancer screening practices, screening efficacy and beliefs, social influences and social support, and physician perceptions and behaviors can be referred to as dependent variables. In addition, we will conduct separate bivariate analyses within each of the race strata (e.g., specialty by screening practices).
When both the independent and dependent measures are continuous, correlation analysis will be used. When the independent measure is nominal, the dependent variable distributions for each group will be displayed in a cross-classification table. For example, it is important to describe and compare the race groups on their providing DRE and/or PSA testing for asymptomatic patients as part of their HME, providing DRE and/or PSA testing for asymptomatic patients with a family history of prostate cancer, the interval they screen for prostate cancer in asymptomatic patients, and the age at which they begin discussing prostate cancer screening with asymptomatic patients. We will compare the two race groups for all of these analyses.
Table A.16-3 presents a shell table example showing a cross-classification of physician race by physicians’ ratings of the frequency of ordering a repeat PSA test when the PSA is higher than the expected normal range. Similar tables will be produced for other physician and practice characteristics (independent variables) crossed by physician ratings of their prostate cancer screening practices, prostate cancer screening efficacy and beliefs about providing DRE and/or PSA tests, discussing prostate cancer screening, and opinions about screening guidelines and organizational recommendations (dependent variables). It is of particular interest to determine whether response distributions for these dependent measures are similar or dissimilar across the various physician and practice characteristics. For nominal or ordinal dependent variables, chi-square tests will initially be applied to make this determination. Similar comparisons can be made by specialty.
Table A. 16-3 Bivariate Analysis
Frequency of Ordering a Repeat PSA Test
Physician Race | Never | Sometimes | Half the time | Usually | Always
African American Physicians | | | | |
Non-African American Physicians | | | | |
In this table example, the chi-square test will determine whether the frequency with which physicians order a repeat PSA test when the PSA is higher than the expected normal range is independent of physician race. The FREQ procedure in the SAS statistical software package provides two options for performing chi-square tests. The first option is based on the observed minus the expected cell values (the Pearson chi-square). The second option is based on the test for no interaction in a log-linear model. We will likely use the first option, as the second is not recommended if cell estimates of zero are expected. If the chi-square test in such analyses indicates that the response distributions are not homogeneous across the levels of physician race, then alternative categorical models, which take advantage of the ordinality of many of the response categories, will be considered. One such model is the row-effects model. If unit-spaced scores are assigned to the frequency (column) categories, then this model can be expressed as:
log(mi,j+1 / mi,j) = aj + ui

where log(mi,j+1 / mi,j) is the logit for the jth and (j+1)th columns within the ith row, ui is the row effect of making response j+1 instead of j, and aj is the difference in the column variable for the jth and (j+1)th columns. If the logits for adjacent columns for each physician race category are plotted, the row-effects model postulates that the plots for the various combinations of adjacent categories should be parallel.
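A Pearson chi-square test of independence of this kind (observed minus expected cell counts) can be sketched as follows; the cell counts are hypothetical, for illustration only:

```python
# Pearson chi-square test of independence for a 2 x 5 race-by-frequency table.
observed = [
    [10, 25, 15, 30, 20],   # African American physicians (hypothetical)
    [20, 30, 10, 25, 15],   # Non-African American physicians (hypothetical)
]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)   # (2-1)(5-1) = 4
print(round(chi2, 2), df)
```

The statistic would then be compared with a chi-square distribution on df degrees of freedom; in practice the statistical package reports the p-value directly.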
Since the action frequency categories as well as many other measures in the survey can be considered to be ordinal or interval, parametric statistical analysis can be used. If the distribution is not highly skewed, a weighted mean for each physician race grouping can be calculated in the example above. Determining whether the weighted means are significantly different will be accomplished using one-way analysis of variance. This approach assumes that the weighted means are normally distributed, and will therefore not be appropriate if there is clumping of the responses in one or two of the response categories. If the dependent measure is not normally distributed, the chi-square analysis above will be relied upon.
It is also important to assess the strength of association between physicians’ attitudes and beliefs about discussing prostate cancer screening with patients (measured in Screening Efficacy and Beliefs, Section III) and the percentage of physicians who provide prostate cancer screening (measured in Prostate Cancer Screening Practices, Section III). As described above, crosstabulations and chi-square analyses will be conducted, or correlations will be computed where appropriate. Additionally, the beliefs about discussing prostate cancer screening with patients (in Screening Efficacy and Beliefs, Section III) will be assessed for internal consistency, and the items that contribute to high internal consistency will be summed to compute an attitude measure for each action. Correlation analyses will be used to assess the strength of association of attitude scores with physicians’ reports of their providing each action.
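The text does not name a specific internal-consistency statistic; Cronbach's alpha is one common choice, sketched here with hypothetical item scores:

```python
# Cronbach's alpha for a set of belief items (an assumed choice of
# internal-consistency statistic; the item data below are hypothetical).
def cronbach_alpha(items):
    """items: one list of respondent scores per item (equal lengths)."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    n_resp = len(items[0])
    total_scores = [sum(item[r] for item in items) for r in range(n_resp)]
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(total_scores))

# Three perfectly parallel items yield an alpha of 1.0.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
print(round(alpha, 3))
```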
Multivariate Analyses
Finally, multivariate analyses will be conducted to assess the effects of multiple variables on physician attitudes and practices related to providing prostate cancer screening, and on discussions and clinical actions with asymptomatic patients. Logistic regression will be used if the dependent variable is binary, and ordinary least squares regression will be used when the dependent variable is continuous. When the dependent variable is nominal, we will use a generalized logit model; if the dependent variable is ordinal, a proportional odds model may be appropriate. As candidate predictor variables, we will consider factors that were associated with physician attitudes and practices in the bivariate analyses above. Forward stepwise regression will be used to allow the strongest significant predictors to enter each equation. The goal of these analyses will be to identify combinations of physician and practice characteristics that best explain physician attitudes and practices. This information will provide guidance about profiles or groups of physicians to target with future prostate cancer control and education efforts.
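The forward stepwise logic can be sketched as follows on simulated data (all variables hypothetical; actual analyses would use a statistical package, and this illustrates only the selection step, adding at each stage the predictor that most reduces the residual sum of squares):

```python
import random

# Simulated data: 4 candidate predictors, of which columns 0 and 2 drive y.
random.seed(0)
n = 200
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(n)]
y = [2.0 * row[0] - 1.0 * row[2] + random.gauss(0, 0.1) for row in X]

def solve(A, b):
    """Solve A beta = b by Gauss-Jordan elimination with partial pivoting."""
    m = len(b)
    M = [A[i][:] + [b[i]] for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, m + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][m] / M[i][i] for i in range(m)]

def rss(cols):
    """Residual sum of squares for OLS of y on intercept + selected columns."""
    Z = [[1.0] + [X[i][c] for c in cols] for i in range(n)]
    p = len(cols) + 1
    XtX = [[sum(Z[i][a] * Z[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(Z[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    return sum((y[i] - sum(beta[a] * Z[i][a] for a in range(p))) ** 2
               for i in range(n))

# Forward stepwise: greedily add the predictor that most reduces the RSS.
selected, remaining = [], [0, 1, 2, 3]
for _ in range(2):
    best = min(remaining, key=lambda c: rss(selected + [c]))
    selected.append(best)
    remaining.remove(best)
print(selected)
```

In a full implementation each candidate's entry would also be tested for statistical significance before it is retained; here the loop simply stops after two steps for brevity.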
17. Reason(s) Display of OMB Expiration Date is Inappropriate
The OMB expiration date will be displayed; no exemption from displaying the expiration date is being requested.
18. Exceptions to Certification for Paperwork Reduction Act Submissions
There are no exceptions being requested for this clearance.