SUPPORTING STATEMENT
PART B
FOR OMB CLEARANCE PACKAGE
Submitted by:
Ingrid J. Hall, Ph.D., MPH
iah9@cdc.gov
(770) 488-3035
Supported by:
Division of Cancer Prevention and Control
National Center for Chronic Disease Prevention and Health Promotion
Centers for Disease Control and Prevention
Atlanta, Georgia
TABLE OF CONTENTS
1. Respondent Universe and Sampling Methods
2. Procedures for the Collection of Information
3. Methods to Maximize Response Rates and Deal with Nonresponse
4. Tests of Procedures or Methods to be Undertaken
5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Bibliography
List of Tables
Table B.1-1 Estimated Size of Respondent Universe and Proposed Study Sample
List of Attachments
Attachment 1: Authorizing Legislation
Attachment 2: 60-Day Federal Register Notice
Attachment 3: Notification of Exemption from CDC IRB Review
Attachment 4: Data Collection Instrument
Attachment 5: Survey Cover Letters
Attachment 6: Signature Postcard
1. Respondent Universe and Sampling Methods
The population of interest for the survey is non-federal primary care physicians in the 50 states and the District of Columbia (excluding territories) who are active in patient care and who have office-based practices. Primary care physicians include the specialties of family medicine, general practice, and general internal medicine. Physicians who are involved in full-time teaching, research, or administration; are retired; or are in training will not be selected for inclusion in the study. We propose to exclude federal physicians, who comprise only 2.4% of all physicians, because (a) almost all federal physicians are hospital-based; (b) they are disproportionately distributed among US armed forces bases and government hospitals and medical facilities; (c) only about one-fourth are involved in primary care; and (d) they have a unique patient mix (e.g., members of the armed forces, their families, and veterans). Physicians who are not in patient care will be excluded because they are not involved in prostate cancer screening. Further, we propose excluding hospital-based physicians because nearly two-thirds are residents or senior fellows and only a small proportion of the full-time staff are involved in primary care.
Table B.1-1 lists the sampling frame size, sample size, and expected response rate by physician specialty. We will use disproportionate stratification by race to include 1,200 African American physicians and 1,800 non-African American physicians. Within these two strata, specialty will be represented in proportion to size (see the justification under “Survey Sample Selection” in Section B.2).
Table B.1-1. Estimated Size of Respondent Universe and Proposed Study Sample

| | Number in Universe | Sample Size | Undeliverable (4%) | Ineligible (13%) | Response (80%) |
|---|---|---|---|---|---|
| African-American Physicians (AA) | | | | | |
| Family Practice/General Practice | 2,187 | 708 | -- | -- | -- |
| General Internal Medicine | 2,074 | 492 | -- | -- | -- |
| Total number of AA Physicians | 4,261 | 1,200 | 48 | 156 | 797 |
| Non-African-American Physicians (NAA) | | | | | |
| Family Practice/General Practice | 52,869 | 1,062 | -- | -- | -- |
| General Internal Medicine | 36,294 | 738 | -- | -- | -- |
| Total number of NAA Physicians | 89,163 | 1,800 | 72 | 234 | 1,195 |
| Total | 93,424 | 3,000 | 120 | 390 | 1,992 |
The sampling frame for the physician survey will be purchased from Medical Marketing Services (MMS) Inc. MMS maintains a list of physicians derived from the American Medical Association (AMA) Masterfile and a list of osteopathic physicians from the American Osteopathic Association (AOA) Masterfile. The AMA Physician Masterfile is the most comprehensive list of physicians in the United States (including both members and non-members of the AMA). The AMA Physician Masterfile includes all allopathic physicians and approximately 80% of osteopathic physicians. Standardized procedures will be used to remove duplicates from the two lists so that doctors of osteopathy appear only once in the sampling frame. In addition to physician names and addresses, various demographic and practice-related information will be obtained for the selected sample.
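For illustration only, the sketch below shows one way the combined lists might be deduplicated and a disproportionate stratified sample drawn. The data values, field names (physician_id, race_stratum), and scaled-down allocation are hypothetical; the actual matching keys and procedures used by MMS may differ.

```python
import pandas as pd

# Illustrative only: tiny stand-ins for the AMA and AOA list extracts.
ama = pd.DataFrame({"physician_id": [1, 2, 3], "race_stratum": ["AA", "NAA", "NAA"]})
aoa = pd.DataFrame({"physician_id": [3, 4], "race_stratum": ["NAA", "AA"]})

# Combine the two lists and drop duplicates so each DO appears only once.
frame = (pd.concat([ama, aoa], ignore_index=True)
           .drop_duplicates(subset="physician_id", keep="first"))

# Disproportionate stratified selection (Table B.1-1 uses 1,200 AA / 1,800 NAA;
# the allocation here is scaled down to fit this toy frame).
allocation = {"AA": 1, "NAA": 2}
sample = (frame.groupby("race_stratum", group_keys=False)
               .apply(lambda g: g.sample(n=allocation[g.name], random_state=1)))
print(sample)
```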
2. Procedures for the Collection of Information
This section describes (1) determination of sample size and the power that is expected for statistical tests of hypotheses, and (2) survey collection procedures for physicians.
An important goal of this survey is to obtain accurate point estimates of the various survey measures, for the overall sample as well as for subgroups. This requires a sample large enough to yield narrow confidence intervals around those estimates. Another important part of the analysis will involve comparisons between subgroups (for example, by physician race and specialty) on the survey measures. This goal drives the determination of sample size because of the need for sufficient numbers in each subgroup. For this reason, we propose to use disproportionate stratification, oversampling African American physicians relative to their share of the universe (see Table B.1-1). In the analysis file, each case will be weighted by the inverse of its probability of selection.
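As a minimal sketch of the base-weighting step (assuming simple random sampling within the two race strata and the counts in Table B.1-1), each case's design weight is its stratum universe size divided by its stratum sample size:

```python
# Base design weights: inverse of the within-stratum selection probability.
# Counts taken from Table B.1-1; nonresponse adjustments would follow in practice.
universe = {"AA": 4261, "NAA": 89163}
selected = {"AA": 1200, "NAA": 1800}

weights = {stratum: universe[stratum] / selected[stratum] for stratum in universe}
print(weights)  # AA cases weighted ~3.55, NAA cases ~49.5
```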
Based on an anticipated response rate of 80% and the ability to detect differences between subgroups with a precision of 5 percent, the estimated sample size for the study is 2,000 completed surveys (800 from African American physicians and 1,200 from physicians who are not African American). The sizes of the two samples were determined by statistical power and cost considerations, maximizing the statistical power of the estimates while maintaining a reasonable cost.
We first calculated the power for comparisons of proportions (for example, the proportion of older versus younger African American physicians who indicate that they routinely recommend PSA screening, or the proportion of older versus younger non-African American physicians who indicate that they discuss the advantages and disadvantages of PSA screening with their patients prior to the test) when the true proportion in one subgroup is .50, .25, or .125. The power of these comparisons decreases as the true proportion increases from .125 to .50. With a sample of 1,200 completed surveys, we can detect a difference of 0.051 when the true proportion is .125, a difference of 0.070 when the true proportion is .25, and a difference of 0.087 when the true proportion is .50, all with a power of .80 (β = .20) at p ≤ .05 (α = .05). In other words, a sample of 1,200 completed surveys from non-African American physicians is large enough to detect a 9 percentage point difference between subgroups (for example, older physicians versus younger physicians who indicate that they discuss the advantages and disadvantages of PSA screening with their patients prior to the test). With a sample of 800 completed surveys from African American physicians, we can detect a difference of 0.061 when the true proportion is .125, a difference of 0.086 when the true proportion is .25, and a difference of 0.106 when the true proportion is .50, again with a power of .80 (β = .20) at p ≤ .05 (α = .05). Thus, a sample of 800 African American physicians is large enough to detect a 10 percentage point difference between subgroups. When we combine the two samples for a total of 2,000 completed surveys, we can detect smaller differences with the same power at the same level of significance.
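The minimum detectable differences quoted above follow from the standard normal-approximation formula for comparing two independent proportions. The sketch below, which assumes two equal-sized subgroups within each sample (e.g., 600 and 600 within the 1,200), reproduces figures close to those in the text; small discrepancies reflect the exact formula and subgroup split used in the original calculations.

```python
from math import sqrt
from scipy.stats import norm

def min_detectable_diff(p, n_per_group, alpha=0.05, power=0.80):
    """Normal-approximation minimum detectable difference for a
    two-sided comparison of two independent proportions."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # 1.96 + 0.84
    return z * sqrt(2 * p * (1 - p) / n_per_group)

for p in (0.125, 0.25, 0.50):
    print(p,
          round(min_detectable_diff(p, 600), 3),   # non-AA sample of 1,200
          round(min_detectable_diff(p, 400), 3))   # AA sample of 800
```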
We also investigated the power of these samples to detect significant odds ratios using multivariate logistic regression. With a sample of 1,200 completed surveys from non-African American physicians, we will be able to detect significant odds ratios of 1.30 when the probability of the event (e.g., the probability of discussing advantages and disadvantages of screening) is .125, odds ratios of 1.225 when the probability of the event is .25, and odds ratios of 1.175 when the probability of the event is .50, assuming a multiple correlation coefficient of .3 between explanatory variables, power of .80, and α = .05. In comparison, the odds ratios that we can detect with a sample of 800 completed surveys from African American physicians are 1.375, 1.275, and 1.225 at base rates of .125, .25, and .50, respectively, again assuming a multiple correlation coefficient of .3, power of .80, and α = .05. Thus, using the proposed sample sizes, we have sufficient power to detect significant odds ratios of approximately 1.4 comparing, for example, older versus younger African American physicians who counsel their patients about the advantages and disadvantages of screening, controlling for sex, practice type, and practice location.
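The detectable odds ratios above are consistent with a common sample-size approximation for logistic regression (Hsieh's method), in which the requirement for a single standardized covariate is inflated by a variance inflation factor of 1/(1 − ρ²) to account for correlation among covariates. The following sketch, under those assumptions, yields values close to the sample sizes above:

```python
from math import log
from scipy.stats import norm

def n_required(odds_ratio, p_event, rho=0.3, alpha=0.05, power=0.80):
    """Approximate n to detect `odds_ratio` per standard deviation of a
    covariate in logistic regression, with multiple correlation `rho`
    between that covariate and the other explanatory variables."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    beta = log(odds_ratio)                       # log-odds per 1 SD
    n_single = z**2 / (p_event * (1 - p_event) * beta**2)
    return n_single / (1 - rho**2)               # variance inflation factor

print(round(n_required(1.175, 0.50)))   # ~1327, close to the 1,200 available
print(round(n_required(1.30, 0.125)))   # ~1146
```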
The sample size will be further adjusted based on estimated response and eligibility among those selected. Based on previous experience, we expect that 4% of the surveys will be undeliverable and that 13% of sampled physicians will be ineligible to participate because they are retired, do not see patients, or are reported by office staff to be deceased. Eighty percent of the remainder of the sample is expected to complete the survey. Taking these response and eligibility rates into account, in order to obtain 800 completed surveys from African American physicians and 1,200 completed surveys from non-African American physicians, we will need to select 1,200 African American physicians and 1,800 non-African American physicians for the survey sample. Thus, a total of 3,000 physicians will be randomly selected from the sampling frame.
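As a quick check of the arithmetic behind Table B.1-1 (assuming, as in the table, that the undeliverable and ineligible percentages apply to the full selected sample):

```python
# Expected survey yield from 3,000 selected physicians (Table B.1-1).
selected = 3000
undeliverable = round(0.04 * selected)                 # 120
ineligible = round(0.13 * selected)                    # 390
eligible = selected - undeliverable - ineligible       # 2,490
completed = round(0.80 * eligible)                     # 1,992, ~2,000 target
print(undeliverable, ineligible, eligible, completed)
```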
Data will be collected through a self-administered mail survey. The survey instrument is presented in Attachment 4. Section I contains questions about physician and practice characteristics. It also includes screening questions to determine whether the physician provides at least 8 hours of outpatient care per week and is therefore eligible to complete the remainder of the survey. Section II contains questions about the characteristics of the physician’s patient panel. Section III asks questions about clinical practice and attitudes, with subsections on prostate cancer screening practices, screening efficacy and beliefs, social influences and social support, physician perceptions and behaviors, and patient scenarios.
Initial survey packets will be sent to physicians via Federal Express. Packets will be sent via priority US mail to physicians with PO Box addresses, since Federal Express does not deliver to those addresses. This mode of mailing has been demonstrated to result in a higher response rate than first-class mail (Kasprzyk, et al, 2001). The packets will include a cover letter, the survey, a stamped self-return signature postcard (with space to indicate a reason for ineligibility), a stamped self-return envelope, and $40 cash as compensation for taking time to participate. The cover letter will be printed on CDC letterhead and personalized. The letter will emphasize that the survey seeks physician input to help CDC and other organizations develop clinical training materials, decision support tools, and materials to counsel and educate patients. The letter will also contain an 800 number for the recipient to call in case the packet does not reach the intended provider. This number will be located in the Battelle office responsible for the survey mailings. Because it is important to ensure that the physician, rather than the office manager, completes the questionnaire, we will ask each participating physician to sign the signature postcard attesting that he/she completed the survey instrument. The signature postcard is also designed for an ineligible physician to indicate that he/she is no longer practicing or does not see patients who are at risk for prostate cancer. The signature postcard will also provide an easy method for someone opening the package to inform us that the physician is deceased or has moved. We have found that when physicians have moved or are ineligible or deceased, such a postcard is more likely to be returned than an entire survey packet. Copies of the survey cover letters are included in Attachment 5, and the signature postcard is included in Attachment 6.
A reminder postcard will be sent via first-class mail to all sampled physicians one week after the initial packet mailing. The first mailing and reminder postcard are expected to result in the return of about 45% of the questionnaires. A copy of the reminder postcard is included in Attachment 6.
A second mailing will be sent via Federal Express to non-respondents three weeks after the reminder postcard. The second mailing will include a cover letter reminding the physician that he/she previously received the survey and reiterating the importance of his/her response. This letter will be printed on CDC letterhead and personalized. It is expected that this will increase the overall return rate to about 65%. A copy of the second reminder letter is included in Attachment 5.
A third mailing will be sent via Federal Express to all non-respondents three weeks after the second mailing. This should encourage another 10% of the original sample to return a completed survey, bringing the return rate to approximately 75%. A copy of the third reminder letter is included in Attachment 5.
A fourth mailing will be sent via Federal Express to all non-respondents three weeks after the third mailing. This should encourage another 5% of the original sample to return a completed survey, bringing the return rate to approximately 80%. A copy of the fourth reminder letter is included in Attachment 5.
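Taken together, the projected returns from the mailings above accumulate as follows (a simple bookkeeping check of the stated percentages):

```python
# Projected cumulative return rate across the postcard and four mailings.
increments = {
    "initial mailing + reminder postcard": 0.45,
    "second mailing": 0.20,
    "third mailing": 0.10,
    "fourth mailing": 0.05,
}

cumulative = 0.0
for step, gain in increments.items():
    cumulative += gain
    print(f"{step}: {cumulative:.0%}")  # 45%, 65%, 75%, 80%
```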
3. Methods to Maximize Response Rates and Deal with Nonresponse
Physicians who spend most of their time on direct patient care are a particularly difficult group to survey. These physicians are inundated with mail, faxes, and telephone calls from patients, pharmaceutical companies, sales representatives, researchers, and colleagues. Most physicians’ offices have administrative personnel assigned to sort through these various incoming messages and only pass on to the physician those most in need of his/her direct attention. Consequently, surveys of practicing physicians generally result in lower response rates than surveys of other groups of respondents, including other professionals. Nevertheless, reviews of survey methods clearly point to a number of procedures that improve response rates among physicians and mid-level providers. The proposed plan for data collection incorporates these proven methods.
In the past, collecting data by mail has been shown to be the best approach among a variety of groups. This is particularly true for physicians. Other alternatives, including face-to-face interviews and computer-assisted telephone interviews, each have their own strengths and weaknesses. For example, personal face-to-face interviewing has generally resulted in the highest response rates (between 70-90%) but is also the most expensive type of data collection effort and takes the greatest amount of time to complete. The costs of using this method for this survey would be prohibitive. Telephone surveys have traditionally had response rates comparable to face-to-face interviews (between 70-90%) while costing substantially less to conduct. However, telephone interviews must be kept shorter: it is more difficult to keep a respondent's attention on the telephone than in a face-to-face interview. Methods researchers recommend that telephone interviews be kept to 20 minutes for an optimal response rate. Response rates for telephone interviews have traditionally been high because telephone norms in our society generally do not condone hanging up on a caller (Dillman, 1978). However, there is evidence that telephone norms and practices are changing. Survey operations researchers find that they are spending more time screening for valid telephone numbers because of the growth of new telephone numbers due to pagers, modems, and faxes. In addition, many individuals have telephone answering services or voice mail, allowing them to screen out unwanted calls. With new telephone norms and multiple unusable numbers, telephone data collection is becoming less efficient and more costly. The cost and effort of contacting physicians and scheduling a personal or telephone interview would be very high.
Mailed surveys are the cheapest form of data collection, but researchers have usually had to contend with much lower response rates; approximately 20-40 percentage points lower with one mailing and no follow-up compared to one mailing with additional contacts (Dillman, 2000). The disadvantage of mail surveys is that the decision of whether to participate is under the complete control of study respondents. The length of the survey has been shown to affect this decision. The optimal length for a self-administered mail survey, without negatively affecting response rates, is about 10-12 pages, or about 125 close-ended items on a questionnaire (Dillman, 1978). For the same response time burden, one can ask more questions with a self-administered mail survey than in a telephone interview, thus allowing self-administered questionnaires to be longer than telephone interviews, although not as long as in-person interviews. Research has shown that self-administered mail surveys can be longer if the topic is of high interest or importance to respondents.
To overcome the low response rates typically encountered with mail surveys, Dillman (1978) proposed a mail survey methodology based on social exchange theory. His method, called the Total Design Method (TDM), has been shown to increase response rates among mail survey respondents to as high as 77%, comparable to telephone and in-person response rates (Dillman, 2000). The Total Design Method described by Dillman in 1978, now called the Tailored Design Method, consists of a number of steps to improve survey response rates. Its premise is that researchers can encourage higher response rates by rewarding respondents through monetary or non-monetary means, reducing perceived costs by reducing the effort required of respondents, and establishing trust by treating the respondent as a partner in the process. Dillman recommended that, in operationalizing these factors, researchers pay attention to the details of contact with respondents, the wording of letters, incentives tied to completion, questionnaire length, mailings, and follow-up with study participants (Dillman, 2000).
Multiple methods studies, reviews, and meta-analyses have been conducted to determine which factors lead to an increase in response rates in mail surveys. Generally, studies show that preliminary notification, multiple follow-ups with respondents, monetary and non-monetary incentives, use of first-class stamped envelopes, and appropriate salutations have positive effects on response rates among physicians (Baron, De Wals and Milord, 2001; Kasprzyk, et al, 2001; Collins, et al, 2000; Dillman, 2000, 1978; McLaren and Shelley, 2000). Other variables, such as sponsorship or endorsement, use of personalization techniques in mailings, and length of questionnaires, have shown inconsistent effects on response rates (Dillman, 2000, 1978; Maheux, Legault, and Lambert, 1989; Mullen, et al, 1987). Yammarino, et al (1991) and Fox, et al (1988) conducted meta-analyses of the published survey methods literature, comparing all the factors in these studies. The studies reviewed used experimental or quasi-experimental designs and manipulated a wide variety of factors [17 in Yammarino, et al (1991); 9 in Fox, et al (1988)] thought to be related to survey response rates. The meta-analyses were multi-factorial, allowing all variables of interest to be compared with each other for effects on response rates. These researchers concluded that preliminary notification, follow-up, return envelope and postage, and monetary incentives were effective in increasing response rates. Yammarino, et al (1991) showed that these factors increased response rates by 28.5%, 30.6%, 18.4%, and 2.4%, respectively. Fox, et al (1988) found that sponsorship of surveys by organizations increased response rates, but this was not found in the Yammarino, et al (1991) meta-analysis.
Previous surveys of physicians have examined the effect of endorsements, reminders, type of survey, and incentives on response rate. Overall, these studies had response rates that ranged from 11% to 92% with a mean physician response rate of 52%. The higher response rates were obtained with special populations such as graduates of certain programs or members of specific organizations. In general, most of these studies did not follow the procedures recommended by Dillman (2000, 1978) in his Total Design Method, or procedures shown by survey methods researchers to be effective in increasing response rates.
This study will use Dillman’s techniques to maximize physician response rates. The methods proposed for this study will include multiple follow-up procedures by mail after the initial survey has been sent, inclusion of stamped return envelopes, and monetary incentives to participate, based on Dillman’s Tailored Design Method (2000) and a thorough review of survey methods research described above. This plan represents the best approach to balancing the need to control costs with the desire to achieve high response rates. The methods proposed for this study have been highly successful in achieving 70% response to a national survey of physicians (St. Lawrence, et al, 2002), 80% response to a Washington State survey of primary care clinicians (Montaño, et al, 2003), and 82% response to a mailed survey to 743 primary care clinicians in two large health plans (Irwin, et al, 2002).
A reminder postcard will be sent to all sampled physicians one week after the initial mailing. This postcard will thank respondents who have completed the survey and ask those who have not yet responded to do so promptly. A phone number will be included in case the initial mailing did not reach the intended physician. A second mailing, similar to the first but with a different cover letter, will then be sent via Federal Express to all non-respondents within three weeks of the first mailing. Third and fourth mailings will follow similarly.
To encourage participation, the survey introduction provides an estimate of the time needed to complete the entire survey and notes that some sections may not be relevant to a given respondent, thus reducing the time needed to complete the survey. In addition, the survey is designed in sections. The first section contains screening questions to determine whether the physician spends at least 8 hours per week in outpatient clinical practice. There are clear instructions to return the survey at this point if the physician is ineligible to continue.
4. Tests of Procedures or Methods to be Undertaken
The survey instrument was developed based on qualitative analysis of interviews and focus groups with physicians from each of the selected clinical specialties. The qualitative research and analyses were designed to identify relevant issues surrounding prostate cancer screening attitudes and practice. These results demonstrated that most physicians fell into one of two patterns of PSA screening: routine and non-routine screening. Routine screeners are physicians who begin regularly screening asymptomatic patients with no known risk factors around age 50, while non-routine screeners are physicians who do not regularly screen such patients or make recommendations about whether or not to screen. However, non-routine screeners typically discuss the implications of screening with patients before offering the PSA test. The qualitative data revealed that African American physicians routinely recommend prostate cancer screening to their patients, and many of them begin screening their African American patients at the age of 40. The qualitative analyses and discussions between Battelle and CDC researchers resulted in the specification of main concepts to measure and in the development of survey items. The Physician and Practice Characteristics section and Patient Characteristics section were adapted from the STD Contact survey, which received OMB approval (OMB Control number 0920-0431, expiration 6/30/2000) and was conducted in 1999 (St. Lawrence, et al, 2002).
The survey instrument is included in Attachment 4. Multiple phases of survey design, review, and revision were conducted to finalize the survey instrument. The instrument was designed first, based on analysis of the physician interviews and focus groups. This draft instrument was reviewed by experts at CDC, and the survey was extensively revised based on these reviewers’ comments. The revised survey instrument was reviewed by Dr. William Phillips (a primary care research expert) and was further revised. The instrument was next sent to key individuals inside and outside CDC for review, and their recommendations were used to revise the instrument. The instrument was then pre-tested by practicing physicians and revised based on their feedback. For the pre-test, we recruited 5 primary care physicians (including 2 African American physicians). Pre-test participants were asked to review the survey instrument and to then participate in a 1-hour telephone call to provide their comments and recommendations. Final revisions of the survey instrument were made based on the review and recommendations of these practicing physicians.
Data Collection Procedures
All data collection procedures, question formats, and response scales to be used in this study have been previously tested by Battelle. These procedures, which have been used to design questionnaires relevant to practicing physicians and to obtain high response rates, have been described in conference presentations, including an invited symposium on methods to maximize physician survey response (Kasprzyk, et al, 2000).
5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Battelle Centers for Public Health Research and Evaluation (CPHRE) staff worked with staff from CDC to design the study protocol and data collection instruments. Daniel Montaño, Ph.D. (206-528-3105) and Danuta Kasprzyk, Ph.D. (206-528-3106) led the Battelle effort to design this protocol and data collection instruments. William Phillips, M.D., MPH (206-528-3126) assisted in the design of the survey instrument. Diane Burkom, M.A. (410-377-5660) assisted in the design of the data collection procedures. Charles Wolters, M.S. (410-377-5660) assisted in the design of the sampling plan. Steve Leadbetter, M.S. (770-488-4143), a statistician in CDC’s Division of Cancer Prevention and Control, assisted in review of the sampling and analysis plans.
Battelle will collect and analyze the data for CDC. The overall data collection and analysis effort will be directed by Drs. Montaño and Kasprzyk. Battelle’s Survey Operations unit will collect the data under the direction of Jeanine Christian. Drs. Montaño and Kasprzyk will analyze the data with assistance from Charles Wolters on sample weighting. Drs. Montaño and Kasprzyk will be responsible for writing the Final Report.
Ingrid Hall, Ph.D., MPH, Division of Cancer Prevention and Control, National Center for Chronic Disease Prevention and Health Promotion, CDC, is the technical monitor who will approve and receive all contract deliverables and collaborate on data analysis (770-488-3035).
Bibliography
American Cancer Society. (2006). Cancer facts & figures-2006. Atlanta, GA, American Cancer Society.
Baron, G., De Wals, P., & Milord, F. (2001). Cost-effectiveness of a lottery for increasing physicians’ responses to a mail survey. Evaluation and the Health Professions, 24(1):47-52.
Berk, M.L., Edwards, W.S., & Gay, N.L. (1993). The use of a prepaid incentive to convert nonresponders on a survey of physicians. Evaluation & The Health Professions, 16(2):239-245.
Berry, S.H. & Kanouse, D.E. (1987). Physician response to a mailed survey: An experiment in timing of payment. Public Opinion Quarterly, 51:102-116.
Boyden, A. (1996). Prostate cancer screening: what general practitioners and patients need to know. Australian Family Physician, 25(9 Suppl 2):S86-90.
Bunting, P.S., Goel, V., Williams, J.I., & Iscoe, N.A. (1999). Prostate specific antigen testing in Ontario: reasons for testing patients without diagnosed prostate cancer. Canadian Medical Association Journal, 160:70-77.
Cabana, M.D., Rand, C.S., Powe, N.R., Wu, A.W., Wilson, M.H., Abboud, P.C., & Rubin, H.R. (1999). Why don’t physicians follow clinical practice guidelines? A framework for improvement. Journal of the American Medical Association, 282(15):1458-1465.
Collins, R.L., Ellickson, P.L., Hays, R.D., & McCaffrey, D.F. (2000). Effects of incentive size and timing on response rates to a follow-up wave of a longitudinal mailed survey. Evaluation Review, 24(4):347-63.
Cooper, C., Merritt T.L., Ross, L.E., John, L.V., & Jorgensen, C.M. (2004). To screen or not to screen, when clinical guidelines disagree: primary care physicians' use of the PSA test. Preventive Medicine, 38:182-191.
Delnevo, C.D., Abatemarco, D.J., & Steinberg, M.B. (2004). Physician response rates to a mail survey by specialty and timing of incentive. American Journal of Preventive Medicine, 26(3):234-6.
Dictionary of Occupational Titles, US Department of Labor, 2006. http://www.bls.gov/oes/current/oes_nat.htm
Dillman, D.A. (2000). Mail and Internet Surveys: The Tailored Design Method. New York, NY: John Wiley & Sons.
Dillman, D.A. (1978). Mail and Telephone Surveys: The Total Design Method. New York, NY: John Wiley & Sons.
Dunn, A.S., Shridharani, K.V., Lou, W., Bernstein, J., & Horowitz, C.H. (2001). Physician-patient discussions of controversial cancer screening tests. American Journal of Preventive Medicine, 20:130-134.
Durham, J., Low, M., & McLeod, D. (2003). Screening for prostate cancer: a survey of New Zealand general practitioners. New Zealand Medical Journal, 116(1176):U476.
Edlefsen, K.L., Mandelson, M.T., McIntosh, M.W., Anderson, M.R., Wagner, E.H., & Urban, N. (1999). Prostate-specific antigen for prostate cancer screening? Do physician characteristics affect its use? American Journal of Preventive Medicine, 17:87-90.
Everett, S.A., Price, J.H., Bedell, A.W., & Telljohann, S.K. (1997). The effect of a monetary incentive in increasing the return rate of a survey to family physicians. Evaluation & the Health Professions, 20(2):207-214.
Fowler, F.J., Bin, L., Collins, M.M., Roberts, R.G., Oesterling, J.E., Wasson, J.H., & Barry, M.J. (1998). Prostate cancer screening and beliefs about treatment efficacy: a national survey of primary care physicians and urologists. American Journal of Medicine, 104:526-532.
Fox, R.J., Crask, M.R., & Kim, J. (1988). Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly, 52:467-491.
Greenlee, R.T., Murray, T., Bolden, S., & Wingo, P.A. (2000). Cancer statistics, 2000. CA, A Cancer Journal for Clinicians, 50(1):7-33.
Guglielmo, W. (2003). Physicians' Earnings: Our exclusive survey. Medical Economics, 80:71.
Gunn, W.J. & Rhodes, I.N. (1981). Physician response rates to a telephone survey: Effects of monetary incentive level. Public Opinion Quarterly, 45:109-115.
Irwin, K., Montaño, D.E., Kasprzyk, D., Carlin, L., Freeman, C., Barnes, R., Jain, N., Christian, J., et al. (2006). Cervical cancer screening, abnormal cytology management, and counseling practices in the United States. Obstetrics and Gynecology, 108:397-409.
Irwin, K.L., Anderson, L., Stiffman, M., et al. (2002). Leading barriers to STD care in two managed care organizations: Final results of a survey of primary care clinicians. 2002 National STD Prevention Meeting, March 4-7, San Diego, CA Abstract P96.
Kasprzyk, D., Montaño, D.E., Phillips, W.R., & Armstrong, K. (2000). System for Successfully Surveying Health Care Providers. Invited symposium at the American Public Health Association meeting, November 2000, Boston, MA.
Kasprzyk, D., Montaño, D.E., St. Lawrence, J., & Phillips, W.R. (2001). The effects of variations in mode of delivery and monetary incentive on physicians’ responses to a mailed survey assessing STD practice and patterns. Evaluation and the Health Professions, 24(1):3-17.
Kim, Y., Roscoe, J.A., & Morrow, G.R. (2002). The effects of information and negative affect on severity of side effects from radiation therapy for prostate cancer. Support Care Cancer, 10(5):416-21.
Lawson, D.A., Simoes, E.J., Sharp, D., Murayi, T., Hagan, R., Brownson, R., & Wilkerson, J. (1998). Prostate cancer screening: a physician survey in Missouri. Journal of Community Health, 23:347-358.
Maheux, B., Legault, C., & Lambert, J. (1989). Increasing response rates in physicians’ mail surveys: An experimental study. American Journal of Public Health, 79:638-639.
McDougall, G.J. Jr, Weber, B.A., Dziuk, T.W., & Heneghan, R. (2000). The controversy of prostate screening. Geriatric Nursing, 21(5):245-8.
McLaren, B., & Shelley, J. (2000). Response rates of Victorian general practitioners to a mailed survey on miscarriage: randomised trial of a prize and two forms of introduction to the research. Australian and New Zealand Journal of Public Health, 24(4):360-364.
McNaughton-Collins, M., Barry, M.J., Zietman, A., Albertsen, P.C., Talcott, J.A., Walker-Corkery, E., Elliott, D.B., & Fowler, F.J. Jr. (2002). United States radiation oncologists' and urologists' opinions about screening and treatment of prostate cancer vary by region. Urology, 60(4):628-33.
Mettlin, C., Jones, G., Averette, H., Gusberg, S.B., & Murphy, G.P. (1993). Defining and updating the American Cancer Society guidelines for the cancer-related checkup: Prostate and endometrial cancers. CA, A Cancer Journal for Clinicians, 43:42-46.
Mistry, K., & Cable, G. (2003). Meta-analysis of prostate-specific antigen and digital rectal examination as screening tests for prostate carcinoma. Journal of the American Board of Family Practice, 16(2):95-101.
Montaño, D.E., Kasprzyk, D., & Phillips, W.R. (2003). Primary Care Providers’ Role in HIV/STD Prevention. Final Report to the National Institute of Mental Health. Grant No. 5 R01 MH52997-04.
Montaño, D.E., Kasprzyk, D., Phillips, W.R., & John, L. (1998). Evaluation of Physicians’ Knowledge, Attitudes, and Practices Related to Screening for Colorectal Cancer. Final Report to the American Cancer Society.
Montaño, D.E. & Phillips, W.R. (1995). Cancer screening by primary care physicians: a comparison of rates obtained from physician self-report, patient survey, and chart audit. American Journal of Public Health, 85(6):795-800.
Moran, W.P., Cohen, S.J., Preisser, J.S., Wofford, J.L., Sheldon, B.J., & McClatchey, M.W. (2000). Factors influencing use of the prostate-specific antigen screening test in primary care. American Journal of Managed Care, 6:315-324.
National Guideline Clearinghouse (2003). Guideline synthesis: Screening for prostate cancer. National Guideline Clearinghouse [On-line]. Available: http://www.guideline.gov
St. Lawrence, J.S., Montaño, D., Kasprzyk, D., Phillips, W.R., Armstrong, K.A., & Leichliter, J. (2002). STD Screening, Testing, Case Reporting, and Clinical and Partner Notification Practices: A National Survey of US Physicians. American Journal of Public Health, 92(11):1784-1788.
Sirovich, B.E., Schwartz, L.M., & Woloshin, S. (2003). Screening men for prostate and colorectal cancer in the United States: does practice reflect the evidence? JAMA, 289(11):1414-1420.
Tambor, E.S., Chase, G.A., Faden, R.R., Geller, G., Hofman, K.J., Holtzman, N.A. (1993). Improving response rates through incentive and follow-up: The effect on a survey of physicians' knowledge of genetics. American Journal of Public Health, 83(11):1599-1603.
Tudiver, F., Guibert, R., Haggerty, J., Ciampi, A., Medved, W., Brown, J.B., Herbert, C., Katz, A., Ritvo, P., Grant, B., Goel, V., Smith, P., O'Beirne, M., Williams, J.I., & Moliner, P. (2002). What influences family physicians' cancer screening decisions when practice guidelines are unclear or conflicting? Journal of Family Practice, 51(9):760.
Weber, S.J., Wycoff, M.L., & Adamson, D.R.(1982). The Impact of Two Clinical Trials on Physician Knowledge and Practice. Arlington, VA: Market Facts, Inc.
Yammarino, F.J., Skinner, S.J., & Childers, T.L. (1991). Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly, 55:613-619.