
SUPPORTING STATEMENT FOR THE

NATIONAL ADULT TOBACCO SURVEY









PART B








Submitted by:

Martha Engstrom

Centers for Disease Control and Prevention

Department of Health and Human Services

Submitted April 2009

Revised October 6, 2009

TABLE OF CONTENTS


A. JUSTIFICATION


1. Circumstances Making the Collection of Information Necessary

a. Background

b. Privacy Impact Assessment Information

c. Overview of the Data Collection System

d. Items of Information to be Collected

e. Identification of Website(s) and Website Content Directed at Children Under 13 Years of Age


2. Purpose and Use of Information Collection

a. Purpose of Information Collection

b. Anticipated Uses of Results by CDC

c. Anticipated Uses of Results by Other Federal Agencies and Departments

d. Use of Results by Those Outside Federal Agencies

e. Privacy Impact Assessment Information

3. Use of Improved Information Technology and Burden Reduction


4. Efforts to Identify Duplication and Use of Similar Information

5. Impact on Small Businesses or Other Small Entities


6. Consequences of Collecting the Information Less Frequently


7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5


8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

a. Federal Register Announcement

b. Consultations


9. Explanation of Any Payment or Gift to Respondents


10. Assurance of Confidentiality Provided to Respondents

a. Privacy Impact Assessment Information


11. Justification for Sensitive Questions


12. Estimates of Annualized Burden Hours and Costs

a. Estimated Annualized Cost to Respondents


13. Estimates of Other Total Annual Cost Burden to Respondents or Record Keepers


14. Annualized Cost to the Government


15. Explanation for Program Changes or Adjustments


16. Plans for Tabulation and Publication and Project Time Schedule

a. Tabulation Plans

b. Publication Plans

c. Time Schedule for the Project


17. Reason(s) Display of OMB Expiration Date is Appropriate


18. Exceptions to Certification for Paperwork Reduction Act Submissions


B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Respondent Universe and Sampling Methods


2. Procedures for the Collection of Information

a. Statistical Methodology for Stratification and Sample Selection

b. Estimation and Justification of Sample Size

c. Estimation and Statistical Testing Procedures

d. Use of Less Frequent Than Annual Data Collection to Reduce Burden

e. Survey Instrument

f. Data Collection Procedures

g. Informed Consent

h. Quality Control


3. Methods to Maximize Response Rates and Deal with Nonresponse

a. Maximizing Response Rates

b. Dealing with Nonresponse

4. Tests of Procedures or Methods to be Undertaken


5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or

Analyzing Data

a. Statistical Review

b. Agency Responsibility

c. Responsibility for Data Collection


REFERENCES

LIST OF APPENDICES

  A. Authorizing Legislation

  B. 60-Day Federal Register Notice

  C. 60-Day Federal Register Notice Comments

  D. Item-Level Justification for National Adult Tobacco Survey Questionnaire

  E. Consultants on the Development of the NATS

  F. National Adult Tobacco Survey Questionnaire (English)

  G. National Adult Tobacco Survey Questionnaire (Spanish)

  H. IRB Approval Letter

  I. Confidentiality Agreement Signed by Interviewers

  J. Sample Table Shells

  K. Advance Letter


B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


The NATS will develop state and national estimates of tobacco use behaviors and of exposure to pro- and anti-tobacco influences among non-institutionalized adults residing in the United States. In addition, national estimates will be developed for cell phone users, and methods will be explored for integrating the cell phone estimates into the national estimates.

B.1 RESPONDENT UNIVERSE AND SAMPLING METHODS


The universe for the study will consist of non-institutionalized adults (age 18 and over) residing in the 50 states and the District of Columbia (DC). The sampling design is driven by the need to generate precise state-level estimates. At the same time, the aggregated national sample will support estimates for subgroups defined by gender, age, and race/ethnicity, as well as analysis of the determinants of smoking, and of tobacco use in general, in the various subpopulations. Respondents will be selected through Random Digit Dialing (RDD) from two sampling frames: one for landlines and one for cell phones. The data collection will be conducted using Computer Assisted Telephone Interviewing (CATI).

B.2 PROCEDURES FOR COLLECTION OF INFORMATION

B.2.a Statistical Methodology for Stratification and Sample Selection


State Samples


The NATS sample will consist of a list-assisted RDD sample of telephone numbers. To build the list-assisted frame, all possible telephone numbers are divided into blocks (or banks) of 100 numbers. A 100-block is the set of 100 phone numbers that share the first eight digits of a 10-digit phone number; the last two digits range over the 100 possible combinations from 00 to 99. To enhance efficiency and reduce costs, the frame excludes zero-blocks, i.e., those 100-blocks with no listed phone numbers.
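As an illustration of the frame construction (not the contractor's production code), the short Python sketch below derives the 100-block of a number and drops zero-blocks; the phone numbers and block list are hypothetical.

    # Illustrative sketch: derive 100-blocks and drop zero-blocks
    # (blocks containing no listed numbers) from a toy frame.
    from collections import defaultdict

    def block_of(number: str) -> str:
        """Return the 100-block: the first 8 digits of a 10-digit number.
        The last two digits range over 00-99 within the block."""
        assert len(number) == 10 and number.isdigit()
        return number[:8]

    # Hypothetical directory-listed numbers used to flag working blocks.
    listed = ["3015550012", "3015550087", "4045551234"]

    listed_count = defaultdict(int)
    for num in listed:
        listed_count[block_of(num)] += 1

    # A candidate block stays on the frame only if it has >= 1 listed number.
    candidate_blocks = ["30155500", "30155501", "40455512"]
    frame = [b for b in candidate_blocks if listed_count[b] > 0]
    print(frame)  # ['30155500', '40455512'] -- '30155501' is a zero-block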


Telephone numbers will be stratified into state-based strata according to the primary state served by the area code and prefix combination. Within each state, telephone numbers will be further stratified into a high-density substratum or a low-density substratum based on whether or not the number is listed in local residential telephone directories. Telephone numbers listed in residential directories are most often working residential numbers, whereas unlisted telephone numbers include large numbers of non-working and nonresidential telephone numbers. To leverage this information, the high-density stratum will be oversampled at a 1.5-to-1 ratio relative to the low-density stratum. This oversampling increases sampling efficiency by raising the percentage of working residential numbers selected in the sample. The sample will be selected in independent replicates to facilitate control of the final number of completed interviews.
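The 1.5-to-1 oversampling can be made concrete with a small allocation sketch (Python). The frame counts and the state target below are hypothetical; this is a minimal illustration of the rate ratio, not the actual sample-allocation procedure.

    # Allocate a state's target selections so the high-density (listed)
    # substratum is sampled at 1.5 times the low-density sampling rate.
    def allocate(n_target, n_high_frame, n_low_frame, rate_ratio=1.5):
        # Solve r_low * (rate_ratio * N_high + N_low) = n_target
        r_low = n_target / (rate_ratio * n_high_frame + n_low_frame)
        n_high = round(rate_ratio * r_low * n_high_frame)
        n_low = n_target - n_high
        return n_high, n_low

    # Hypothetical frame: 400,000 listed and 600,000 unlisted numbers.
    print(allocate(10_000, n_high_frame=400_000, n_low_frame=600_000))
    # (5000, 5000): equal counts here, but the listed stratum's sampling
    # rate (5000/400000) is 1.5x the unlisted rate (5000/600000).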



Cell Phone Sample


The cell phone sample will be an RDD sample of phone numbers from cell phone and mixed cell/landline exchanges. The exchanges originate from the Telcordia® TPM™ Data Source; cell phone and mixed-use exchanges are identified by exchange type. The NATS cell phone sample will be stratified implicitly by state to help control the geographic distribution of the sample.


B.2.b Estimation and Justification of Sample Size


This section provides justification for the state-level and national sample sizes. The sample size development was driven by the need to provide required precision levels at the state level. In other words, we developed the minimum sample sizes that will ensure acceptable precision for every state. Note that this approach requires allocating a sufficient sample size even for small states (n=1,863 per state).


State-level estimates


Exhibit B-1 shows the precision that may be expected for state-level estimates with sample sizes of n=1,863 completed interviews for two design effect (DEFF)1 scenarios: DEFF=1.5 and DEFF=2.0. For RDD, list-assisted designs with minimal oversampling as planned for the NATS state samples, the DEFF is anticipated to be between 1.5 and 2.0, as further justified below. The precision is presented in terms of standard error of estimated prevalence rates (percentages or proportions).


Exhibit B-1

Standard Error of State-level Estimates


Estimated Percent      DEFF=1.5      DEFF=2.0

5%                       0.62%         0.71%
10%                      0.85%         0.98%
20%                      1.14%         1.31%
25%                      1.23%         1.42%
50%                      1.42%         1.64%


For state-level estimates, 95% confidence intervals will be within +/- 3.5 percentage points or better for all estimates. Even in the worst scenario, a DEFF of 2.0 coupled with an estimated percentage of 50%, the standard error is expected to be no more than 1.64%. For percentages in the expected range for smoking prevalence, between 20% and 25%, the standard error is less than 1.5% even under the more conservative DEFF scenario.
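For reference, the Exhibit B-1 values follow from the standard formula SE = sqrt(DEFF x p(1-p)/n) with n = 1,863 completed interviews. The short Python sketch below is illustrative only and reproduces the exhibit.

    # Reproduce the Exhibit B-1 standard errors: SE = sqrt(DEFF * p(1-p) / n).
    import math

    def se_pct(p, deff, n=1863):
        """Standard error, in percentage points, of an estimated proportion p."""
        return 100 * math.sqrt(deff * p * (1 - p) / n)

    for p in (0.05, 0.10, 0.20, 0.25, 0.50):
        print(f"{p:.0%}: DEFF=1.5 -> {se_pct(p, 1.5):.2f}%, "
              f"DEFF=2.0 -> {se_pct(p, 2.0):.2f}%")
    # e.g., 50%: DEFF=1.5 -> 1.42%, DEFF=2.0 -> 1.64%, matching the exhibit.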


The expected precision was guided by similar state-level estimates of prevalence rates for tobacco use. Exhibit B-2 presents the current smoking prevalence rates for each state as estimated from the 2007 Behavioral Risk Factor Surveillance System (BRFSS). Exhibit B-3 presents the estimated design effect (DEFF) computed from 2007 BRFSS data for the subset of states whose samples are not geographically stratified, i.e., state samples with sampling designs similar to that planned for the NATS. These DEFF estimates, averaging 1.93, are expected to be comparable to those anticipated for the NATS.


National sample estimates

As the aggregate of the 51 state samples (50 states and the District of Columbia), the national sample will provide excellent precision for combined estimates based on more than 95,000 completed interviews, both overall and for subgroups defined by gender, race/ethnicity, and age. Because state samples will be approximately equal in size, to ensure the precision of state estimates even for small states, unequal weighting effects will make the design effect (DEFF) for national estimates large. This large DEFF, however, is more than compensated for by the very large national sample size. To quantify these two effects, which work in opposite directions, we use an effective sample size, defined as the total sample size divided by the DEFF; the effective sample size is the size of a simple random sample with equivalent precision.


National estimates will be based on effective sample sizes between roughly 10,000 (for those rare variables with DEFFs as high as 9.0) and 45,000 (for those variables with DEFF=2.0). Exhibit B-4 presents standard errors for three illustrative effective sample sizes within this range: 10,000; 20,000; and 30,000.
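The effective sample size arithmetic can be illustrated briefly. The Python sketch below applies the definition n(eff) = n/DEFF with the total national sample taken as roughly 95,000; the DEFF values shown are the bounding cases named above.

    # Effective sample size and the resulting standard error for national
    # estimates: n_eff = n / DEFF, SE = sqrt(p(1-p) / n_eff).
    import math

    def national_se_pct(p, n, deff):
        n_eff = n / deff
        return 100 * math.sqrt(p * (1 - p) / n_eff)

    # With ~95,000 completes: DEFF=9.0 gives n_eff ~ 10,500; DEFF=2.0 ~ 47,500.
    print(f"{national_se_pct(0.50, 95_000, 9.0):.2f}%")  # ~0.49%
    print(f"{national_se_pct(0.50, 95_000, 2.0):.2f}%")  # ~0.23%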



Exhibit B-2

State-level Smoking Prevalence Estimates (2007 BRFSS)


State                                      Percentage    State               Percentage

Nationwide (States, DC, and Territories)      19.7       Missouri                24.5
Nationwide (States and DC)                    19.8       Montana                 19.5
Alabama                                       22.5       Nebraska                19.9
Alaska                                        22.2       Nevada                  21.5
Arizona                                       19.8       New Hampshire           19.3
Arkansas                                      22.4       New Jersey              17.1
California                                    14.3       New Mexico              20.8
Colorado                                      18.7       New York                18.9
Connecticut                                   15.4       North Carolina          22.9
Delaware                                      18.9       North Dakota            20.9
District of Columbia                          17.2       Ohio                    23.1
Florida                                       19.3       Oklahoma                25.8
Georgia                                       19.4       Oregon                  16.9
Guam                                          31.0       Pennsylvania            21.0
Hawaii                                        17.0       Puerto Rico             12.2
Idaho                                         19.1       Rhode Island            17.0
Illinois                                      20.1       South Carolina          21.9
Indiana                                       24.1       South Dakota            19.8
Iowa                                          19.8       Tennessee               24.3
Kansas                                        17.9       Texas                   19.3
Kentucky                                      28.2       Utah                    11.7
Louisiana                                     22.6       Vermont                 17.6
Maine                                         20.2       Virginia                18.5
Maryland                                      17.1       Virgin Islands           8.7
Massachusetts                                 16.4       Washington              16.8
Michigan                                      21.1       West Virginia           26.9
Minnesota                                     16.5       Wisconsin               19.6
Mississippi                                   23.9       Wyoming                 22.1


Exhibit B-3

Design Effect of Smoking Prevalence Estimates

States with No Geographic Stratification on the 2007 BRFSS


State                     Design Effect (DEFF)

Arkansas                         1.92
Colorado                         2.04
District of Columbia             1.84
Illinois                         1.77
Kansas                           1.92
Minnesota                        2.01
New York                         2.03
Oregon                           2.06
Vermont                          2.06
West Virginia                    1.63
Wyoming                          1.95

Average                          1.93


Exhibit B-4 Expected Precision for National Estimates for Various Design Effects (Effective Sample Sizes)



                         Effective Sample Size
Estimated Percent     10,000      20,000      30,000

5%                    0.218%      0.154%      0.126%
10%                   0.300%      0.212%      0.173%
20%                   0.400%      0.283%      0.231%
50%                   0.500%      0.354%      0.289%


As shown in Exhibit B-4, standard errors will be uniformly less than 0.50% for national estimates, so that 95% confidence intervals will be consistently within +/- 1 percentage point.

At the national level, moreover, subgroup estimates will be within +/- 3 percentage points for subgroups defined by gender, race/ethnicity and age groups, as described below.



Exhibit B-5 presents standard errors of subgroup estimates for subgroups that encompass between 15% and 30% of the national population, such as major racial/ethnic groupings and age categories. Effective sample sizes are computed for design effects in the range of 2 to 6 for various subgroups. For example:


  • An effective sample size of 5,000 may arise, among other scenarios, when n=30,000 and DEFF=6; when n=20,000 and DEFF=4; or when n=15,000 and DEFF=3.

  • An effective sample size of 10,000 may arise, among other scenarios, when n=30,000 and DEFF=3; when n=20,000 and DEFF=2; or when n=15,000 and DEFF=1.5.


Exhibit B-5 Expected Precision for National Subgroup Estimates for Different Effective Sample Sizes, n(eff)


Estimated Percent     n(eff)=5,000     n(eff)=10,000

5%                       0.31%            0.22%
10%                      0.42%            0.30%
20%                      0.57%            0.40%
50%                      0.71%            0.50%


B.2.c Estimation and Statistical Testing Procedures


The survey data will be weighted for each state separately to generate state-level estimates. As described in the sampling sections, states are viewed as primary strata in the overall sampling design. The weights will account for differential probabilities of selection, and adjust for non-coverage and non-response.


The derivation of national estimates follows from the state-level estimates using the conceptual framework of stratified sampling (see, for example, Cochran, 1977, Chapter 5). Specifically, if the estimated mean or proportion for state h is designated by y(h), then the national estimate is the sum of S(h)*y(h) over the 51 strata (the 50 states and the District of Columbia), where S(h) is the share of the national population total accounted for by state h.
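A minimal Python sketch of this stratified-estimation identity follows; the three states, shares, and prevalences are hypothetical, and in production all 51 strata appear with the S(h) summing to 1.

    # National estimate as the population-share-weighted sum of state
    # estimates: sum over h of S(h) * y(h).
    state_share = {"AL": 0.015, "AK": 0.002, "WY": 0.002}  # S(h), hypothetical
    state_prev  = {"AL": 0.225, "AK": 0.222, "WY": 0.221}  # y(h), hypothetical

    # With the full set of strata this sum is the national estimate; with
    # this toy subset we normalize by the summed shares so the example
    # still yields a meaningful weighted average.
    national = sum(state_share[h] * state_prev[h] for h in state_share)
    national /= sum(state_share.values())
    print(f"{national:.3f}")  # 0.224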


Variance estimates can also be computed directly for the state-level and national sample estimates using any of the software packages available for survey data analysis, e.g., SUDAAN or the SAS survey procedures (such as PROC SURVEYMEANS). For national estimation, these computations need to reflect the additional stratification level introduced by treating the states as primary strata.


In addition, a separate set of weights will be developed for the cell phone sample component. As part of the methodological investigation into combining the cell phone and landline (RDD) sample components, we will also develop a combined set of weights for the integrated sample. The remainder of this section briefly describes the weighting procedures.


State-level weights will be computed for each state following the general formulation:

FINALWT = STRWT × (1/NPH) × NAD × POSTSTRAT


FINALWT is the final weight assigned to each respondent.


STRWT accounts for differences in the basic probability of selection among strata (subsets of area code/prefix combinations). It is the inverse of the sampling fraction of each stratum.


1/NPH is the inverse of the number of residential telephone numbers in the respondent’s household.


NAD is the number of adults in the respondent’s household.


POSTSTRAT is the number of people in an age-by-sex or age-by-race/ethnicity-by-sex category in the state divided by the sum of the preceding weights for the respondents in the same age-by-sex or age-by-race/ethnicity-by-sex category. It adjusts for noncoverage and nonresponse and forces the sum of the weighted frequencies to equal population estimates for the state.
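As an illustration only, the sketch below (Python) applies the FINALWT formula to a single hypothetical respondent; the actual weighting will be carried out in the survey-processing software, and all input values here are invented.

    # FINALWT = STRWT * (1/NPH) * NAD * POSTSTRAT for one respondent.
    def final_weight(sampling_fraction, n_phone_lines, n_adults,
                     poststrat_pop, poststrat_weight_sum):
        strwt = 1.0 / sampling_fraction                    # STRWT
        nph_adj = 1.0 / n_phone_lines                      # 1/NPH
        nad = n_adults                                     # NAD
        poststrat = poststrat_pop / poststrat_weight_sum   # POSTSTRAT
        return strwt * nph_adj * nad * poststrat

    # e.g., a respondent from a stratum sampled at 1/5,000, in a one-line,
    # two-adult household, in a poststratification cell whose state
    # population is 150,000 and whose summed preliminary weights are 120,000:
    w = final_weight(1/5000, 1, 2, 150_000, 120_000)
    print(round(w))  # 12500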


B.2.d Use of Less Frequent Than Annual Data Collection to Reduce Burden


The NATS was designed from the outset as a one-time data collection.

B.2.e Survey Instrument


The NATS questionnaire (Appendices F and G), built around the Key Outcome Indicators report, contains 157 items. The NATS comprehensively assesses the use of many tobacco products. In addition, it contains questions on demographics and on existing conditions and diseases. Exhibit B-7 outlines the questionnaire topics and the number of questions in each topic area. The questions are in a multiple-choice format.


Exhibit B-7

Questionnaire Topics and Total Number of Questions per Topic



Topic                                          Total Number of Questions

Demographic Items                                          14
Screeners                                                  29
General Health                                              1
Cigarette Smoking                                          28
Other Tobacco Use                                          12
Cessation                                                  29
Secondhand Smoke and Tobacco-Free Policies                 24
Chronic Conditions and Diseases                             7
Opinions and Attitudes                                      9
Smoker Assistance and Sensitive Questions                   4

Total                                                     157



B.2.f Data Collection Procedures


The data collection procedures for the NATS comprise several components, the application of which varies between the landline and cell phone surveys. The components include: (1) advance mailings; (2) loading the sample; (3) managing call attempts; (4) conducting the interview; (5) handling busy signals and no-answers; (6) attempting call-backs; (7) managing refusals and interrupted interviews; and (8) recording call dispositions.


Advance mailings: Advance letters (Appendix K) will be sent to all sampled households for which addresses can be obtained. Letters are addressed "Dear Resident." Respondent names and addresses will be printed on the envelopes for a clean, professional appearance. All envelopes will be stamped with first-class postage, sealed, and carry the official CDC logo. For cell phone respondents, sending an advance letter will not be possible because cell phone numbers cannot be reverse-matched to addresses.


Loading the Sample: The sample will be loaded and resolved monthly. Sample records will have been pre-screened to exclude business and non-working numbers.


Managing Call Attempts: Each call attempt will be given a minimum of five rings. Careful management of the sample allocation and scheduling of interview sessions will ensure adequate coverage of residential households, with a minimum of 15 attempts for unresolved telephone numbers. Persistent "ring, no answer" numbers will be attempted a minimum of four times, at different times of day and days of the week. Each number will be called a minimum of 15 times over six calling periods or until a completed interview is achieved. If a respondent is contacted on the last scheduled call and an interview cannot be completed, another attempt will be made. A six-attempt protocol will be used for the cell phone sample. A lower attempt protocol is appropriate for the cell phone sample for two reasons. First, because random respondent selection is not conducted for the cell phone sample, more interviews are completed on the first contact. Second, refusal conversion will be limited to one additional attempt after an initial refusal. Therefore, fewer attempts are needed to obtain completed interviews from the cell phone sample than from the landline sample.
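The attempt caps described above can be sketched as a simple queuing rule (Python). This is hypothetical logic for illustration, not the contractor's CATI system, and the disposition labels are invented.

    # Attempt caps: 15 attempts for landline numbers, 6 for cell numbers,
    # stopping early once a record is resolved.
    MAX_ATTEMPTS = {"landline": 15, "cell": 6}

    def needs_another_attempt(sample_type, attempts, disposition):
        """Return True if the record should be queued for another call."""
        if disposition in ("complete", "hard_refusal", "nonworking"):
            return False  # resolved records leave the calling queue
        return attempts < MAX_ATTEMPTS[sample_type]

    print(needs_another_attempt("landline", 14, "no_answer"))  # True
    print(needs_another_attempt("cell", 6, "no_answer"))       # False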


Conducting the Interview: A screener will be conducted at the beginning of each call. The screener consists of: (1) verification of the phone number; (2) verification of a private residence; and (3) random respondent selection. The screener will be modified for cell phone respondents to: (1) avoid jeopardizing safety (e.g., the respondent is driving); (2) avoid duplicate interviews (e.g., the respondent also has a landline that could be in the landline sample); (3) avoid interviewing anyone underage; and (4) omit random respondent selection.


Dealing with Busy and No-Answer: Lines that are busy will be called back a minimum of five times at 10-minute intervals. If the line is still busy after the fifth attempt, the number will be attempted again on different calling occasions until the record is resolved.


Attempting Call-backs: The NATS calling system optimizes queuing for definite call-backs by continuously comparing station sample activity and the index of definite call-back records. When a definite appointment time arrives, the system finds the next available station and delivers the record as the next call. The call history screen that accompanies each record informs the interviewer that the next call is a definite appointment and describes the circumstances of the original contact. The handling of call-backs to respondents is crucial to the success of any telephone survey project. The effective management of call-backs will increase the response rate and population coverage. Perhaps more importantly, scheduling an appointment that is convenient for the respondent, and ensuring that the appointment is kept, offers a basic courtesy to someone who has agreed to assist us with a study. Callbacks to cell phone users will be limited to one additional refusal attempt after an initial refusal.


Managing Interrupted Interviews: Interrupted interviews with receptive respondents will be restarted using a definite call-back strategy. A definite call-back for an exact time can be set and the interview can begin where it left off. If the interviewer who began the survey is available at the prescribed time, the system will send the call back to that station.


Recording Call Dispositions: Dispositions of each call attempt on all records in the sample will be automatically stored in the CATI system. This provides a complete call history for each record in the sample. The call history is displayed on the interviewer’s screen during each new attempt.

B.2.g Informed Consent

Before each interview, the interviewer will read the informed consent (included in Appendix F/G as part of the NATS Questionnaire) to each participant. The consent form describes the interview, the types of questions that will be asked on the actual survey, the risks and benefits of participation, and participants’ rights, and it provides information on whom to contact with questions about any aspect of the study. The consent form also indicates that participation is completely voluntary and that participants can refuse to answer any question or discontinue the interview at any time without penalty or loss of benefits. The interviewer will enter a code via the keyboard to signify that the participant was read the informed consent script and agreed to participate.

B.2.h Quality Control


Exhibit B-8 lists the major means of quality control. As shown, the task of collecting quality data begins with clear and explicit testing of CATI programs and ends with procedures for cleaning, coding, and verifying collected data. Once the project begins, and prior to interviewer training, an advance letter will be sent to landline RDD respondents in preparation for calling. Subsequent to interviewer training, efforts will be taken to reinforce training, monitor interviewer performance, and generate tracking reports. Because the ultimate aim is production of a high-quality database and reports, various quality assurance activities will be applied.


B.3 METHODS TO MAXIMIZE RESPONSE RATES AND DEAL WITH NONRESPONSE


Response rates are an important indicator of data quality. OMB generally regards studies with higher response rates as offering more representative data. At the same time, OMB has acknowledged repeatedly in its own guidance documents that the range of feasible response rates is largely a function of the objectives of a study and of the methodology required. OMB also sets no pre-determined minimum response rate across surveys of all types, recognizing that some types of surveys, such as population-based CATI surveys, should be expected to achieve lower response rates than surveys using many other data collection methods. Moreover, OMB has recognized that CATI survey response rates have been declining in recent years for a variety of reasons, but that such surveys serve an important purpose and need to be included in the mix of methods used to gather population-based data.

Exhibit B-8

Quality Control Procedures


Survey Step

Quality Control Procedures

Testing of CATI program

  • Test each response to each question, and each path through the survey (100%)

  • Review frequencies from randomly generated data to ensure that the program is organizing data properly and recording values according to the survey specification (100%)

  • Develop skip check program to check data against defined conditions specified in the Microsoft Word version of the questionnaire (100%)

  • Provide CDC with an electronic test version of the programmed survey (100%)

CATI pretest

  • Pretest of 100 interviews to ensure the CATI program is working properly and to verify questionnaire content, skip patterns, value verification, consistency of answers across questions, interviewer and supervisor training, and sample management procedures

Advance letters


  • Verify that envelopes are stamped with first class postage, sealed, and have the official OSH logo (5% sample)

CATI quality assurance

  • Monitor at least 10% of all interviews (10% sample)

  • Monitor each interviewer at least once per week (100%)

  • Assign supervisors to manage a team of no more than 10 interviewers (100%)

  • Participate in daily briefing call with Command Center (100%)

  • Review call center shift reports and internal project tracking reports daily (100%)

Preparation of data files

  • Identify incomplete interviews and merge back into the main data file (100%)

  • Clean and, when applicable, back-code open ended responses (100%)

  • Assign a final disposition to each record (100%)

  • Produce frequency tabulations of every question and variable to detect missing data or errors in skip patterns (100%)


Two different kinds of response rates are used in CATI studies. The Cooperation Rate (CR) is the proportion of respondents interviewed out of all eligible units in which a respondent was selected and actually contacted; non-contacts are excluded from the denominator. This rate is based on contacts with households containing an eligible respondent. For the landline RDD sample, we expect to attain a CR of 50% to 80%, varying by state, with a mean of 65% to 70%. For the cell phone RDD sample, we expect to attain a CR of 40% to 70%, with a mean of 55% to 60%. A Response Rate (RR) is an outcome rate with the number of completed interviews in the numerator and an estimate of the number of eligible units in the sample in the denominator. For the landline RDD sample, we expect to attain an RR of 40% to 50%, varying by state, with a mean of 45%. For the cell phone RDD sample, we expect to attain an RR of 30% to 40%, with a mean of 35%.
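For illustration, the two rates reduce to simple ratios over disposition counts. The Python sketch below uses hypothetical counts; the exact denominator construction (especially the eligibility estimate for unknowns) would follow the study's disposition rules.

    # CR and RR as defined above, with hypothetical disposition counts.
    def cooperation_rate(completes, eligible_contacted):
        """CR: completes over eligible units in which a respondent was
        selected and actually contacted (non-contacts excluded)."""
        return completes / eligible_contacted

    def response_rate(completes, est_eligible):
        """RR: completes over the estimated number of eligible units in
        the sample (including an eligibility-adjusted share of unknowns)."""
        return completes / est_eligible

    print(f"CR = {cooperation_rate(1863, 2800):.0%}")  # ~67%
    print(f"RR = {response_rate(1863, 4100):.0%}")     # ~45%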


B.3.a Maximizing Response Rates


Actions taken to maximize response rate differ from actions taken when we encounter non-response. A number of steps will be taken to counter the widespread experience of heightened difficulty in attaining high response rates in CATI surveys.


In each state, we will identify "partners," i.e., agencies of state government and voluntary associations (e.g., the American Cancer Society) that we can mention as organizations endorsing the value of conducting the NATS. Whenever feasible, after cross-referencing selected landline numbers with addresses contained in reverse telephone directories, advance letters will be sent to the associated households to create a climate of receptivity toward the actual call and to allow households to contact us first. The advance letters and phone calls will place primary emphasis on the intrinsic value of the data in helping to address one of the major health threats facing Americans. We will provide phone coverage during days, evenings, and weekends to accommodate differences in personal schedules. The bulk of calling will be done during the most productive calling hours, i.e., evenings and weekends, with only 10% to 20% on weekdays. Our automated calling system for the landline study will manage calling times to ensure that respondents who cannot be reached at one time of day are tried at other times. If a persistent busy signal is encountered at one time of day, we will switch to another. When feasible, a caller who previously spoke to a selected respondent will be given the call to complete the actual interview. At each attempt, the interviewer can see the complete call history of call times and dispositions. Selected respondents will be allowed to call in at their convenience or to wait for our call to complete an interview. At least 15 attempts will be made on each unresolved number.


The established practice of providing a $10 incentive to cell phone respondents helps to keep response rates high while preserving respondent anonymity, because no contact information is needed to deliver it. Gift codes are purchased from Amazon.com. At the conclusion of the interview, the interviewer can either read the code over the phone or send it to the respondent by text message.


For the cell phone RDD sample, up to six attempts will be made to reach and interview the selected cell phone number. Call attempts will be spread across days and evenings, including weekdays, Saturdays, and Sundays. For each attempt, the outcomes of the previous attempts are displayed so the interviewer knows the call history prior to making contact.


Additional efforts to achieve maximum participation on the NATS will include: (1) utilizing a dedicated team of specially trained interviewers adept at conducting the ATS; (2) providing interviewers who can conduct interviews in English or Spanish; (3) making scheduled call-backs the highest calling priority; (4) conducting weekly refresher trainings for all data collection staff; and (5) leaving messages on persistent “answering machine” dispositions, informing respondents of the study and scheduling another call attempt for the following day.

Throughout the calling process, we will reevaluate efforts to maximize participation and identify the measures that are working best. Supplemental measures may be employed to maximize participation rates, including: (1) providing a project menu of Interactive Voice Recognition (IVR) options, so that respondents who wish to learn more about the study or verify its legitimacy may access an IVR system specifically dedicated to the project (complete with its own toll-free number); (2) expanding calling hours; and (3) attempting to leave messages through automated privacy managers installed to block calls from solicitors. If a message cannot be left, interviewers are instructed to enter the call center's toll-free telephone number.

B.3.b Dealing with Nonresponse


An important component in maximizing response is having strategies for dealing with nonresponse, both in terms of refusal conversion efforts and analyses of data to detect biases. The underlying philosophy behind refusal conversion is that a large proportion of initial refusals are situational (e.g., the respondent is on another call or just got home from work and is eating dinner). If the call is attempted again, at another time of day, the person may be more receptive and accept the interview. A respondent also may refuse because of a language barrier. A nonresponse conversion team, specifically trained in refusal conversion on the NATS, will call back 100% of respondents who make an initial refusal. They will have the benefit of detailed notes, taken by the caller who encountered the initial refusal, about the stated reason for the refusal. If an initial refusal was made before a respondent was selected, up to two more attempts will be made to convert the refusal; after the second "soft" refusal, the record will be transferred to the refusal conversion unit for a final attempt by an interviewer from the nonresponse conversion team. If the initial refusal came from a selected respondent, the record will be transferred immediately to the refusal conversion unit for a final attempt. Respondents who refuse at this point will be considered "hard" refusals and will not be called back again. Staff will be assigned to the nonresponse conversion team based on experience and performance.


Survey nonresponse bias occurs when respondents differ substantively from nonrespondents. Response rates are often used as a measure of data quality because they are thought to reflect the degree of nonresponse bias in the data, but this connection is tenuous.2,3 Instead, response rates are a measure of the risk of nonresponse bias: high response rates reflect a low risk of nonresponse bias, while low response rates increase that risk. In the absence of high response rates, a nonresponse analysis helps to establish the accuracy of the survey data.


As a whole, the field of survey research has been experiencing declining response rates in recent years. Bias will be present in the NATS if the nonrespondents differ from the respondents in terms of the statistics of interest. In 2008, ICF Macro conducted a nonresponse follow-up (NRFU) to the Maryland Adult Tobacco Survey (MATS) on behalf of the Maryland Department of Health to assess whether nonrespondents differed from respondents. The research concluded that respondents and nonrespondents differed in terms of smoking statistics, but much of the difference was explained by demographic differences between the two groups. In turn, the weighting algorithm that corrects for known demographic biases in RDD surveys also corrected for the differences in smoking characteristics between nonrespondents and respondents.4


For the NATS, an NRFU survey is not planned. Instead, we intend to evaluate the extent of nonresponse bias using external data sources. Many of these comparisons are naturally inherent in the process of poststratification and weighting for nonresponse and noncoverage. For the weighting process, the comparisons typically focus on age, sex, race, Hispanic origin, education status, and marital status within each state. The data for these comparisons will be based on the American Community Survey (ACS).

 

The landline sample records contain two variables that could be used to explicitly adjust for nonresponse: listed/not-listed status and a metropolitan status code. We will weight for differential nonresponse across the categories of each of these variables and determine their effect, individually and jointly, on the bias of selected demographic characteristics and on the estimates and variances of key substantive variables. The decision whether to make explicit nonresponse adjustments using listed/not-listed status and the metropolitan status code will be based on these analyses.

 

The NATS has a limited set of survey questions that overlap with other data sources, including the National Health Interview Survey (NHIS) and the Current Population Survey Tobacco Use Supplement (CPS-TUS). Both the NHIS and CPS-TUS are valuable for quantifying nonresponse, but both have limitations. The NHIS and CPS-TUS use computer-assisted personal interviewing and achieve very high response rates. These surveys are less susceptible to nonresponse bias, but observed differences in comparisons with the NATS may be confounded with the mode of survey administration.

 

The CPS-TUS has substantial overlapping content with the NATS, including smoking status, quit attempts and cessation, smoking in the home and at work, and attitudes toward smoking in public places. Further, the CPS-TUS can be analyzed at the state level. However, the CPS-TUS was last conducted in 2006-2007, so observed differences may be confounded with trends in tobacco behaviors and attitudes. The NHIS is more contemporary, but it is limited to smoking status and quit attempts, and it supports data analysis only at the national and regional levels.

 

Through the use of auxiliary variables and demographic and limited substantive comparisons with the ACS, NHIS, and CPS-TUS, we will assess the risk of nonresponse bias in the NATS. Despite the stated limitations, these data sources provide valuable benchmarks for the NATS. Substantial deviations from these benchmarks will be explored further to (1) better understand the nature of the differences (e.g., whether they vary across subgroups); (2) evaluate whether the differences are caused by nonresponse (and/or noncoverage) error or whether other factors could explain them; and (3) if necessary, develop additional weighting adjustments to mitigate the risk of nonresponse bias in NATS estimates. Ultimately, the nonresponse analyses will inform the survey weighting and identify limitations in the data that will be communicated to stakeholders.


B.4 TESTS OF PROCEDURES OR METHODS TO BE UNDERTAKEN


The NATS was developed in the summer of 2008, based on eight years of experience with the ATS by 25 states, with technical guidance from CDC. The ATS was significantly reconfigured to create the NATS. As part of this process, and in accord with OMB guidelines, NATS questionnaire items were subjected to cognitive interviewing and analysis by the contractor in the fall of 2008 and winter of 2009, first in English and then in Spanish. This cognitive analysis resulted in the revision, addition, or deletion of response options and the revision or deletion of certain questions, with the overall effect of improving the clarity of questions and lowering respondent burden. Following cognitive interviewing, the finalized questionnaire underwent a limited pretest in Prince George's County, Maryland, in accord with OMB guidelines. The pretest sharpened the wording of certain survey questions and confirmed the empirical estimate of respondent burden.

B.5 INDIVIDUALS CONSULTED ON STATISTICAL ASPECTS AND INDIVIDUALS COLLECTING AND/OR ANALYZING DATA

B.5.a Statistical Review


Statistical aspects of the study have been reviewed by the individuals listed below.

Peter Mariolis, Ph.D.
Centers for Disease Control and Prevention (CDC)
National Center for Chronic Disease Prevention and Health Promotion

Office on Smoking and Health
4770 Buford Highway Mailstop K-50
Atlanta, Georgia 30341
(770) 488-5749
pmariolis@cdc.gov

Ronaldo Iachan, Ph.D.

Macro International Inc.

11785 Beltsville Drive, Suite 300

Calverton, MD 20705

Ronaldo.Iachan@macrointernational.com

(301) 572 0538


B.5.b Agency Responsibility

Within the agency, the following individual will be responsible for receiving and approving contract deliverables and will have primary responsibility for data analysis:

Martha C. Engstrom
Centers for Disease Control and Prevention (CDC)
National Center for Chronic Disease Prevention and Health Promotion

Office on Smoking and Health
4770 Buford Highway Mailstop K-50
Atlanta, Georgia 30341

(770) 488-5749
mengstrom@cdc.gov

B.5.c Responsibility for Data Collection


The representative of the contractor responsible for conducting the planned data collection is:


Naomi Freedner, M.P.H.

Macro International Inc.

26 College Street

Burlington, VT 05401

Naomi.L.Freedner@macrointernational.com

(802) 863-9600

REFERENCES



Abreu, D.A., & Winters, F. (1999). Using monetary incentives to reduce attrition in the Survey of Income and Program Participation. Proceedings of the Survey Research Methods Section of the American Statistical Association.

CDC (2006). National Strategic Plan for Tobacco Control - FY2006-FY2008. Atlanta, GA: CDC.

CDC (2007). Best Practices for Comprehensive Tobacco Control Programs – 2007. Atlanta, GA: U.S. Department of Health and Human Services, CDC, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health.

CDC (2008a). Smoking-attributable mortality, years of potential life lost, and productivity losses – United States, 2000-2004. MMWR; 57(45):1226-1228.

CDC. (2008b). Cigarette Smoking Among Adults – United States, 2007. MMWR; 57(45):1221-1226.

CDC. (2008c). State-specific Prevalence and Trends in Adult Cigarette Smoking- United States, 1998-2007. MMWR; 58 (9): 221-226.

CDC, NCHS (2008). Public use data file and documentation: multiple causes of death for ICD-10 2005 data [CD-ROM].

Creighton, K. P., King, K. E., & Martin, E. A. (2001). The Use of Incentives in Census Bureau Longitudinal Surveys (No. 2007-2). Washington: U.S. Census Bureau.


Brick, J.M., Brick, P.D., Dipko, S., Presser, S., Tucker, C., & Yuan, Y. (2007). Cell phone survey feasibility in the U.S.: Sampling and calling cell numbers versus landline numbers. Public Opinion Quarterly, 71, 23-39.


Ryan, H., Wortley, P.M., Easton, A., Pederson, L., & Greenwood, G. (2001). Smoking among lesbians, gays, and bisexuals: A review of the literature. American Journal of Preventive Medicine, 21, 142-149.


Starr, G., Rogers, T., Schooley, M., Porter, S., Wiesen, E., & Jamison, N. (2005). Key Outcome Indicators for Evaluating Comprehensive Tobacco Control Programs. Atlanta, GA: CDC.


USDHHS (2000). Healthy People 2010: Understanding and Improving Health and Objectives for Improving Health. Washington, DC: U.S. Department of Health and Human Services.

USDHHS (2004). The Health Consequences of Smoking: A Report of the Surgeon General. Atlanta, GA: U.S. Department of Health and Human Services, Public Health Service, CDC, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health.

1 The design effect (DEFF) is defined as the actual sampling variance divided by the variance that would be attained with a simple random sample of the same size. The DEFF, which equals 1.0 for simple random sampling, is a measure of the extra variability induced by complex sampling designs.

2 Curtin, R., Presser, S., & Singer, E. (2000). The effects of response rate changes on the Index of Consumer Sentiment. Public Opinion Quarterly, 413-428.

3 Groves, R. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 646-675.

4 Freedner, N., ZuWallack, R., Dayton, J., & Ross, J. (2009). Effects of nonresponse by smokers in lowering Adult Tobacco Survey vs. Behavioral Risk Factor Surveillance System smoking estimates. Presentation at the 64th Annual Conference of the American Association for Public Opinion Research (AAPOR), May 14-19, Hollywood, FL.



