SUPPORTING STATEMENT OF THE REQUEST FOR
OMB REVIEW AND APPROVAL OF THE
Registration of Individuals Displaced by the Hurricanes
Katrina and Rita Exposures Registry (Pilot Registry)
Part B
Vinicius Antao, MD, MSc, PhD
Team Lead
Environmental Health Surveillance Branch
Division of Toxicology and Human Health Sciences
Agency for Toxic Substances and Disease Registry
4770 Buford Hwy., Mailstop F-57
Atlanta, GA 30341
Office: 770-488-0555
Fax: 770-488-7187
January 22, 2013
Section B. Collections of Information Employing Statistical Methods
There are three main objectives for the pilot study: 1) using survey response rates to determine success in locating and enrolling the population of interest; 2) verifying the information in the FEMA database against registrant responses for three variables: trailer type, county in which the trailer was located, and city in which the trailer was located; and 3) determining a person-time exposure variable. To meet these objectives, this pilot study will use the statistical methods described here.
B.1. Respondent Universe and Sampling Methods
This section will discuss the target and survey population; sampling, reporting, and analytic unit; sampling frame; sample size adjustments; and the general survey design.
Target and Survey Population
The target population is all people, adults and children, who resided in FEMA-supplied temporary housing units for at least one week in the aftermath of either Hurricane Katrina or Hurricane Rita. Fortunately, the sampling frame we will use has virtually complete coverage of the target population. Consequently, the target population and the survey population are essentially the same.
Sampling, Reporting, and Analytic Unit
The sampling unit will be the registration identification number provided by FEMA. The registration identification number is a unique identifier for the person who registered for a temporary housing unit. For example, if someone registered for more than one temporary housing unit, each of the temporary housing units will have the same registration identification number. We will refer to the person who registered for the temporary housing unit as the registrant. The registrant will be the reporting unit. That is, the registrant will provide information about all the people, adults and children, who lived in the temporary housing unit. Therefore, the analytic unit will be a person.
Sampling Frame Development
The sample will be based on the Federal Emergency Management Agency (FEMA) database provided by the Centers for Disease Control and Prevention (CDC) to RTI, the contractor. The FEMA database is a list of adult applicants for temporary housing units, where each adult represents a household that lived in a temporary housing unit. Each applicant has a unique registration identification number. For registration identification numbers that had multiple observations in the database, one observation was selected at random so that each observation in the database represents a unique registration identification number. This resulted in a database of 118,684 observations. For the feasibility study, sample selection will occur in Alabama, Louisiana, Mississippi, and Texas. The database has 114,292 observations with a geocoded address in Alabama (2,447), Louisiana (70,832), Mississippi (34,482), or Texas (6,531).
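As an illustration of this per-identifier random selection, the following minimal sketch keeps one randomly chosen observation per registration identification number; the file and column names are hypothetical, not the actual FEMA field names.

    # Keep one randomly selected observation per registration ID.
    # Sketch only: file and column names are hypothetical.
    import pandas as pd

    fema = pd.read_csv("fema_thu_applicants.csv")       # hypothetical input
    deduped = (fema.sample(frac=1, random_state=2013)   # shuffle the rows
                   .drop_duplicates(subset="registration_id", keep="first")
                   .sort_index())                       # restore file order
    print(len(deduped))                                 # one row per unique ID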
Stratification
For the Katrina Pilot Registry feasibility study, the explicit stratification will consist of designated counties/parishes. That is, designated counties/parishes will be the sampling strata. There is one county in Alabama, three parishes in Louisiana, three counties in Mississippi, and six counties in Texas designated to be in the feasibility study. In each state the counties/parishes are contiguous. Exhibit 1, Feasibility Study Counties/Parishes, lists the counties/parishes that will be included in the feasibility study and the number of applicants in each county/parish.
Exhibit 1. Feasibility Study Counties/Parishes
State       | County, State    | Applicants
------------|------------------|-----------
Alabama     | Mobile, AL       |      1,788
Louisiana   | Orleans, LA      |     24,239
Louisiana   | Jefferson, LA    |     19,504
Louisiana   | St. Tammany, LA  |     11,889
Mississippi | Harrison, MS     |     11,577
Mississippi | Jackson, MS      |      8,928
Mississippi | Hancock, MS      |      7,451
Texas       | Jefferson, TX    |      1,604
Texas       | Orange, TX       |        953
Texas       | Hardin, TX       |        522
Texas       | Jasper, TX       |        435
Texas       | Tyler, TX        |        245
Texas       | Newton, TX       |        175
The counties/parishes represent a mix of rural and urban areas. Within each of these counties/parishes, we will use implicit stratification by Census tract to allocate the sample within the explicit sampling strata. The feasibility study was restricted to contiguous counties/parishes within each state so that a concentrated outreach media campaign could inform residents of the study.
Sample Allocation
The sample size for the feasibility study was set at 10,000 applicants. The sample will be allocated proportionally across Alabama, Louisiana, Mississippi, and Texas based on the number of applicants in the designated counties/parishes: about 2% to Alabama, about 62% to Louisiana, about 31% to Mississippi, and about 4% to Texas. These allocations closely track the statewide proportions of applicants (Alabama 2%, Louisiana 62%, Mississippi 30%, Texas 6%), with small differences because the allocation uses only the designated counties/parishes. Within each state, the sample will be allocated proportionally to the designated counties/parishes; within each designated county/parish, it will be allocated proportionally to the Census tracts.
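To make the allocation arithmetic concrete, the sketch below performs a two-stage proportional allocation (state first, then county/parish within state) using the Exhibit 1 counts. Simple rounding at each stage happens to reproduce the county-level sample sizes shown later in Exhibit 2, totaling 9,999, though the exact rounding scheme used for the actual allocation is an assumption here.

    # Two-stage proportional allocation of the 10,000 sample:
    # states first, then designated counties/parishes within each state.
    # Applicant counts are from Exhibit 1.
    applicants = {
        "Alabama":     {"Mobile, AL": 1788},
        "Louisiana":   {"Orleans, LA": 24239, "Jefferson, LA": 19504,
                        "St. Tammany, LA": 11889},
        "Mississippi": {"Harrison, MS": 11577, "Jackson, MS": 8928,
                        "Hancock, MS": 7451},
        "Texas":       {"Jefferson, TX": 1604, "Orange, TX": 953,
                        "Hardin, TX": 522, "Jasper, TX": 435,
                        "Tyler, TX": 245, "Newton, TX": 175},
    }

    SAMPLE_SIZE = 10_000
    grand_total = sum(sum(c.values()) for c in applicants.values())  # 89,310

    for state, counties in applicants.items():
        state_total = sum(counties.values())
        state_n = round(SAMPLE_SIZE * state_total / grand_total)
        for county, count in counties.items():
            n_h = round(state_n * count / state_total)   # county allocation
            print(f"{county:16s} {n_h:5d}")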
B.2. Procedures for the Collection of Information
This section describes the survey design in detail and presents the power calculations.
Applicant Selection
We will use a probability sampling design. The sample design will be stratified simple random sampling of unique registration identification numbers with proportional allocation to counties/parishes based on the number of unique registration identification numbers in each county/parish. The probability of selection for a unique registration identification number will be the number of unique registration identification numbers selected for the sample in a sampling stratum divided by the total number of unique registration identification numbers in that stratum. That is, the probability of selection for the ith unique registration identification number in the hth sampling stratum, $p_{hi}$, will be

$$p_{hi} = \frac{n_h}{N_h},$$

where $n_h$ is the number of unique registration identification numbers selected for the sample in the hth sampling stratum and $N_h$ is the total number of unique registration identification numbers in the hth sampling stratum. The design weight for a unique registration identification number will be the inverse of its probability of selection. That is, the design weight for the ith unique registration identification number in the hth sampling stratum, $d_{hi}$, will be

$$d_{hi} = \frac{1}{p_{hi}} = \frac{N_h}{n_h}.$$
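A minimal numeric check of these two formulas, using the stratum counts for Mobile, AL from Exhibit 2:

    # Probability of selection and design weight for one stratum
    # (counts for Mobile, AL from Exhibit 2).
    N_h = 1788   # unique registration IDs in stratum h
    n_h = 200    # IDs sampled from stratum h

    p_hi = n_h / N_h   # 0.1119 after rounding, as in Exhibit 2
    d_hi = 1 / p_hi    # N_h / n_h = 8.9400, as in Exhibit 2
    print(f"p_hi = {p_hi:.4f}, d_hi = {d_hi:.4f}")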
Exhibit 2, Actual Sample Size Allocation, Probability of Selection, and Design Weight for the Feasibility Study, shows the counties/parishes in the feasibility study, the number of registrants, the actual sample allocated, the probability of selection, and the design weight for these counties/parishes.
Exhibit 2. Actual Sample Size Allocation, Probability of Selection, and Design Weight for the Feasibility Study
State       | County, State    | Registrants | Actual Sample Size | Probability of Selection | Design Weight
------------|------------------|-------------|--------------------|--------------------------|--------------
Alabama     | Mobile, AL       |       1,788 |                200 |                   0.1119 |        8.9400
Louisiana   | Orleans, LA      |      24,239 |              2,714 |                   0.1120 |        8.9310
Louisiana   | Jefferson, LA    |      19,504 |              2,184 |                   0.1120 |        8.9310
Louisiana   | St. Tammany, LA  |      11,889 |              1,331 |                   0.1120 |        8.9310
Mississippi | Harrison, MS     |      11,577 |              1,296 |                   0.1120 |        8.9310
Mississippi | Jackson, MS      |       8,928 |              1,000 |                   0.1120 |        8.9310
Mississippi | Hancock, MS      |       7,451 |                834 |                   0.1120 |        8.9310
Texas       | Jefferson, TX    |       1,604 |                179 |                   0.1118 |        8.9409
Texas       | Orange, TX       |         953 |                107 |                   0.1118 |        8.9409
Texas       | Hardin, TX       |         522 |                 58 |                   0.1118 |        8.9409
Texas       | Jasper, TX       |         435 |                 49 |                   0.1118 |        8.9409
Texas       | Tyler, TX        |         245 |                 27 |                   0.1118 |        8.9409
Texas       | Newton, TX       |         175 |                 20 |                   0.1118 |        8.9409
Total       |                  |      89,310 |              9,999 |                          |
Verification Rates for the Katrina and Rita Exposures Registry Feasibility Study
The analytic objective is to verify the information in the FEMA database against registrant responses for three variables: trailer type, county in which the trailer was located, and city in which the trailer was located. For each of these three variables, we will construct an indicator variable that has a value of one if the database and the registrant agree and a value of zero if they do not. For each indicator variable, the mean of the indicator variable provides the verification rate. The outcome of interest is whether or not the verification rate is greater than or equal to 0.75. Consequently, the power calculations use the null hypothesis that the verification rate is less than or equal to 0.75 and the alternative hypothesis that the verification rate is greater than 0.75.
Power Calculations
The power calculations were produced using PASS 2011a software. Given the analytic, or respondent, sample size of 2,829 for the KARE sample, we would be able to detect a difference of 0.02 (or a verification rate of 0.77) with 80% power. That is, we would be able to state that the verification rate is greater than 0.75 using a one-sided binomial test. The target significance level is 0.05. The actual significance level achieved by this test is 0.0499. These results assume that the population proportion under the null hypothesis is 0.75. We will round 2,829 to 3,000 in the following calculations.
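For readers without PASS, the sketch below approximates the same calculation using the normal approximation to the one-sided binomial test. Because PASS reports exact binomial power, the two agree only approximately (here, power of roughly 0.80 for n = 2,829).

    # Approximate power of a one-sided test of H0: p <= 0.75 vs. p1 = 0.77,
    # via the normal approximation (PASS uses the exact binomial test).
    from math import sqrt
    from scipy.stats import norm

    p0, p1, n, alpha = 0.75, 0.77, 2829, 0.05

    z_alpha = norm.ppf(1 - alpha)
    z = ((p1 - p0) * sqrt(n) - z_alpha * sqrt(p0 * (1 - p0))) / sqrt(p1 * (1 - p1))
    print(f"power ~ {norm.cdf(z):.3f}")   # ~ 0.80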
Exhibit 3 is a graphical representation of the relationship between analytic sample size and verification rate. The vertical axis (n) is the sample size necessary to detect the minimal difference between the null hypothesis verification rate (0.75) and the observed verification rate (P1). The horizontal axis (P1-P0) is the difference between the observed verification rate (P1) and the hypothesized verification rate (P0 = 0.75).
Exhibit 3. Sample Size and Minimum Detectable Difference
Exhibit 4 shows the sample size (n) and difference (P1 - P0) represented graphically in Exhibit 3, as well as some additional quantities from the power calculations: the hypothesized and observed proportions, the target and actual significance levels, beta, and the critical number of agreements (R) needed to reject H0.
Exhibit 4. Sample and Difference in the Observed and Hypothesized Verification Rates
Power | n     | P0 (Hypothesized) | P1 (Observed) | Difference (P1 - P0) | Target Alpha | Actual Alpha | Beta | Reject H0 if R >=
------|-------|-------------------|---------------|----------------------|--------------|--------------|------|------------------
0.80  | 2,829 | 0.75              | 0.77          | 0.02                 | 0.05         | 0.05         | 0.20 |             2,160
0.80  | 1,242 | 0.75              | 0.78          | 0.03                 | 0.05         | 0.05         | 0.20 |               957
0.80  |   697 | 0.75              | 0.79          | 0.04                 | 0.05         | 0.05         | 0.20 |               542
0.80  |   437 | 0.75              | 0.80          | 0.05                 | 0.05         | 0.05         | 0.20 |               343
0.80  |   299 | 0.75              | 0.81          | 0.06                 | 0.05         | 0.05         | 0.20 |               237
0.81  |   220 | 0.75              | 0.82          | 0.07                 | 0.05         | 0.05         | 0.19 |               176
0.80  |   167 | 0.75              | 0.83          | 0.08                 | 0.05         | 0.05         | 0.20 |               135
0.81  |   130 | 0.75              | 0.84          | 0.09                 | 0.05         | 0.05         | 0.19 |               106
0.80  |   103 | 0.75              | 0.85          | 0.10                 | 0.05         | 0.05         | 0.20 |                85
0.81  |    84 | 0.75              | 0.86          | 0.11                 | 0.05         | 0.05         | 0.19 |                70
0.81  |    70 | 0.75              | 0.87          | 0.12                 | 0.05         | 0.04         | 0.19 |                59
0.82  |    60 | 0.75              | 0.88          | 0.13                 | 0.05         | 0.05         | 0.18 |                51
0.82  |    50 | 0.75              | 0.89          | 0.14                 | 0.05         | 0.05         | 0.18 |                43
Selection Sample Size
The analytic sample size is the sample size required to meet the analytic objective of calculating the verification rate. The selection sample size is the analytic sample size adjusted for non-contact, ineligibility, non-cooperation, and attrition. Exhibit 5 shows the adjustments to the analytic sample size, the expected rates, and the resulting sample counts. The selection sample size will be about 10,000; we rounded it up from 9,915 to 10,000 to account for uncertainty in the expected rates.
Exhibit 5. Analytic Sample Size, Sample Size Adjustments, and Selection Sample Size
Sample           | Adjustment  | Rate | Count
-----------------|-------------|------|------
Selection Sample |             |      | 9,915
                 | Retention   | 0.65 | 6,445
                 | Cooperation | 0.70 | 4,512
                 | Eligibility | 0.95 | 4,286
Analytic Sample  | Contact     | 0.70 | 3,000
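The Exhibit 5 counts can be reproduced by dividing the analytic sample size by each expected rate in turn and rounding up; a minimal sketch:

    # Back-calculate the selection sample size from the analytic sample
    # of 3,000 by dividing out each expected rate (rates from Exhibit 5).
    from math import ceil

    n = 3000
    for step, rate in [("contact", 0.70), ("eligibility", 0.95),
                       ("cooperation", 0.70), ("retention", 0.65)]:
        n = n / rate
        print(f"after dividing by {step} rate ({rate}): {ceil(n):,}")
    # prints 4,286; 4,512; 6,445; 9,915 -- rounded up to 10,000 for selection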
B.3. Methods to Maximize Response Rates and Deal with Nonresponse
Methods to Deal with Nonresponse
The data collection contractor (RTI) will customize standard data collection processes and reports to maximize response rates and evaluate progress and cost effectiveness of different data collection procedures for the pilot registry. The main steps will include:
Interviewer training: Implementing a quality training program for interviewing and tracing staff;
Outreach: A targeted outreach campaign will be launched prior to the start of the pilot registry;
Lead Letter: Mailing an informative lead letter and brochure to all list sample members announcing the start of data collection and encouraging calls to the registry toll-free number;
Tracing: Implementing a rigorous tracing program to find hard to reach registrants; and
Monitoring and reporting: Evaluating data collection progress to meet targeted completed interview goals.
Interviewer Training. Response rates vary greatly across interviewers (e.g., O’Muircheartaigh and Campanelli 1999). Improving interviewer training has been found effective in increasing response rates, particularly among interviewers with lower response rates (Groves and McGonagle 2001). The following interviewing procedures will be used to maximize response rates:
Interviewers will be briefed on the potential challenges of administering a survey 7 years after potential registrants experienced Hurricanes Katrina and Rita. Well-defined refusal-conversion procedures will be established.
If a respondent initially declines to participate, a member of the conversion staff will re-contact the respondent to explain the importance of participation. Conversion staff are highly experienced telephone interviewers who have demonstrated success in eliciting cooperation. Conversion staff will be able to provide a reluctant respondent with the name and telephone number of the contractor’s project manager who can provide respondents with additional information regarding the importance of their participation.
A toll-free number, dedicated to the project, will be established so potential registrants may call to confirm the pilot registry’s legitimacy.
Refusal avoidance training will take place approximately 2-4 weeks after data collection begins. During the early period of fielding the survey, supervisors, monitors, and project staff will observe interviewers to evaluate their effectiveness in dealing with respondent objections and overcoming barriers to participation. They will select a team of refusal avoidance specialists from among the interviewers who demonstrate special talents for obtaining cooperation and avoiding initial refusals. These interviewers will be given additional training in specific techniques tailored to the interview, with an emphasis on gaining cooperation, overcoming objections, addressing concerns of gatekeepers, and encouraging participation. If a respondent does refuse to be interviewed or terminates an interview in progress, interviewers will attempt to determine their reason(s) for refusing to participate, by asking the following question: “Could you please tell me why you do not wish to participate in the registry?” The interviewer will then code the response and any other additional relevant information. Particular categories of interest include “Don’t have the time,” “Inconvenient now,” “Not interested,” “Don’t participate in any surveys,” and “Opposed to government intrusiveness into my privacy.”
Outreach. The outreach effort will engage the community and motivate potential registrants to participate. This effort will help overcome the distrust of research that many of the affected communities and individuals may hold, some of it predating the response to the hurricanes. We propose to use updated addresses identified via tracing to map the clustering/dispersion of all sampled cases after batch tracing. An examination of geographic clusters will guide the selection of cost-effective outreach methods for a given area. The approach includes:
Public health outreach to nongovernmental organizations (NGOs), community organizations, and key stakeholders to obtain their support of the Pilot Registry and gain their assistance to inform potential registrants and encourage them to participate, and
A media campaign to include print advertisements, public-service announcements, and other means of informing potential registrants about the Pilot Registry.
Lead Letters. After the start of the outreach campaign, RTI will mail a personalized letter to inform households about a forthcoming telephone call and give them a general description of the survey being conducted. Lead letters have been shown to increase survey response rates (De Leeuw et al., 2007). The letter will: 1) inform sample members of the purpose of the registry; 2) provide useful information regarding the registry; and 3) include a toll-free telephone number that potential registrants can call if they have questions.
Tracing. Sample mobility is a familiar challenge in health registry data collection. So that potential registrants can be located and interviewed, tracing will be implemented in progressive steps, beginning with batch tracing: before data collection begins, all cases will be submitted to a database service for updated addresses and telephone numbers. If the updated information does not yield success during data collection, those cases will be sent to in-house interactive tracing experts, who have access to a variety of databases for locating and verifying current addresses and telephone numbers. Interactive tracing specialists use crisscross directories to identify new contact information, contact directory assistance for possible updates, and use a case management system to keep a history of calls and contacts with subjects.
Monitoring and Reporting. RTI will hold weekly QC meetings with interviewers and supervisors to discuss data collection progress and issues. Our experience has shown that these sessions build rapport and enthusiasm among interviewers and project staff, allow project staff to identify important refusal conversion strategies, assist in the refinement of the instrument, and provide ongoing training for staff. Such meetings have identified previously unrecognized problems with a CATI instrument, such as questions that the respondent does not understand, questions that are difficult to administer, and software problems. These sessions also provide feedback on the data collection procedures and systems.
The contractor, RTI, will periodically review data frequencies from the CATI survey to ensure that the program is working as intended and to identify areas for interviewer feedback. Reviewers will look for high item-level nonresponse rates, check the recording of complete verbatim responses and contact information, and flag questions that may be unclear or confusing to interviewers or sample members.
Expected Response Rates. Given the challenges of contacting a cohort 7 years after Hurricanes Katrina and Rita, we anticipate that the response-rate-building measures discussed above will yield a 70% response rate rather than the standard 80%. We base this estimate on past registries, including the World Trade Center Health Registry, which was established in 2003 to enroll people exposed to the World Trade Center disaster and achieved a response rate of over 60%, enrolling more than 70,000 registrants for its baseline survey (Pulliam et al. 2006). We expect a higher response rate for the Katrina Pilot Registry because of more complete initial locating information and additional follow-up attempts to encourage participation.
B.4. Test of Procedures or Methods to Be Undertaken
A CATI system based on a paper questionnaire will be created and then used during all interviews to collect data for this pilot registry. CATI systems have several advantages over a paper-and-pencil mode of data collection. First, with a CATI system, survey data are captured electronically, which eliminates the need to later key paper responses into a database and thereby reduces transcription errors. CATI systems can also define the type and range of data that can be entered in each field, which helps prevent data entry errors (e.g., entering alphabetic characters in a Social Security number field). Finally, a CATI system may improve interviewer efficiency: less time is spent recording responses and working through skip patterns, which shortens the overall interview and reduces respondent burden.
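The field-level validation described above amounts to simple type and range checks at entry time. The sketch below illustrates the idea for a Social Security number field; it is illustrative only, since the real checks are configured in the CATI software, and this function is hypothetical.

    # Illustrative CATI-style field validation: reject entries that are
    # not a 9-digit Social Security number (with or without dashes).
    import re

    def valid_ssn(text: str) -> bool:
        return re.fullmatch(r"\d{3}-?\d{2}-?\d{4}", text) is not None

    print(valid_ssn("123-45-6789"))   # True
    print(valid_ssn("12a-45-6789"))   # False: alphabetic character rejected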
The CATI data collection instrument will be composed of two parts. The first part will consist of screening questions to determine eligibility for enrollment (Appendix E). The second part, the main questionnaire (Appendix F), will collect contact information for the registrant and other household members, demographics, and details of the temporary housing unit: its type, its location, and the amount of time spent living in it.
The questionnaire was evaluated for ease of administration and comprehension. Skip patterns were checked, and instructions will be developed for interviewers on handling various situations that may arise. In addition, as the CATI is tested, validity checks will be developed to minimize response error.
Pilot Testing
Cognitive interviews were conducted in February 2011 with 9 individuals recruited from an area in which FEMA-supplied temporary housing units were occupied, in order to identify any issues related to recall bias and to determine respondent willingness to provide sensitive information such as Social Security numbers. RTI conducted the 9 cognitive interviews at the Louisiana Public Health Institute (LPHI) in New Orleans. Eight interviews were conducted face-to-face and one was conducted via telephone to simulate a true telephone interview. A cognitive interview protocol was developed to standardize the approach and the questions asked during the interview process. The protocol was submitted to and approved by RTI's Institutional Review Board before interviews were conducted. Results from the cognitive interviews will be used to improve the questionnaire, to train and monitor the work of interviewers, and to facilitate the interpretation of results.
A timing test of the questionnaire was completed on August 1, 2012. A total of five test interviews were completed in-house by registry staff. The total time needed to answer the main questionnaire ranged from 13 to 18 minutes with an average completion time of 15 minutes. The timing test did not include additional probes. The timing interviews were not video or audio taped.
RTI staff will conduct the telephone interviews. Personnel performing this work will be trained in the purpose of the registry, how to obtain consent over the telephone, and how to conduct a telephone interview. Interviewers will also be trained to handle difficult situations that may arise during interviews, such as respondents reacting emotionally to questions that remind them of their experiences during or after the hurricanes.
Prior to data collection, all staff and contractors will be trained on security and confidentiality policies and procedures. Interviewer turnover may be high, given the intense nature of the material, so training must be developed that each interviewer can complete on a stand-alone basis. In addition, some interviewers are expected to need periodic refresher training on the methods. For these reasons, a CD-ROM-based training for interviewers will be developed.
Evaluating the Success of the Pilot
Success of the pilot registry will be measured against three objectives: 1) using survey response rates to determine success in locating the population of interest (contact rate of >65%) and enrolling the population of interest (cooperation rate of >70%); 2) verifying the information in the FEMA database against registrant responses for three variables: trailer type, county in which the trailer was located, and city in which the trailer was located (the verification rate for each variable must be at least 75%); and 3) determining a person-time exposure variable (the variable must be constructible for at least 75% of registrants). All three objectives must be met in order to consider pursuing a full registry.
Survey Response Rates for Locating and Enrolling the Population of Interest
RTI International, the contractor, was provided the FEMA database of 114,292 temporary housing unit (THU) occupants. RTI then designed the sampling plan to select 10,000 eligible individuals to be traced and located (Appendix H).
For the calculation of outcome rates for surveys, the standard is the American Association for Public Opinion Research's (AAPOR) Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys (AAPOR, 2008).1 This document provides comprehensive methods for calculating outcome rates for surveys conducted by random-digit dialing (RDD) telephone, for personal interviews in a sample of households, and for mail surveys of specifically named persons. While the Katrina Pilot Registry does not fit neatly into one of these three categories, it can be described primarily as a telephone survey of specifically named persons (a combination of all three types listed above). As such, the AAPOR standards serve as the appropriate guidelines for calculating the cooperation and contact rates for the Pilot Registry. These rates will be evaluated to assess the success of the Pilot Registry and to determine whether a full registry would be feasible.
The components of outcomes rates are:
I = Complete interview
P = Partial interview
R = Refusal and break-off
O = Eligible other non-interview
NC = Eligible Non-contacts
UH = Unknown if household/occupied household
UO = Eligibility unknown, other
E = Estimated proportion of cases of unknown eligibility that are eligible
Contact Rate
A contact rate measures the proportion of all cases in which some responsible member of the housing unit was reached by the survey. The rates here are household-level rates. They are based on contact with households, including respondents, rather than contacts with respondents only. Respondent-level contact rates could also be calculated using only contact with and refusals from known respondents.
CON1 = [(I + P) + R + O] / [(I + P) + R + O + NC + (UH + UO)]
Contact Rate 1 (CON1) assumes that all cases of indeterminate eligibility are actually eligible.
CON2 = [(I + P) + R + O] / [(I + P) + R + O + NC + E(UH + UO)]
Contact Rate 2 (CON2) includes in the base only the estimated eligible cases among the undetermined cases.
The outcome of interest is whether or not the contact rate is at least 65%.
Cooperation Rate
A cooperation rate is the proportion of all cases interviewed of all eligible units ever contacted. There are both household-level and respondent-level cooperation rates. The rates here are household-level rates. They are based on contact with households, including respondents, rather than contacts with respondents only. Respondent-level cooperation rates could also be calculated using only contacts with and refusals from known respondents.
COOP1 = I / [(I + P) + R + O]
Cooperation Rate 1 (COOP1), or the minimum cooperation rate, is the number of complete interviews divided by the number of interviews (complete plus partial) plus the number of non-interviews that involve the identification of and contact with an eligible respondent (refusal and break-off plus other).
The outcome of interest is whether or not the cooperation rate is at least 70%.
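The three rate formulas above translate directly into code. The sketch below uses hypothetical disposition counts purely for illustration; the symbols follow the AAPOR definitions listed earlier.

    # AAPOR outcome rates from final disposition counts (definitions above).
    def contact_rate_1(I, P, R, O, NC, UH, UO):
        # CON1: treats all cases of unknown eligibility as eligible.
        return ((I + P) + R + O) / ((I + P) + R + O + NC + (UH + UO))

    def contact_rate_2(I, P, R, O, NC, UH, UO, E):
        # CON2: counts only the estimated-eligible share of unknown cases.
        return ((I + P) + R + O) / ((I + P) + R + O + NC + E * (UH + UO))

    def cooperation_rate_1(I, P, R, O):
        # COOP1: complete interviews over all eligible units ever contacted.
        return I / ((I + P) + R + O)

    # Hypothetical disposition counts, for illustration only:
    print(contact_rate_1(I=700, P=50, R=150, O=30, NC=120, UH=40, UO=10))
    print(cooperation_rate_1(I=700, P=50, R=150, O=30))   # 0.753 > 0.70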
Verification Rates for the Katrina Pilot Registry Feasibility Study
The analytic objective is to verify the information in the FEMA database against registrant responses for three variables: trailer type, county in which the trailer was located, and city in which the trailer was located. For each of these three variables, we will construct an indicator variable that has a value of one if the database and the registrant agree and a value of zero if they do not. For each indicator variable, the mean of the indicator variable provides the verification rate. The outcome of interest is whether or not the verification rate for each variable is at least 75%.
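A minimal sketch of the indicator-variable construction for one of the three variables; the column names and codes are hypothetical, not the actual FEMA or questionnaire field names.

    # Verification rate for one variable (trailer type): the mean of an
    # agreement indicator. Column names and codes are hypothetical.
    import pandas as pd

    df = pd.DataFrame({
        "trailer_type_fema":     ["TT", "PM", "TT", "TT"],
        "trailer_type_reported": ["TT", "TT", "TT", "TT"],
    })

    agree = (df["trailer_type_fema"] == df["trailer_type_reported"]).astype(int)
    print(f"verification rate: {agree.mean():.2f}")   # compare against 0.75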
Determining a Person Time Exposure Variable
We will use the exposure questions EX4A – EX9B_2 in the main questionnaire (Appendix F) to construct a person-time exposure for each registrant. If a registrant refuses or does not know the answers to these questions, a person-time exposure will not be constructed for that registrant. The outcome of interest is whether or not the person-time exposure can be calculated for at least 75% of the registrants.
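A minimal sketch of the person-time construction, assuming the exposure items yield move-in and move-out dates; the actual items EX4A – EX9B_2 are defined in Appendix F, so the inputs and date handling here are assumptions.

    # Person-time (in days) lived in the THU, from reported dates.
    # 'Refused'/'don't know' responses are coded here as None.
    from datetime import date

    def person_time_days(move_in, move_out):
        if move_in is None or move_out is None:
            return None                    # exposure cannot be constructed
        return (move_out - move_in).days

    print(person_time_days(date(2005, 10, 1), date(2006, 8, 15)))  # 318
    print(person_time_days(date(2005, 10, 1), None))               # None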
B.5. Individuals Consulted on Statistical Aspects and Individuals
Collecting and/or Analyzing Data
The Environmental Health Surveillance Branch in ATSDR's Division of Toxicology and Human Health Sciences is in charge of constructing the Katrina-Rita Pilot Registry.
1. Data will be collected under contract with guidance from branch epidemiologists and statisticians. Data will be analyzed in-house by statisticians.
2. Questions regarding this OMB package and data collection procedures should be addressed to Dr. Vinicius Antao at 770-488-0555, VAntao@cdc.gov.
3. Questions regarding statistical methods should be addressed to Mr. James Sapp at 770-488-3814, JSapp@cdc.gov.
4. Questions regarding IT methods should be addressed to Mr. Timothy Copeland at 770-488-3696, TCopeland@cdc.gov.
a Hintze, J. L. (2011). PASS 2011. Kaysville, Utah: NCSS, LLC.
Citations
Groves, R. M., & McGonagle, K. A. (2001). A Theory-Guided Interviewer Training Protocol Regarding Survey Participation. Journal of Official Statistics, 17, 249-265.
O'Muircheartaigh, C., & Campanelli, P. (1999). A Multilevel Exploration of the Role of Interviewers in Survey Non-Response. Journal of the Royal Statistical Society, 162, 437-446.
De Leeuw, E., Callegaro, M., Hox, J., Korendijk, E., & Lensvelt-Mulders, G. (2007). The Influence of Advance Letters on Response in Telephone Surveys. Public Opinion Quarterly, 71(3), 413-443.
Pulliam, P., Thalji, L., DiGrande, L., Perrin, M., Walker, D., Dolan, M., Triplett, S., Dean, E., Peele, E., & Brackbill, R. (2006). World Trade Center Health Registry: Data File User's Manual. RTI International, New York City Department of Health and Mental Hygiene, and the Agency for Toxic Substances and Disease Registry. New York, NY.
1 The American Association for Public Opinion Research (2008). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. Ann Arbor, Michigan: AAPOR. http://www.aapor.org/default.asp?page=survey_methods/standards_and_best_practices/standard_definitions