Workplace Health in America
New
Supporting Statement: Part B
Program official/project officer: Jason Lang, MPH, MS
Team Lead, Workplace Health Programs (CDC/NCCDPHP/DPH)
Tel: (770) 488-5597
Fax: (770) 488-5962
Email: jlang@cdc.gov
September 22, 2016
Table of Contents
B. Collection of Information Employing Statistical Methods
B.1 Sampling Universe, Sampling Methods, and Expected Response Rates
B.2 Procedures for the Collection of Information
B.3 Methods to Maximize Response Rates
B.4 Tests of Procedures or Methods to Be Undertaken
B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Attachments
Attachment A-1. Authorizing Legislation, Public Health Service Act
Attachment A-2. Funding Authority - Patient Protection and Affordable Care Act Prevention and Public Health Fund (P.L. 111-148, Section 4002)
Attachment A-3. Public Health Service Act, Research and Investigations Generally
Attachment B. Federal Register Notice – 60 Day
Attachment C-1. Workplace Health in America Survey
Attachment C-2.* Screen Shots of Workplace Health in America Survey
Attachment C-3. Workplace Health in America Screening and Recruiting Call
Attachment D. Workplace Health in America FAQ
Attachment E. Glossary of Terms
Attachment F. IRB Statement – Implementation Contractor (RTI International)
Attachment G. Workplace Health in America Item Justification Table
Attachment H. Workplace Health in America and Workplace Wellness Programs Crosswalk
* The survey may be completed in one continuous session or multiple sessions. To manage file size, the screen shots are organized in 4 sections (attachment files). The section breaks do not reflect the respondent's experience of the survey.
Attachment C-2-Section 1
Attachment C-2-Section 2
Attachment C-2-Section 3
Attachment C-2-Section 4
B. Collection of Information Employing Statistical Methods

B.1 Sampling Universe, Sampling Methods, and Expected Response Rates

The goal of the Centers for Disease Control and Prevention (CDC) Workplace Health in America (WHA) project is to develop and conduct a national survey of employer-based workplace health programs and practices. To provide coverage of a diverse set of workplace health initiatives, the respondent universe includes worksites of various employee sizes, industries, and geographic locations in the United States. Worksites with fewer than 10 employees or where employment is unknown will be excluded from the sampling universe. To produce estimates at the national level and for each of the ten CDC regions within a 5 percent margin of error, a representative sample of worksites will be selected with the objective of achieving 7,700 respondents (770 respondents per CDC region). Reliable estimates within a 5 percent margin of error are also desired specifically for hospital worksites; the target number of respondents from hospital industries is 385. The target number of completed core surveys is 8,085, with an estimated 25% of these respondents also completing the optional supplemental survey items (n = 2,021).
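As a rough arithmetic check on these targets, the sketch below assumes simple random sampling and a worst-case proportion of 0.5 (a simplification; design effects from weighting would widen the intervals somewhat). It confirms that 770 completes per region and 385 hospital completes correspond to 95 percent margins of error of roughly 3.5 and 5.0 percentage points, respectively.

```python
# Rough margin-of-error check under simplifying assumptions:
# simple random sampling and a worst-case proportion of 0.5.
import math

def moe_95(n, p=0.5):
    """Half-width of a 95% confidence interval for a proportion."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(f"{moe_95(770):.3f}")   # ~0.035 per CDC region (n = 770)
print(f"{moe_95(385):.3f}")   # ~0.050 for hospitals (n = 385)
```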
Small establishments have traditionally been underrepresented in research on workplace health promotion, and the Workplace Health in America information collection provides an opportunity to produce findings relevant to the small businesses that account for approximately 90 percent of all workplaces in the nation. We therefore estimate that the majority (approximately 89 percent) of the participating employer establishments from our stratified sample will be small (< 100 employees). Participation is voluntary and does not impose an ongoing reporting requirement on any entity.
Our sampling design reflects the composition of worksites in the U.S. and ensures coverage of the larger employers who employ the majority of U.S. workers. For our objectives, it is important to learn both what the majority of employers are offering and what the majority of employees have access to.
Only one response will be collected from each worksite. The information collection contractor will call selected worksites to verify eligibility criteria: worksites must employ at least 10 employees and must have been operational for at least 12 months. Because accurate completion of the CDC Workplace Health in America Survey requires strong knowledge of the worksite and its benefits and health promotion program(s), the information collection protocol requires the interviewer to attempt to identify a respondent in one of the following positions:
Wellness directors
Health promotion coordinators
Members of a worksite health promotion committee
Human resource managers
Health benefits managers
Health education staff
Occupational nurses
Medical directors
Building facilities managers
Sampling Methods
The sample of worksites will be selected from the Dun & Bradstreet (D&B) list frame, which includes approximately 2 million worksites that are known to have 10 or more employees in the United States. Project staff selected the D&B frame after identifying it as the least costly data source that had the essential worksite-level information, including industry type and location-specific employment. The frame is updated monthly to ensure that the most current and accurate establishment information possible is used for selecting the sample.
The sample of worksites will be selected using a stratified simple random sample design, where the primary strata are the ten CDC regions plus an additional stratum containing all hospital worksites. The hospital worksites are assigned to their own primary stratum to ensure a sufficient sample size will be obtained. Within each CDC region stratum, the sample will be further stratified by employee worksite size and industry group, where groups are defined by combining North American Industry Classification System (NAICS) sectors. Within the hospital stratum, the sample will be further stratified by CDC region and employee worksite size. Exhibit B1 presents the number of worksites on the frame in each primary design stratum as well as the expected sample size per stratum.
Exhibit B1. Number of worksites on the frame and expected sample sizes by primary design stratum
Design Strata | Number of Worksites in the Universe | Desired Number of Participating Worksites | Expected Worksite Sample Size
CDC Region 1 | 115,281 | 770 | 2,425
CDC Region 2 | 191,886 | 770 | 2,425
CDC Region 3 | 222,838 | 770 | 2,425
CDC Region 4 | 422,794 | 770 | 2,425
CDC Region 5 | 384,453 | 770 | 2,425
CDC Region 6 | 263,220 | 770 | 2,425
CDC Region 7 | 107,960 | 770 | 2,425
CDC Region 8 | 89,615 | 770 | 2,425
CDC Region 9 | 311,540 | 770 | 2,425
CDC Region 10 | 92,557 | 770 | 2,425
Hospitals | 12,322 | 385 | 1,300
Total, across all strata | 2,214,466 | 8,085 | 25,550
Within each CDC region stratum, the number of worksites to be selected per substratum (i.e., worksite size by industry group) will be determined based on an approximately proportional allocation to ensure representation of worksite characteristics. In the hospital stratum, the number of worksites selected per substratum (i.e., CDC region by worksite size) will similarly use an approximately proportional allocation scheme. It follows that for substratum i belonging to primary stratum h, the selection probability for the j-th selected worksite is

$$\pi_{hij} = \frac{n_{hi}}{N_{hi}}, \qquad (1)$$

where $n_{hi}$ and $N_{hi}$ are the number of worksites selected and the number of worksites in the population belonging to substratum i of primary stratum h, respectively. The associated sampling weight for this step is

$$w_{hij} = \frac{1}{\pi_{hij}} = \frac{N_{hi}}{n_{hi}}. \qquad (2)$$
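To illustrate Equations (1) and (2), the sketch below computes an approximately proportional allocation and the resulting base weights for a hypothetical CDC region stratum. The substratum labels and frame counts are invented for illustration and do not reflect the project's actual allocation.

```python
# Sketch of proportional allocation and base-weight computation for
# one primary stratum; cell labels and counts are hypothetical.

def proportional_allocation(frame_counts, total_sample):
    """Allocate a stratum's total sample across substrata in
    proportion to their frame counts (rounded to whole worksites)."""
    total_frame = sum(frame_counts.values())
    return {cell: round(total_sample * count / total_frame)
            for cell, count in frame_counts.items()}

def base_weights(frame_counts, sample_counts):
    """Equation (2): base weight N_hi / n_hi for each worksite
    selected from substratum (h, i)."""
    return {cell: frame_counts[cell] / sample_counts[cell]
            for cell in sample_counts if sample_counts[cell] > 0}

# Illustrative substrata (worksite size class x industry group).
frame = {("10-99", "manufacturing"): 40_000,
         ("10-99", "services"): 60_000,
         ("100+", "manufacturing"): 8_000,
         ("100+", "services"): 7_281}
n_per_cell = proportional_allocation(frame, total_sample=2_425)
weights = base_weights(frame, n_per_cell)
print(n_per_cell)
print(weights)
```

Because the per-cell sample sizes are rounded to whole worksites, the allocation is approximately rather than exactly proportional, consistent with the design described above.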
To minimize both the cost of conducting the WHA survey and the burden to worksites, the number of completed questionnaires will be monitored weekly. If the number of completed responses in a particular substratum is higher than expected relative to the distribution of the sample, consideration will be given to terminating further sampling of worksites in that substratum. This method ensures that the resulting sample of participating worksites within each primary stratum is distributed approximately in proportion to the population distribution, while simultaneously allocating data collection resources to those substrata that have not met their desired sample size.
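A minimal sketch of such a monitoring rule appears below; the trigger condition (observed completes at or above the expected count) and its threshold are assumptions for illustration, not the project's operational rule.

```python
# Illustrative weekly check for possible early termination of
# sampling in a substratum; threshold and inputs are assumptions.

def flag_for_termination(completes, expected, threshold=1.0):
    """Flag substrata whose observed completes meet or exceed the
    expected count implied by the sample distribution."""
    return {cell: completes.get(cell, 0) / expected[cell] >= threshold
            for cell in expected}

expected = {("10-99", "services"): 120, ("100+", "services"): 25}
observed = {("10-99", "services"): 131, ("100+", "services"): 18}
print(flag_for_termination(observed, expected))
# {('10-99', 'services'): True, ('100+', 'services'): False}
```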
After the raw data are edited and cleaned, two sets of final analytic weights are constructed: one set of weights for the sample of worksite respondents to the core survey and another set of weights for the sample of worksite respondents to the optional supplemental survey items. A separate set of weights is constructed for the optional supplemental survey items to account for potential differential nonresponse between worksites who respond to both the core and supplemental items and worksites who respond only to the core survey. Each set of final analytic weights is designed to reduce estimate bias and variance due to factors such as nonresponse, early termination of sampling, and the complex sample design, as well as assure that weighted estimates are representative of the target population.
Estimates generated from WHA survey data are computed with analysis weights to reflect the combined effects of the following:
probabilities of worksite selection
early termination of worksite sampling activities because of higher-than-expected yields
nonresponse
For each set of weights, the final worksite-level analysis weight is computed as the product of a number of weight factors. These factors reflect the probability of selection, as already discussed, as well as appropriate adjustments for early sampling termination and nonresponse.
The starting point for the final analytic weights is the inverse of the probability of worksite selection, called the base sampling weight. The base sampling weight, presented in Equation (2), accounts for the unequal probabilities with which worksites are selected in the initial stratified simple random sample from the D&B frame. The base sampling weight would be the appropriate analysis weight if the effects of early termination of sampling (due to higher-than-expected yields) and nonresponse were negligible; however, weight adjustments will likely improve the accuracy of the estimates. The weight adjustments are implemented in two weighting steps:
Weighting Step 1, by which the base sampling weights are adjusted for early termination of sampling activities due to higher-than-expected yields; and
Weighting Step 2, by which the adjusted base weights from Step 1 are further adjusted to account for nonresponse. Weighting Step 2 will be conducted independently for each set of analytic weights to account for differences in the distribution of responding worksites for the core and supplemental survey items.
For all weighting steps, the weights are calculated separately for each primary stratum (i.e., each CDC region and the hospital stratum) for the core survey, and at the national level for the supplemental survey items. The specific adjustment methods used in each of these weighting steps are described below:
Early termination of sampling activities adjustment
As described above, when a higher-than-expected number of worksite respondents occurs within a particular substratum of a given primary stratum (i.e., CDC region or hospitals), data collection efforts for that substratum may be terminated. When this occurs, the base sampling weights of the worksites whose sampling activities were terminated early will be allocated directly to responding worksites within the same substratum in order to maintain the correct representation of region, size, and industry characteristics in the sample. The early termination-adjusted weight for the j-th responding worksite in primary stratum h and substratum i is

$$w^{ET}_{hij} = a_{hi}\, w_{hij},$$

where $a_{hi}$ represents the adjustment factor for responding worksites in primary stratum h and substratum i, and $w_{hij}$ represents the design-based sampling weight shown in Equation (2). If sampling is not terminated early in primary stratum h and substratum i, then $a_{hi} = 1$.
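The sketch below illustrates one weight-preserving way to compute the adjustment factor, in which the base weight carried by early-terminated cases is spread evenly over respondents in the same substratum. The ratio form is an assumption consistent with the description above, not necessarily the production formula.

```python
# Hedged sketch of the early-termination adjustment factor a_hi.

def termination_factor(resp_weights, terminated_weights):
    """Ratio that reallocates the base weight of early-terminated
    cases to respondents, preserving total weight in the cell."""
    resp_total = sum(resp_weights)
    if not terminated_weights:
        return 1.0                      # no early termination: a_hi = 1
    return (resp_total + sum(terminated_weights)) / resp_total

base_w = 95.0                           # common base weight in the cell
respondents = [base_w] * 40             # 40 responding worksites
terminated = [base_w] * 10              # 10 cases whose fielding stopped
a_hi = termination_factor(respondents, terminated)
adjusted = [a_hi * w for w in respondents]
print(round(a_hi, 3))                   # 1.25
print(round(sum(adjusted), 1))          # 4750.0 = total weight preserved
```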
Nonresponse adjustment
The early termination-adjusted weights are further adjusted for worksite nonresponse with use of a generalized exponential model (GEM). The GEM calibration is a generalization of the well-known weighting class approach, the iterative proportional fitting algorithm that is generally used for poststratification adjustments, Deville and Särndal’s (1992) logit method, and Folsom and Witt’s (1994) constrained logistic and exponential modeling approach. The GEM calibration process causes the weighted distribution of the respondents to match specified distributions simultaneously for all of the variables included in the model. One advantage of the GEM method over simpler weighting class or poststratification adjustments is that the adjustment model can use a larger and more diverse set of control variables because main effects and lower-order interactions can be used in the model, rather than complete cross-classifications. Folsom and Singh (2000) described the GEM method in a paper presented to the American Statistical Association.
To summarize, a set of predictor, or adjustment, variables is specified, together with the control total that the weighted sample is expected to match for each variable. The GEM method is designed to determine a weight adjustment factor for each respondent such that, for any single predictor variable x,

$$\sum_{j} w^{ET}_{j}\, a_{j}\, x_{j} = T_{x},$$

where the summation is over all respondents j, $x_j$ is an adjustment variable in the model, $w^{ET}_j$ is the early termination-adjusted weight, $a_j$ is the adjustment factor, and $T_x$ is the control total for the variable x. $T_x$ may be either a nonresponse adjustment control total estimated by the sum of base sampling weights for both respondents and nonrespondents or an external control total used to adjust for under- or overcoverage of the frame. The adjustment factors $a_j$ are determined to match the control totals for all of the variables in the model simultaneously. Furthermore, upper and lower bounds on the weight adjustment factors can be set to reduce the influence of observations that otherwise might have received a very large weight adjustment. The upper and lower bounds also reduce the effect of unequal weighting that may result from uncontrolled weight adjustments.
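For intuition, the toy example below uses simple raking (the iterative proportional fitting algorithm, one of the methods GEM generalizes) to match weighted margins to control totals. It omits GEM's bounded exponential adjustment factors and is not an implementation of GEM itself.

```python
# Toy raking (iterative proportional fitting) example: adjust weights
# so weighted totals match control totals on every margin.
import numpy as np

def rake(weights, margins, targets, max_iter=50, tol=1e-8):
    """Iteratively scale weights within each level of each margin
    until all weighted totals match their control totals."""
    w = weights.astype(float).copy()
    for _ in range(max_iter):
        max_gap = 0.0
        for var, controls in targets.items():
            for level, total in controls.items():
                mask = margins[var] == level
                current = w[mask].sum()
                if current > 0:
                    max_gap = max(max_gap, abs(total - current))
                    w[mask] *= total / current
        if max_gap < tol:
            break
    return w

# Six respondents, two adjustment variables, illustrative controls.
w0 = np.full(6, 10.0)
margins = {"size": np.array(["S", "S", "S", "L", "L", "L"]),
           "ind":  np.array(["mfg", "svc", "svc", "mfg", "svc", "mfg"])}
targets = {"size": {"S": 45.0, "L": 30.0},
           "ind":  {"mfg": 40.0, "svc": 35.0}}
w = rake(w0, margins, targets)
print(w.round(2), w.sum())   # margins now match; total = 75.0
```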
Within each primary stratum for the core survey sample, and at the national level for the sample of optional supplemental survey items, the worksite weights are adjusted using the GEM method with a model that may contain different combinations of the following variables:1
industry group used for sampling;
worksite size;
headquarters/branch type;
ZIP code information using decennial census data for the quartile distribution of owner-occupied housing;
urban or rural;
time zone; and
two-way interactions between industry group and worksite size.
Variable selection proceeds by first fitting a model containing only main effects and tightening the upper and lower bounds to minimize any adjustment to the base sampling weight while simultaneously minimizing any increase in the unequal weighting effect (UWE).2 Two-way interactions among the variables are then added to the model. Cells that do not contain any respondents or that are collinear with other cells are removed from the model. If a convergent model cannot be obtained, some covariate levels are collapsed together. Other variables or interactions may be removed from the model until a convergent model is obtained (i.e., a solution is found given all constraints) that maintains as many of the covariates and their two-way interactions as possible.
Because data collection is planned to occur over a period of less than 24 months, no significant changes in the worksite counts per stratum or substratum are expected. Given that the D&B frame comprises the sampling universe and all implemented adjustments are designed to produce adjusted weight totals that match the frame counts of worksites, no additional weighting adjustments for under- or overcoverage (i.e., post-stratification) will be necessary.
Estimates will be produced at the national level, for each CDC region, and for hospital worksites. Additional estimates will be produced for industry groups and worksite size categories where sample sizes are conducive to reliable estimation. The estimates will consist of scale means and percentage estimates. The standard deviation will be available for each item mean as a measure of response variation among respondents.
Rates of item nonresponse will be reviewed before the estimates are produced. If item nonresponse is low (less than 30 percent, per OMB standards), no imputation will be conducted and no value for missing items will be assumed for estimation. In this case, for each item, if respondents do not provide an answer to a particular question, they would be excluded from both the numerator and the denominator of the estimated mean. If higher rates of item nonresponse (greater than 30 percent) are observed, a nonresponse bias analysis will be performed using worksite characteristics available from the frame (e.g., worksite size, industry group). If the nonresponse analysis suggests the potential for bias, imputation may be used to address the item nonresponse. If any imputation is performed, an imputation flag will be used to identify imputed values so data users are aware of the imputation.
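A minimal sketch of this screening rule follows, with hypothetical item names; the 30 percent threshold is the one stated above.

```python
# Sketch of the item-nonresponse screen: compute each item's missing
# rate and flag items above the 30 percent threshold that triggers a
# nonresponse bias analysis. Item names are hypothetical.
import pandas as pd

def item_nonresponse(df, items, threshold=0.30):
    """Return per-item nonresponse rates and bias-analysis flags."""
    rates = df[items].isna().mean()
    return pd.DataFrame({"nonresponse_rate": rates,
                         "needs_bias_analysis": rates > threshold})

survey = pd.DataFrame({
    "offers_screening": [1, 0, None, 1, 1, None, 0, 1],
    "has_wellness_committee": [None, None, None, 1, 0, None, 1, 0],
})
print(item_nonresponse(survey, ["offers_screening",
                                "has_wellness_committee"]))
```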
Variances will be estimated with the first-order Taylor series approximation of deviations of estimates from their expected values. These design-based variance estimates will be computed with SUDAAN® software (RTI International, 2012). The estimates properly account for the combined effects of stratification, unequal weighting, and clustering, although no clustering effects are expected in the WHA data. The estimated variances will be used to compute both the standard errors associated with each mean or percentage and the corresponding confidence intervals (CIs). Standard error estimates and 95% CIs will be included with all estimates of means and proportions.
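The sketch below approximates what such a linearization computes for a weighted mean under a stratified, unclustered design; it is for illustration only and is not the SUDAAN implementation. The data are invented.

```python
# Taylor-series (linearization) variance of a weighted mean under a
# stratified design with independently sampled worksites.
import numpy as np

def taylor_var_mean(y, w, stratum):
    """Weighted mean sum(w*y)/sum(w) and its linearization variance,
    combining linearized variates with the stratified
    with-replacement variance formula."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    mean = np.sum(w * y) / np.sum(w)
    u = w * (y - mean) / np.sum(w)          # linearized variates
    var = 0.0
    for h in np.unique(stratum):
        uh = u[stratum == h]
        nh = len(uh)
        var += nh / (nh - 1) * np.sum((uh - uh.mean()) ** 2)
    return mean, var

y = [1, 0, 1, 1, 0, 1, 0, 0]                # e.g., offers a program
w = [50, 60, 55, 40, 90, 85, 70, 65]        # analysis weights
strata = np.array([1, 1, 1, 1, 2, 2, 2, 2])
mean, var = taylor_var_mean(y, w, strata)
se = var ** 0.5
print(f"mean={mean:.3f}, 95% CI=({mean-1.96*se:.3f}, {mean+1.96*se:.3f})")
```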
An establishment response rate of approximately 35 percent is expected.
B.2 Procedures for the Collection of Information

Data collection operations will be conducted at the implementation contractor's Research Operations Center in Raleigh, NC. For the WHA recruiting effort, RTI will hire up to 30 specially trained Business Liaisons (BLs) whose expertise is in contacting and gaining cooperation from business establishments. An onsite project supervisor will oversee the BLs' daily activities, including the recording and monitoring of a selection of calls to ensure call quality and accuracy.
CDC will offer online, paper, and telephone options to respondents. Attachment C-1 presents the instrument completed by telephone or on paper, and Attachment C-2 provides screen shots of the Web-based survey. The Web-based survey will be accessible on computers as well as mobile devices. CDC will encourage respondents to complete the online version because it is the most efficient mode and eliminates most possibilities for respondent or data entry errors. The survey is also programmed as a computer-assisted telephone interview to allow BLs to complete the survey over the telephone with respondents who choose this mode. Project staff will enter data collected with the paper version of the survey directly into the Web-based portal.
All participating employers will be given the option to complete the supplemental survey items after they complete the core survey. The online, telephone, and paper-and-pencil versions of the survey instrument will inform participants when they have completed the core survey and will provide the option of continuing with the supplemental items, or concluding their participation at that point.
Data collection activities are described in more detail in the section below and summarized in Exhibit B2.
Exhibit B2. Flowchart of WHA Data Collection Activities
Conduct Business Liaison Training
Shortly after receiving OMB clearance, BLs will participate in an 8-hour training that will include general interview training and project-specific training, both led by project staff. The training program will be designed to address the specific protocols and procedures of the WHA, and will provide BLs with hands-on training with the survey instrument. Each training component will be reinforced with group discussion and interaction, trainer demonstrations, and classroom practice and discussion. In addition to the training program, the implementation contractor will prepare a BL manual for use by data collection staff. This manual will serve as both a training tool and as a procedural guide during data collection. After training, a copy of the manual will be available at each work station for use as a reference during data collection.
Conduct Tracing Activities
Because experience has shown that D&B sample lists can sometimes be incomplete or outdated, centralized tracing activities will be carried out by the implementation contractor’s in-house Tracing Operations Unit (TOPS). This unit comprises a well-trained, professional staff specializing in tracing and locating sample members of all types. The supervisors, team leaders, quality control specialists, and tracing specialists of TOPS are highly skilled at locating hard-to-reach businesses. Interactive tracing will be conducted as needed during the first 4 months of the data collection period.
Make Initial Phone Contacts
BLs will begin screening and recruiting businesses to participate in the WHA data collection effort. The objectives of the initial phone call include:
Verifying the name and address of the establishment.
Identifying the point-of-contact; ideally a wellness program coordinator or human resources representative. (The initial call to the point-of-contact may be a separate call and the BL may be redirected again.)
Describing the purpose and benefits of the survey, the details of participation, and data collection mode options.
Obtaining cooperation and verifying contact information.
Determining the point-of-contact’s preferred mode of data collection.
Because conversational rather than scripted approaches are more effective at establishing rapport with respondents and increasing participation rates, BLs will be provided with “talking points” to guide their interaction with the point-of-contact. BLs will be trained to listen and interact effectively and in a comfortable style. BLs will document all information gathered during each conversation into the WHA case management system. Once participation has been secured over the phone, BLs will send an e-mail to the point of contact, containing a welcome letter with informed consent language, project information, the project helpdesk phone number and e-mail, and their unique user ID to access the Web survey.
Send Reminder Emails
RTI will send reminder e-mails to businesses 7 days after each initial phone call with the point-of-contact. E-mails will be generated from the case management system and sent to non-respondents. The message will cover the talking points of the initial phone contact, including a review of the options for completing the survey, and will provide a project phone number and e-mail address for respondents to use if they have questions.
Conduct Follow-Up Phone Contacts
Follow-up phone contacts will be made to non-responders 14 days after the initial phone call. The purpose of the follow-up calls will be to encourage respondents to submit their data as soon as possible. The BLs will use this opportunity to explore and overcome possible obstacles to workplaces’ participation. For example, if points-of-contact have misplaced the invitation email with the link and log in information, project staff will resend it. Outbound calls and their outcomes will be recorded in the case management system.
Send Final E-mail with Web Link and Attached Paper Questionnaire
Approximately three weeks after the initial call, the case management system will generate a final e-mail to non-responders stressing the importance of the survey and reminding them of their Web survey user ID. This email will also include an electronic version of the paper questionnaire as an attachment, should the point-of-contact decide to complete it in this format to return via e-mail.
Provide Technical Assistance to Sites
We anticipate that the most common requests will involve basic help logging into the Web survey and questions about how to respond to particular survey items. The project contact information, including a toll-free project telephone number and e-mail address, will be provided to the point-of-contact during or following the initial phone call. Project staff will monitor the inquiries to ensure that concerns and questions are addressed.
Monitor Data Collection
Our centralized case management system database will maintain a history of activities for each case in the sample in the form of event and status codes, along with relevant information about each sample member. Our document receipt systems and project data files will interface with the case management system to update the status of each case with little lag time between when an activity is performed and when the case status is updated. RTI will generate weekly response rate reports directly from this database to ensure that the most current survey response information is reported. Project staff will routinely review reports and conduct biweekly quality control meetings with the WHA BLs to discuss challenges, identify root causes of nonresponse, and recommend strategies to increase response.
B.3 Methods to Maximize Response Rates

Once OMB clearance is received, RTI will implement the approved WHA data collection plan. Although the WHA sampling strategy was developed for an overall response rate of approximately 35 percent, our data collection approach—which is based on Dillman's (2000) Tailored Design Method for mail and internet surveys—is designed to optimize response rates. To that end, RTI will use the following proven strategies to maximize response rates and minimize non-response:
Offer alternate modes of survey completion. Our experience has shown that when respondents are provided with flexible, multiple methods for submitting their data, they are more likely to comply than if they are given fewer submission options. The O*NET Data Collection Program, which has been conducted by RTI for 13 years, offers both mail and Web options and yields a cumulative questionnaire return rate of 65%. Although workplaces will be encouraged to complete the survey via the Web option, they will have the option to complete the survey over the telephone with a Business Liaison, or via a paper-and-pencil version that can be mailed to them.
Include an initial telephone contact to identify an appropriate point of contact, establish rapport, and verify contact information. In their review of establishment mail survey response rates, Paxson, Dillman, and Tarnai (1995) found that establishment surveys featuring direct personal contact with respondents typically have higher response rates than surveys using anonymous mailings. Furthermore, one of the principles in Dillman's (2000) Tailored Design strategies specific to business establishment surveys is to identify the appropriate respondent up front and develop multiple ways of contacting that person. On this study, Business Liaisons will have the opportunity to identify the most appropriate point-of-contact and establish rapport with them through an unscripted but structured conversation.
Conduct a series of personalized contacts and correspondence. Dillman (2000) suggests maximizing response rates with four carefully timed contacts that support one another through their wording and timing. For the WHA survey, RTI will make the following contacts (a scheduling sketch follows this list):
Initial call to determine eligibility and recruit participation.
Thank you/reminder e-mail to non-responders 7 days following the initial contact to reiterate the purpose of the study, offer the multiple modes of response, and provide the unique login and password information for the Web survey.
Telephone follow-up call to non-responders 14 days after the initial contact call to re-establish rapport, answer questions, address obstacles to participation, and encourage workplaces to complete the survey as soon as possible.
Second reminder e-mail to non-responders 21 days following initial contact that will be the same as the first email reminder, but will also include an electronic version of the paper survey as an attachment.
Use high-quality staff. RTI will use trained Business Liaisons to contact workplaces. Because of the partially unscripted nature of the calls that will be conducted on this survey and the challenges of securing participation from organizations, BL job candidates will be carefully screened and evaluated by RTI project staff. Candidates will be selected on the basis of a track record of successful work experience in contacting and gaining cooperation from business establishments.
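As referenced above, the sketch below generates the planned contact dates from an initial call date. The case management system's actual scheduling logic is not described here, so this is an illustrative assumption.

```python
# Illustrative four-contact schedule (days 0, 7, 14, 21), per the
# contact sequence described above.
from datetime import date, timedelta

CONTACT_SCHEDULE = [
    (0,  "Initial screening/recruiting call"),
    (7,  "Thank you/reminder e-mail"),
    (14, "Telephone follow-up call"),
    (21, "Second reminder e-mail with paper survey attached"),
]

def contact_dates(initial_call):
    """Return each planned contact with its scheduled date."""
    return [(initial_call + timedelta(days=offset), label)
            for offset, label in CONTACT_SCHEDULE]

for d, label in contact_dates(date(2017, 1, 9)):   # hypothetical start
    print(d.isoformat(), "-", label)
```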
B.4 Tests of Procedures or Methods to Be Undertaken

CDC developed the Workplace Health in America survey instrument in collaboration with subject matter experts at CDC, NIOSH, RTI International, the University of North Carolina, and several other organizations. The WHA team, including subject matter experts from CDC, the University of North Carolina, and RTI (the implementation contractor), provided input on the content of the survey and the data collection protocol required to meet the goals for the survey. The draft survey underwent cognitive testing with respondents representing worksites to ensure the survey items were interpreted as intended. The survey design team revised items that cognitive testing participants indicated were unclear or potentially confusing. The design team also embedded definitions of potentially unfamiliar terms within the online survey instrument to help ensure consistent interpretation by all participants.
CDC conducted cognitive testing of the survey items for clarity and understanding with a small number of mid-size and large external employers (n=5). Based on the results of the cognitive testing, we revised some items and defined certain terms within the survey. CDC tested the online survey instrument with a small number of employer representatives for timing (n=3).
B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

CDC developed the Workplace Health in America survey instrument and information collection plan in collaboration with staff at CDC, NIOSH, RTI International, and the University of North Carolina, as well as a number of external subject matter experts in worksite health promotion. CDC also discussed the survey content and proposed information collection with a broad variety of colleagues representing the CDC National Center for Chronic Disease Prevention and Health Promotion Workplace Workgroup.
Consultations involving Staff from CDC and NIOSH

Dyann Matson-Koffman, Health Scientist, Office of the Associate Director for Science, CDC/Office of the Director | Phone: (404) 639-4783 | Email: DMatsonKoffman@cdc.gov
Pamela Allweiss, Medical Officer, Division of Diabetes Translation, CDC/ONDIEH/NCCDPHP | Phone: (770) 488-1154 | Email: Pca8@cdc.gov
Casey Chosewood, Senior Medical Officer for Total Worker Health™, National Institute for Occupational Safety and Health | Phone: (404) 498-2483 | Email: LChosewood@cdc.gov
Jeannie Nigam, Research Psychologist, Division of Applied Research and Technology, National Institute for Occupational Safety and Health | Phone: (513) 533-8284 | Email: JNigam@cdc.gov
CDC will provide overall direction for the Workplace Health in America Survey, leading regular planning and coordination meetings with contractor staff on topics including the data collection plan, aggregate results, and benchmarking data.
The implementation contractor, RTI International, will recruit and collect survey data from the nationally representative sample of worksites. RTI will also analyze and report survey results.
The principal contacts for each organization are listed below:
Staff from CDC

Jason Lang, Team Lead, Workplace Health Programs, CDC/ONDIEH/NCCDPHP | Phone: (770) 488-5597 | Email: jlang@cdc.gov

Implementation Contractor

Laurie Cluff, Project Director, RTI International | Phone: (919) 541-6514 | Email: lcluff@rti.org
Michael Penne, Sampling and Analysis Lead, RTI International | Phone: (919) 541-5988 | Email: penne@rti.org
Sarah Harris, Data Collection Lead, RTI International | Phone: (919) 541-7486 | Email: harris@rti.org
Laura Linnan, Instrument Development Lead, University of North Carolina | Phone: (919) 843-8044 | Email: linnan@email.unc.edu
References

Deville, J. C., & Särndal, C. E. (1992). Calibration estimation in survey sampling. Journal of the American Statistical Association, 87(418), 376–382.
Dillman, D.A. (2000). Mail and internet surveys: The tailored design method (2nd ed.). New York: Wiley.
Folsom, R. E., & Singh, A. C. (2000). A generalized exponential model of sampling weight calibration for extreme values, nonresponse and poststratification. In Proceedings of the American Statistical Association, Section on Survey Research Methods (pp. 598–603). Washington, DC: American Statistical Association. Available from https://www.amstat.org/Sections/Srms/Proceedings/
Folsom, R. E., & Witt, M. B. (1994). Testing a new attrition nonresponse adjustment method for SIPP. In Proceedings of the American Statistical Association, Social Statistics Section. Washington, DC: American Statistical Association.
Paxson, M. C., Dillman, D. A., & Tarnai, J. (1995). Improving response to business mail surveys. In B. G. Cox et al. (Eds.), Business survey methods. New York: Wiley.
RTI International. (2012). SUDAAN language manual, release 11.0.0. Research Triangle Park, NC: RTI International.
1 Empirical evidence from other studies showed that response rates differ across the levels of these characteristics.
2 The UWE measures the increase in the variance of an estimate due to unequal weighting, above the variance that a sample of the same size with equal weights would yield. The UWE is estimated by $n \sum_{i=1}^{n} w_i^{2} \big/ \left( \sum_{i=1}^{n} w_i \right)^{2}$.