The target population for the Census Barriers, Attitudes, and Motivators Survey II (CBAMS II) is all civilian, non-institutionalized residents of the United States (citizens and non-citizens) who are 18 years of age or older. Within the target population, there are key demographic segments that have historically been hard to count (HTC): high-density areas with ethnic enclaves; unattached, mobile singles; and areas with high concentrations of economically disadvantaged families.
To ensure inclusion in CBAMS II, we will conduct personal-visit interviews in four hard-to-count strata: American Indians, Hispanics, Asians, and economically disadvantaged households residing in rural areas. Previous research indicates that these populations would be under-represented in a random digit dial (RDD) telephone survey. These are the same groups that were interviewed in person in CBAMS I.
We will also include a cell-phone sample. This sample accomplishes two key sampling objectives:
Increases population coverage. Nearly 25 percent of households nationwide are cell-only, i.e., they have no traditional residential landline (Blumberg & Luke, 2010).
Reaches the unattached/mobile/single segment. Blumberg and Luke (2010) report that the odds of being cell-only are higher for younger, unmarried people, as well as for those who rent their homes and live with unrelated roommates.
The sampling plan for CBAMS II is very similar to the original CBAMS. The main difference is the increase in the number of cell phone interviews. This modification is a reflection of continued changes in telecommunications behavior.
At the time of CBAMS I, the Census Bureau was conducting a dress rehearsal for the 2010 Census in two geographic areas: San Joaquin County, CA; and the City of Fayetteville, NC, with nine surrounding counties (Chatham, Cumberland, Harnett, Hoke, Lee, Montgomery, Moore, Richmond, and Scotland). These two areas were excluded from the CBAMS I sampling to avoid public confusion and to avoid overburdening these populations. They will be included in CBAMS II.
Using the Census Planning Database (CPD) tract-level statistics from Census 2000, we stratify tracts into the following groups:
American Indian Reservations: Census tracts located on American Indian reservations with a high concentration of American Indian population (40% or more).
High Hispanic population density: Census tracts with a high percentage of Hispanic population (60% or more) and linguistic isolation (20% or more).
High Asian population density: Census tracts with a high percentage of Asian population (60% or more) and linguistic isolation (20% or more).
Rural economically-disadvantaged: Rural Census tracts with a high percentage of population living in poverty (30% or more).
Big-market: Census tracts in large media markets, defined as the 10 largest Designated Market Areas (DMAs) in terms of television households.
High HTC score: Top 20 percent of tracts in terms of HTC.
Mid HTC score: Tracts between the top 20 percent and the lowest 50 percent of HTC scores.
Low HTC score: Lowest 50 percent of tracts in terms of HTC.
Mid-market: Census tracts in medium-sized media markets as defined by DMAs with 600,000 to 2,000,000 television households.
Substrata: (a) High HTC, (b) Mid HTC, (c) Low HTC.
Small-market: Census tracts in small media markets, defined as DMAs with fewer than 600,000 television households.
Substrata: (a) High HTC, (b) Mid HTC, (c) Low HTC.
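For illustration, the stratification rules above can be expressed as a small classifier. This is a sketch only: the field names are hypothetical, the big-market test substitutes a household-count threshold for the actual top-10 DMA list, and the HTC percentile cut points reflect our reading of the High/Mid/Low definitions.

```python
def assign_stratum(tract):
    """Return the sampling stratum for a tract-level record (a dict).

    Field names are hypothetical placeholders for the Census Planning
    Database tract statistics described in the text.
    """
    # Personal-visit strata 1-4 take precedence.
    if tract["on_reservation"] and tract["pct_american_indian"] >= 40:
        return "American Indian Reservations"
    if tract["pct_hispanic"] >= 60 and tract["pct_linguistically_isolated"] >= 20:
        return "High Hispanic population density"
    if tract["pct_asian"] >= 60 and tract["pct_linguistically_isolated"] >= 20:
        return "High Asian population density"
    if tract["rural"] and tract["pct_poverty"] >= 30:
        return "Rural economically-disadvantaged"
    # Remaining tracts fall into telephone strata by DMA size, then HTC score.
    # (Assumption: the top-10 DMAs all exceed 2,000,000 TV households.)
    if tract["dma_tv_households"] >= 2_000_000:
        market = "Big-market"
    elif tract["dma_tv_households"] >= 600_000:
        market = "Mid-market"
    else:
        market = "Small-market"
    if tract["htc_percentile"] >= 80:      # top 20 percent of HTC scores
        htc = "High HTC"
    elif tract["htc_percentile"] >= 50:    # between top 20% and lowest 50%
        htc = "Mid HTC"
    else:
        htc = "Low HTC"
    return f"{market} / {htc}"
```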
In order to understand stratification and populations selected for in-person interviewing, we must distinguish between hard-to-count and hard-to-reach. For example, young black males are historically hard-to-count, but they can be reached in our telephone samples especially in the high HTC score substrata for the big-, mid-, and small-markets. However, the hard-to-count and hard-to-reach groups we intend to capture via in-person interviewing are more geographically isolated and much less likely to be reached by the telephone sample.
We select the sample of addresses in two stages. First, we select a sample of 20 sites from strata 1-4 (five per stratum) with probability proportional to size (PPS), where the number of households in the tract is the measure of size. Sites are one or more census tracts; tracts with fewer than 500 housing units are clustered with a nearby census tract.
A systematic PPS sample of Census tracts (m) is sampled from each stratum with the tracts sorted by state and county FIPS code and Census tract number. The sites selected for CBAMS I are excluded from the area frame for CBAMS II. Sites in Alaska and Hawaii are also excluded from the area frame.
Within each tract, a systematic sample of n addresses is selected, with the addresses sorted by delivery sequence number. An equal number of addresses will be selected from each selected site so that the sample is self-weighting within each stratum.
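The two-stage selection described above (a systematic PPS sample of tracts, then a systematic sample of addresses sorted by delivery sequence) can be sketched as follows. This is an illustrative implementation, assuming the units are pre-sorted as described and no tract's measure of size exceeds the sampling interval.

```python
import random

def systematic_pps(units, sizes, m, seed=None):
    """Systematic PPS sample of m units.

    sizes are the measures of size (households per tract); units should be
    pre-sorted (here: by state/county FIPS code and tract number).
    """
    rng = random.Random(seed)
    total = sum(sizes)
    step = total / m                  # sampling interval
    start = rng.uniform(0, step)      # random start within the interval
    picks, cum, j = [], 0.0, 0
    for k in range(m):
        target = start + k * step
        # Advance to the unit whose cumulative-size range contains the target.
        while cum + sizes[j] <= target:
            cum += sizes[j]
            j += 1
        picks.append(units[j])
    return picks

def systematic_sample(addresses, n, seed=None):
    """Equal-probability systematic sample of n addresses, with the
    addresses sorted by delivery sequence number."""
    rng = random.Random(seed)
    step = len(addresses) / n
    start = rng.uniform(0, step)
    return [addresses[int(start + k * step)] for k in range(n)]
```

Because an equal number of addresses is drawn from each PPS-selected site, the resulting sample is self-weighting within each stratum.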
Addresses will be selected from the USPS Delivery Sequence File (DSF). The DSF includes both single-family-style addresses and residential property addresses such as those used for apartments, condominiums, and trailer properties. We will not include non-city-style addresses (e.g., Post Office Boxes) in the frame.
We will select 100 addresses from each site and the target number of interviews per site is at least 40.
Table 1. Address Sample Plan
| Address Sampling – Personal visit | Sites | Minimum Interviews | Expected Interviews |
| --- | --- | --- | --- |
| American Indian Reservations | 5 | 200 | 250 |
| High Hispanic population density | 5 | 200 | 250 |
| High Asian population density | 5 | 200 | 250 |
| Rural economically-disadvantaged | 5 | 200 | 250 |
| Total | 20 | 800 | 1,000 |
This sample will be a dual-frame of landline and cell phones. Interviewing cell phone respondents is more expensive than landline interviewing. Therefore, we use an optimal allocation that factors the cost per interview into the equation to minimize the variance of survey estimates. This allocation is “optimal” in that no other allocation results in lower variance for the same cost—it is the most statistically efficient allocation. The allocation is based on reaching the optimal number of cell phone-only user respondents (“cell-only”) relative to respondents with a landline. To determine this number, we will use a cell-only percentage of 25 percent (the latest national estimate of cell-only is 24.5%). We also assume a cell-only interview to be five times the cost of a landline interview. Based on these parameters, the optimal allocation is 13 percent cell-only and 87 percent landline (including dual-users—respondents who have both a cell phone and a landline—and landline-only).
Based on our experience, we expect 40-50 percent of all cell interviews to be cell-only respondents and the remainder to be dual-users. This means we will reach many dual-users in the course of interviewing cell-only respondents. In fact, we will need to allocate 30 percent of the interviews to cell phone in order to have 13 percent cell-only.
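The allocation arithmetic above can be reproduced with a cost-weighted (Neyman-style) allocation, assuming equal unit variances across the two groups; the 43 percent cell-only yield used below is a hypothetical midpoint of the 40-50 percent range.

```python
import math

def optimal_allocation(shares, costs):
    """Cost-optimal allocation across groups: the sample share for group h
    is proportional to W_h / sqrt(c_h), assuming equal unit variances.

    shares: population proportions; costs: relative cost per interview.
    """
    raw = [w / math.sqrt(c) for w, c in zip(shares, costs)]
    total = sum(raw)
    return [r / total for r in raw]

# Population: 25% cell-only, 75% with a landline; a cell-only interview
# costs five times a landline interview.
cell_only, landline = optimal_allocation([0.25, 0.75], [5.0, 1.0])
print(round(cell_only, 2), round(landline, 2))   # 0.13 0.87

# If only ~43% of cell interviews turn out to be cell-only (the rest are
# dual-users), the cell frame's share of all interviews must be about
# 0.13 / 0.43, i.e., roughly 30 percent.
cell_frame_share = 0.13 / 0.43
```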
We will equally allocate sample to strata defined by media market size. Within each stratum, we will oversample geographic areas that were HTC in Census 2000.
Table 2. Telephone Sample Plan
| Telephone Sampling | Interviews |
| --- | --- |
| Big-Market | 700 |
| High HTC | 310 |
| Mid HTC | 230 |
| Low HTC | 160 |
| Mid-Market | 700 |
| High HTC | 310 |
| Mid HTC | 230 |
| Low HTC | 160 |
| Small-Market | 700 |
| High HTC | 310 |
| Mid HTC | 230 |
| Low HTC | 160 |
| Total | 2,100 |
| National cell phone sample | 900 |
The landline sample is selected from a stratified, list-assisted frame. To build a list-assisted sampling frame, directory-listed telephone numbers are mapped and assigned to a specific geographic location (such as a census block group, a census tract, or a ZIP code). Telephone lines are not restricted by geographic borders, but are generally associated with finite geographic areas. The mapping results in a many-to-many association between telephone exchanges and geographic boundaries (i.e., many exchanges associated with many geographic areas). The association between geographic area and telephone exchange is quantified by tallying the number of directory-listed households in each geographic area by exchange combination. Each geographic area is assigned to the telephone exchange with the largest number of listed telephones (the rule of plurality). After each geographic area has been assigned to an exchange, the exchanges inherit the demographic and socioeconomic characteristics of the geographic areas. These exchange characteristics can be used for targeting geographic areas with certain characteristics, such as HTC scores.
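The rule of plurality amounts to a simple tally. A minimal sketch (geography and exchange identifiers are made up):

```python
from collections import defaultdict

def assign_geos_to_exchanges(listings):
    """Assign each geographic area to the exchange with the largest number
    of directory-listed households (the rule of plurality).

    listings: iterable of (geo_id, exchange) pairs, one per listed household.
    """
    tally = defaultdict(int)
    for geo, exch in listings:
        tally[(geo, exch)] += 1
    best = {}
    for (geo, exch), n in tally.items():
        # Keep the exchange with the highest listed-household count per geo.
        if geo not in best or n > best[geo][1]:
            best[geo] = (exch, n)
    return {geo: exch for geo, (exch, _) in best.items()}

# Tract T1 has 3 listings in exchange 555 and 1 in 666; T2 has 2 in 666.
mapping = assign_geos_to_exchanges(
    [("T1", "555")] * 3 + [("T1", "666")] + [("T2", "666")] * 2
)
print(mapping)   # {'T1': '555', 'T2': '666'}
```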
After mapping the telephone exchanges, all possible telephone numbers are divided into blocks (or banks) of 100 numbers. A 100-block is the series of 100 telephone numbers defined by the last two digits of a 10-digit phone number: for telephone numbers with the first eight digits in common, there are 100 possible combinations of the last two digits (00-99), and together these form one 100-block. To enhance efficiency (and reduce costs), 100-blocks without any directory-listed telephone numbers (zero-blocks) are excluded, or truncated, from the sampling frame. The exclusion of zero-blocks reduces frame coverage but considerably increases productivity. The remaining 100-blocks, those with at least one listed residential number (1+ blocks), comprise the sampling frame, which is referred to as a truncated, list-assisted frame because the listed telephone numbers improve sampling efficiency. All possible telephone numbers in 1+ blocks, both listed and unlisted, are eligible for selection through RDD with equal probability.
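A minimal sketch of building the truncated, list-assisted frame from listed numbers (the phone numbers below are fictional placeholders):

```python
def one_plus_blocks(listed_numbers):
    """Build the truncated, list-assisted frame: keep only 100-blocks
    (identified by the first 8 digits of a 10-digit number) containing at
    least one listed residential number."""
    return {num[:8] for num in listed_numbers}

def rdd_frame_size(blocks):
    """Every number in a retained 1+ block, listed or unlisted, is eligible,
    so the frame holds 100 numbers per block."""
    return 100 * len(blocks)

# Two listed numbers share a 100-block; the third is in a different block.
listed = ["2025550134", "2025550188", "3035550200"]
blocks = one_plus_blocks(listed)
print(len(blocks), rdd_frame_size(blocks))   # 2 200
```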
A study of zero-blocks in 1999 found that, nationally, only about 3.5 percent of residential numbers are in zero-blocks (Tucker, Lepkowski, & Piekarski, 2002). Two recent studies of the coverage loss from excluding zero-blocks produced conflicting results: Fahimi, Kulp, and Brick (2009) report that 20 percent of residential numbers are in zero-blocks, while Boyle, Bucuvalas, Piekarski, and Weiss (2009) report 5 percent, nearly unchanged from a decade earlier. A third study, presented at the 2010 conference of the American Association for Public Opinion Research and based on an address-based sample conducted by Arbitron, suggests that 4.3 percent of residential landline numbers are located in zero-blocks (Gentry & Tupek, 2010). Given the current evidence, we recommend maintaining the list-assisted methodology as described, which is considerably more efficient than including zero-blocks.
We will select the landline sample using our in-house RDD sampling system (Genesys from MSG, Inc.1).
Cell Phone sample
The cell phone sample will be a national RDD sample of telephone numbers from a frame of known cell phone exchanges. We will purchase the cell phone RDD sample from MSG.
Deduplication
The randomly selected landline and cell phone numbers will be matched against the CBAMS I samples. All matching numbers will be removed from the CBAMS II sample.
One weight, a "case weight," will be calculated for each respondent. Case weights can be used for combined analysis of the cell phone sample, the landline sample, and the address sample.
For CBAMS II, the address sample is restricted to census tracts (or groups of tracts) that met the criteria for strata 1-4. The landline sample will be a national random digit dial (RDD) sample excluding telephone exchanges primarily associated with tracts in strata 1-4. Together, the landline and address samples will represent a national stratified sampling design.2 We will treat the address and landline samples as a dual-frame design without overlap. The cell phone sample will be a national RDD sample that overlaps with both the landline sample and the address sample.
The weighting plan has these steps:
Separately weight the cell phone, landline and address sample based on the inverse of the selection probability.
Adjust the landline and cell phone samples for three types of nonresponse using ratio adjustments for:
Unresolved telephone status (working or not);
Unknown eligibility (such as when the respondent hangs up before we establish eligibility); and
Interview nonresponse (when the respondent breaks off in the middle of the survey).
Combine the landline and address samples:
Adjust each weighted sample to the population totals for each stratum; and
Add the landline and address cases.
Combine landline/address sample with the cell phone.
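Steps 1 and 2 can be sketched as follows. This is a simplification: production adjustments would typically be made within weighting classes rather than over the whole sample.

```python
def base_weight(selection_prob):
    """Step 1: the design weight is the inverse of the selection probability."""
    return 1.0 / selection_prob

def ratio_adjust(weights, responded):
    """Step 2 (one nonresponse type): a ratio adjustment transfers the weight
    of nonresponding cases to responding cases, preserving the weighted total.

    weights: case weights; responded: parallel list of booleans.
    """
    total = sum(weights)
    resp_total = sum(w for w, r in zip(weights, responded) if r)
    factor = total / resp_total
    return [w * factor if r else 0.0 for w, r in zip(weights, responded)]

# The adjustment is applied in sequence for unresolved telephone status,
# unknown eligibility, and interview nonresponse.
```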
Since the cell phone frame and the combined landline/address frame overlap, we have the following sample groups:
a1: Landline/address respondents without a cell phone;
b1: Landline/address respondents with a cell phone;
b2: Cell phone respondents with a landline; and
c2: Cell phone respondents without a landline.
Each survey includes questions to identify group membership. Note that in-person (PAPI) respondents who report no phone at all will be grouped with the landline/address respondents without a cell phone (landline-only). While these respondents technically belong to none of the groups above, they will not be a representative sample of the "no phone" population because of the limited geographic coverage of the PAPI interviews.
The landline/address sample and the cell sample are independently weighted to benchmarks for the population group they are meant to represent. This is for two reasons:
Dual-users are overrepresented since they are eligible in both samples, and
Differential response rates between dual-users and cell-only respondents in the cell phone sample.
Dual users are further classified as cell-mostly, true-dual, or landline-mostly.
The benchmark for the phone groups is the National Health Interview Survey (NHIS). The NHIS is an in-person household survey that collects information about cell phone and landline availability; it provides national estimates of the cell-only, landline-only, and dual-user populations.
After weighting to NHIS, we have two independent estimates of the dual user groups. To combine the two estimates, we will average the two sets of weights (both are weighted to the population) with a composite weight based on sample size and estimated design effect.
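The compositing factor can be illustrated as below; the sample sizes and design effects shown are hypothetical.

```python
def composite_factor(n1, deff1, n2, deff2):
    """Compositing factor for combining two independent estimates of the
    dual-user group, based on effective sample sizes (n / design effect)."""
    neff1, neff2 = n1 / deff1, n2 / deff2
    return neff1 / (neff1 + neff2)

def composite_weights(w_frame1, w_frame2, lam):
    """Scale each frame's dual-user weights by lam and (1 - lam); since both
    sets are weighted to the same population, the scaled sets together form
    one combined, population-consistent set of weights."""
    return [w * lam for w in w_frame1] + [w * (1 - lam) for w in w_frame2]

# Hypothetical inputs: 600 dual users from the landline/address sample
# (design effect 1.5) and 400 from the cell sample (design effect 2.0).
lam = composite_factor(600, 1.5, 400, 2.0)
```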
The last step is to poststratify the combined sample and calibrate the weighted data to reflect population distributions based on the 2010 Census. The calibration is a raking adjustment with five dimensions: age×sex, race×Hispanic origin, tenure×household size, age×educational attainment, and Census division.
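A minimal sketch of the raking adjustment (iterative proportional fitting); for brevity it uses two toy dimensions rather than the five named above.

```python
def rake(weights, categories, targets, iterations=20):
    """Iterative proportional fitting (raking).

    categories: one dict per dimension mapping case index -> category.
    targets: matching dicts of population totals per category.
    After convergence, the weighted margins match the targets on every
    dimension (exactly on the last-raked dimension, approximately otherwise).
    """
    w = list(weights)
    for _ in range(iterations):
        for cat, tgt in zip(categories, targets):
            # Current weighted total per category in this dimension.
            cur = {}
            for i, wi in enumerate(w):
                cur[cat[i]] = cur.get(cat[i], 0.0) + wi
            # Scale each case so the margin hits the population target.
            for i in range(len(w)):
                w[i] *= tgt[cat[i]] / cur[cat[i]]
    return w

# Toy example: four cases, two dimensions (e.g., a simplified age group
# and Census division), population totals chosen for illustration.
w = rake(
    [1.0, 1.0, 1.0, 1.0],
    [{0: "a", 1: "a", 2: "b", 3: "b"}, {0: "x", 1: "y", 2: "x", 3: "y"}],
    [{"a": 60.0, "b": 40.0}, {"x": 50.0, "y": 50.0}],
)
```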
Major features of CBAMS II data collection protocols appear in Table 3.
Table 3: Procedures for the Collection of CBAMS II Data
| | In-person | Landline Phone | Cell Phone |
| --- | --- | --- | --- |
| Sample | Certain hard-to-count populations | The entire U.S., including Alaska and Hawaii | The entire U.S., including Alaska and Hawaii |
| Initial contact | Pre-notification letter to all homes | Pre-notification letter to listed numbers | No pre-notification (numbers are unlisted) |
| Primary means of contact | Home visits | Automatic dialing | Manual dialing (as required by law) |
| If no one is home | "Sorry we missed you" card | Voicemail message on some attempts | Voicemail message on some attempts |
| Eligible respondent | Anyone in a residential housing unit who is 18 or older | Anyone in a residential housing unit who is 18 or older | The person who answers the cell phone, if 18 or older, even if he or she has a landline at home |
| Respondent selection | A random adult in the home | A random adult in the home | The person who answers the cell phone |
| Incentive | Eligible respondent receives $10 | No incentive | No incentive |
Our in-person field team will be responsible for interviewer and supervisor recruitment and training, conducting the in-person interviews, quality assurance, and data management. To conduct the in-person interviews for CBAMS II, we will:
Send pre-notification letters,
Train supervisors and interviewers,
Conduct interviews and distribute $10 incentives,
Verify interviews, and
Enter, check, and clean survey data.
Pre-notification letters are an important part of our strategy for achieving high response rates. They increase the perceived legitimacy of the survey, especially for respondents whose homes will be visited. We will print and mail the letters using pre-sorted, first-class postage three to five days before in-person contact begins.
Each interviewer will attend a full-day training seminar. Half of the training session will be devoted to a detailed item-by-item review of the questionnaire and related forms. The other half will focus on sample management, record-keeping, maximizing response rate, and reporting requirements. Interviewers will conduct mock interviews with one another to further increase familiarity with the questionnaire, potential problem areas, and with the mechanics of administering the interview.
The supervisors and interviewers will be trained by ICF Macro's Assistant Field Managers, members of ICF Macro's permanent professional staff. The Managers prepare a written interviewer training manual, which is distributed before the session so interviewers can study the material in advance. The manual also serves as a reference guide during fielding.
Supervisors will receive the same interviewer training plus an extra half-day of training on supervision, assignment areas, staffing, record-keeping, and reporting. Supervisors also attend the interviewer training to support the training effort and meet the interviewers they will be supervising.
The target number of interviews for each of the 20 sites is at least 40. In CBAMS I, we averaged over 50 interviews per site, for more than 1,000 across all sites. As discussed in the sampling section, 100 addresses will be selected per site. Interviews will be conducted in English, Spanish, Vietnamese, and Chinese.
Interviewers will be instructed to contact each household in their assignment as early as possible in the field period. Each household will receive up to four contact attempts on varying days of the week (i.e., weekdays, Saturday, and Sunday) and at varying times of day (i.e., morning, early and late afternoon, and early and late evening). Interviewers will record the day, date, time, and result of each contact attempt for that household.
A “Sorry I Missed You” card (see Attachment C) will be left if no one is home. The card provides a brief description of the study and asks the household to contact the interviewer at the number provided.
Initial refusals will be revisited at a different time and on a different day for a second attempt at an interview. As appropriate, a refusal may be reassigned to another interviewer working in the site. After two refusals, the case will be discussed with the interviewer's supervisor, and further contact will be suspended pending a decision by the Field Administrators and the Field Manager.
To increase cooperation, interviewers will provide a $10 gift to the eligible member of the selected household. Participation in the survey is not required for the gift.
On a weekly basis, interviewers will mail their completed questionnaires to ICF Macro’s secure Burlington, VT facility. Following a quality review, each survey will be checked into the sample management database which houses all addresses assigned in each site.
The checked-in surveys will then be sent to data entry. Each questionnaire will be manually keyed with 100 percent independent verification—that is, each questionnaire and form is keyed twice and discrepancies are flagged for immediate resolution. The data entry program includes real-time logic and consistency checks; independent error-checking programs based on variable relationships to identify data anomalies; and an external review of a subset of records. Data entry specialists enter all data in every questionnaire whether or not it is consistent with skip patterns. During data processing, skip inconsistencies will be cleaned out of the data according to rules established by the Census Bureau and ICF Macro during the planning phase.
Some in-person data may deviate from the skip pattern. We could replace surveys with such errors, but much of the data is often useful, so the research team will establish rules for cleaning in-person data. These rules might include:
Questions that are answered inappropriately based on responses to previous skip questions are coded as missing.
Questions that are inappropriately skipped are coded as “inappropriately skipped”.
Single answer questions with multiple marks are coded as missing.
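The first two cleaning rules above can be sketched as a small routine; the question names and the rule table are hypothetical.

```python
MISSING = None

def clean_skip_patterns(record, skip_rules):
    """Apply skip-pattern cleaning rules to one keyed questionnaire record.

    skip_rules maps a follow-up question to (gate_question, answer_that_makes_
    the_follow-up_applicable). A follow-up answered when it should have been
    skipped is coded as missing; one skipped when it was applicable is coded
    as 'INAPPROPRIATELY SKIPPED'.
    """
    cleaned = dict(record)
    for followup, (gate, applicable_answer) in skip_rules.items():
        applicable = cleaned.get(gate) == applicable_answer
        answered = cleaned.get(followup) is not None
        if answered and not applicable:
            cleaned[followup] = MISSING
        elif not answered and applicable:
            cleaned[followup] = "INAPPROPRIATELY SKIPPED"
    return cleaned

# Hypothetical rule: q2 should only be answered when q1 == "yes".
rules = {"q2": ("q1", "yes")}
```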
We will conduct CBAMS II with the help of M. Davis & Company, Inc. (MDAC). MDAC will conduct telephone interviews in Spanish, dedicating a special bilingual team to phone numbers in locations with high densities of Hispanic residents. To conduct the telephone interviews, we will:
Train interviewers,
Send pre-notification letters, and
Conduct interviewing in English and Spanish.
All interviewers receive 16 hours of base training when they join ICF Macro's interviewing team. This training covers appropriate interviewing manner, consistency of survey delivery, and refusal conversion approaches. In addition, interviewers will be trained by professional project staff to conduct CBAMS II. MDAC interviewers will attend the same project-specific training that ICF Macro interviewers attend. This four-hour training will cover the survey's content and purpose as well as specific problem areas observed in CBAMS I; role-playing and mock interviews will be a significant part of the session. Interviewers will be provided with a customized FAQ with responses they can read to respondents who have questions about the survey or who are reluctant to participate. Bureau research team members are welcome to attend the training and to monitor interviews.
We will prepare and send pre-notification letters to all available addresses for the study’s telephone component. These letters can only be sent to listed, landline numbers since these are the only numbers for which addresses are available.
ICF Macro and MDAC will conduct telephone interviews of landline and cell phone users in English or Spanish. Data entry of survey responses occurs in real-time as the survey is administered. While the data are collected by interviewers at multiple locations, the data are stored on a centralized secure server in Burlington, VT.
For CBAMS II, each landline number will be dialed at least 10 times or until its status is resolved (e.g., complete, non-working, etc.). Cell phone numbers will be dialed at least six times or until the status is resolved. Attempts will be spaced across days, including weekends and across times of day.
During data collection, automated quality control processes run nightly to monitor the data collected by the CATI survey instrument and/or the in-person paper survey data entry program. Data from in person and telephone surveys will be combined and weighted according to the procedures outlined in Section B.1 above.
Before delivery of the final dataset, the data are checked by the automatic program that confirms that the skip logic is correct for all telephone and in-person records. The operations manager also reviews the contents of the file and a frequency distribution of all survey questions and computed variables. The final data file will contain a record for every sample element with a variable that shows the final outcome of the survey effort (the disposition).
Table 4 shows our quality control and assurance procedures for all phases of the CBAMS II project.
Table 4: Quality Assurance Processes
| Phase | Tasks |
| --- | --- |
| Telephone Data Collection | Testing of CATI program; CATI pretest; advance letters; interviewer training; CATI quality assurance; preparation of data files |
| In-person Data Collection | Questionnaire testing; interviewer training; advance letters; interviewer monitoring; interview verification; preparation of data files |
| Weighting and Data Analysis | Weighting; cross tabulations; custom data analysis |
Our proposed methods to minimize nonresponse are listed below:
Multi-mode research (landline, cell, and in-person) is necessary to cover all populations.
Nearly 25 percent of adults live in homes with only cell phones, and a further 16 percent live in homes where cell phones are used for most, or all, calls (Blumberg & Luke, 2010). The research includes both landline and cell phone interviewing.
In-person interviewing is the best way to reach hard to count populations, and using interviewers recruited locally establishes rapport and encourages survey response. The research includes in person interviewing in Hispanic, Asian, and American Indian communities and in rural, economically disadvantaged locations.
We incorporated several research elements intended to increase the probability that we will reach key survey populations other than the majority English-speaking culture.
We will interview in English, Spanish, Chinese (Mandarin and Cantonese) and Vietnamese.
We will use a team translation approach to ensure that survey translations are as culturally appropriate as possible.
For in person interviewing, we will recruit interviewers from the target community to enhance rapport.
Prenotification letters generally promote survey response (Edwards et al., 2009; De Leeuw, Callegaro, Hox, Korendijk, & Lensvelt-Mulders, 2007), so they are included for both the landline telephone and in-person data collection.
The specific contents of the letter matter as well.
A recent meta-analysis found that including information about confidentiality increases participation (Edwards, et al., 2009). The prenotification letter emphasizes anonymity.
Response from people who are suspicious of the Census is particularly important. The letters are from “ICF Macro, an independent research firm”.
The first step in promoting response is promoting contact. We have planned for maximizing contact rates:
Absent households selected for in-person administration will receive a "Sorry we missed you" card.
We will leave voicemail messages on the first and fourth non-contacts on the phone.
Each in-person household will receive up to four contact attempts on varying days of the week (i.e., weekdays, Saturday, and Sunday) and at varying times of day (i.e., morning, early and late afternoon, and early and late evening).
Each landline number will be dialed at least 10 times or until its status is resolved (e.g., complete, non-working, etc.); cell phone numbers will be dialed at least six times or until the status is resolved. Attempts will be spaced across days, including weekends, and across times of day.
Another important part of promoting response is averting refusals once contact is made.
In-person households will receive a second visit after an initial refusal.
Landline sample members will receive up to two refusal conversion attempts by specially trained interviewers.
Cell sample members will receive one refusal conversion attempt by a specially trained interviewer.
All interviewers will have a customized FAQ with specific responses they can make to concerned or suspicious respondents.
A final step in promoting survey response after making contact and eliciting cooperation is preventing survey breakoff. We will minimize breakoff by:
Maintaining reasonable length,
Avoiding intrusive or “quiz” questions at the beginning of the survey, and
Ensuring that every survey question is necessary for a stated research goal (to reduce respondent burden).
For CBAMS II, maximizing response rates is one part of a plan to minimize the impact of nonresponse on the data. However, response rates are not always good indicators of data quality. In addition to maximizing survey and item response through the methods above, we will weight the final survey data according to the plan in Section B.1 to ensure that they are representative of the US population.
We will evaluate response to the RDD sample as it relates to socioeconomic and demographic environmental variables. The environmental variables will include tract information concerning race/ethnicity, educational status, urbanicity, tenure, and other related neighborhood descriptors. In addition, we will evaluate nonresponse based on the Census Bureau’s HTC score (for Census 2000). The data for this analysis will come from the 2010 Census planning database.
The questionnaire was cognitively tested with eight respondents over the telephone. The sampling plan for the cognitive interviews called for interviews to be conducted with at least four males, at least one census mail non-respondent, and at least two respondents who were not white. Interviewing took place from October 28, 2010 through November 5, 2010. All cognitive interviews were conducted by phone except the final one, which was conducted in person. Respondents were mailed a $50 honorarium as thanks for their participation. The cognitive interviewing report is attached.
As cell phones become ubiquitous and more people screen their calls, protocol elements such as voicemail messages and caller ID become more important for achieving survey contact. Research on the impact of voicemail on response is mixed and concerned largely with landlines (see Holbrook, Krosnick, & Pfent, 2008, for a review). For CBAMS II, we will test different message content to see whether what we say influences the probability of survey contact. We will use appeals that have been shown to be major drivers of survey response when used in precontact: a promise of anonymity and a message designed to make the survey relevant and important (Edwards et al., 2009). In addition, we may manipulate the number or schedule of messages to see whether the timing of messages influences response.
Randy Zuwallack, MS
ICF Macro
802-264-3724
Frederica Conrey, PhD
ICF Macro
802-264-3785
Mike A Lotti
Accretive Insights
(m) 585.734.1216
Peter V. Miller, PhD
Department of Communication Studies
Northwestern University
Past President, American Association for Public Opinion Research
847 491 5835
Bates, N., Conrey, F., Zuwallack, R., Billia, D., Harris, V., Jacobsen, L., and White, T. (2009). “Messaging to America: Census Barriers, Attitudes and Motivators Survey Results.” 2010 Census Integrated Communication Research Memorandum Series: No. 10. U.S. Census Bureau. May 12, 2009.
Blumberg, S., & Luke, J. (2010, May). Wireless substitution: Early release of estimates from the National Health Interview Survey, July-December 2008. Retrieved August 30, 2010, from National Center for Health Statistics: http://www.cdc.gov/nchs/nhis.htm
Blumberg, S., & Luke, J. (2010, May 12). Wireless Substitution: Early Release of Estimates From the National Health Interview Survey, July-December 2009. Retrieved August 1, 2010, from Centers for Disease Control and Prevention, Publications and Information Products: http://www.cdc.gov/nchs/data/nhis/earlyrelease/wireless201005.htm
Boyle, J., Bucuvalas, M., Piekarski, L., & Weiss, A. (2009). Zero Banks: Coverage Error and Bias in RDD Samples Based on Hundred Banks with Listed Numbers. Public Opinion Quarterly, 73: 729-750.
De Leeuw, E., Callegaro, M., Hox, J., Korendijk, E., & Lensvelt-Mulders, G. (2007). The influence of advance letters on response in telephone surveys: A meta-analysis. Public Opinion Quarterly, 413-433.
Edwards, P., Roberts, I., Clarke, M. J., DiGuiseppi, C., Wentz, R., Kwan, I., et al. (2009). Methods to increase response to postal and electronic questionnaires (Review). In The Cochrane Library. The Cochrane Collaboration, John Wiley & Sons, Ltd.
Fahimi, M., Kulp, D., & Brick, J. (2009). A Reassessment of List-Assisted RDD Methodology. Public Opinion Quarterly, 73: 751-760.
Federal Communications Commission. (2010, May 5). 14th Mobile Wireless Competition Report. Retrieved November 1, 2010, from http://wireless.fcc.gov/index.htm?job=reports
Gentry, R., & Tupek, A. (2010). How Much Coverage Does an RDD Frame Really Provide? AAPOR. Chicago, IL.
Holbrook, A., Krosnick, J., & Pfent, A. (2008). The Causes and Consequences of Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms. In J. M. Lepkowski, C. Tucker, et al. (Eds.), Advances in Telephone Survey Methodology (pp. 488-528). John Wiley & Sons, Inc.
Orme, Bryan and Johnson, Rich. (2009) Typing Tools that Work: MaxDiff scaling puts new respondents into existing segments. Marketing Research Magazine, Summer 2009.
Tucker, C., Lepkowski, J., & Piekarski, L. (2002). The Current Efficiency of List-Assisted Telephone Sampling Designs. Public Opinion Quarterly, 66: 321-338.
1 The Genesys frame is updated quarterly using the Bell Communications Research (BELLCORE) valid area code-exchange database and keyed residential and business listings from major providers.
2 We will exclude the census tracts in strata 1-4 when developing the RDD frame for strata 5-7. Since exchange-to-geography associations are not exact (i.e., many tracts may be associated with many telephone exchanges), it is possible that some telephone numbers selected from the RDD frame could reach households located in a census tract assigned to strata 1-4. In CBAMS I, this happened less than 2 percent of the time.