Attitudes toward Electronic Health Information Exchange and Associated Privacy and Security Aspects

OMB: 0990-0359


B. Collections of Information Employing Statistical Methods


1. Respondent Universe and Sampling Methods


B1a. Respondent Universe


The respondent universe for the Attitudes toward Electronic Health Information Exchange and Associated Privacy and Security Aspects survey is the English-speaking, civilian, noninstitutionalized population aged 18 years and older within the 50 states and the District of Columbia. We propose limiting this version of the survey to English-speaking adults because the survey questions are conceptually complex, and it is important to understand this topic area better before expanding the study to include non-English-speaking adults. Translating legal and health care concepts such as privacy, security, and electronic health information exchange is a complex undertaking that will require the investment of significant resources. We believe it is important to first gain a better understanding of these complex issues and concepts through the proposed survey before taking the next steps of refining the survey based on the data collected and translating it into other languages.



B1b. Sampling Plan

Based on the latest National Health Interview Survey (NHIS), 80.1% of people in the United States have a cell phone and 14.8% have only a cell phone. Based on these figures, we anticipate that 42.7% of the sample will come from cell phone respondents (i.e., cell-phone-only respondents plus those with both a landline and a cell phone who respond by cell phone). In all, we will select an initial sample of 10,795 cell phone numbers and 14,620 landline numbers. Table 1-1 shows the assumptions used to derive these sample sizes; a worked illustration of how they combine follows the table. Under these assumptions, the survey will collect data from approximately 2,570 respondents. As with any complex survey, design effects must be accounted for when calculating an effective sample size. For this study, design effects arise from individuals having multiple telephone lines, appearing on both frames, and unequal weighting. We anticipate that these design effects will range from 1.2 to 1.49, depending on the frame from which a respondent is identified. Once these design effects are accounted for, we project achieving our desired effective sample size of 2,000.



Table 1-1. Assumptions Used to Determine the Sample Size of Telephone Numbers

Assumption                         Landline Frame    Cell Phone Frame
Working number rate                0.619             0.686
Resolution rate                    0.855             n/a
Identified household rate          0.734             0.951
Initial screening rate             0.331             n/a
Household with adult 18+ rate      0.98              0.991
Cooperation rate                   0.80              n/a
Screening & cooperation rate       n/a               0.157
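
To illustrate how these assumptions combine, the following is a minimal sketch in Python (for illustration only; it is not part of the study software). It multiplies each frame's initial sample size through the sequential rates in Table 1-1 and then applies the assumed design effect range stated above:

    # Sketch: expected completed interviews under the Table 1-1 assumptions.
    # Rates marked "n/a" in the table do not apply to that frame and are omitted.

    landline_rates = [0.619, 0.855, 0.734, 0.331, 0.98, 0.80]   # landline frame
    cell_rates = [0.686, 0.951, 0.991, 0.157]                   # cell phone frame

    def expected_completes(n_numbers, rates):
        """Multiply the initial sample of numbers through each sequential rate."""
        total = float(n_numbers)
        for rate in rates:
            total *= rate
        return total

    landline = expected_completes(14620, landline_rates)    # ~1,474 completes
    cell = expected_completes(10795, cell_rates)            # ~1,096 completes
    completes = landline + cell                             # ~2,570 completes

    # Effective sample size = completes / design effect (assumed range 1.2-1.49).
    for deff in (1.2, 1.49):
        print("deff=%.2f: effective n = %.0f" % (deff, completes / deff))

With design effects between 1.2 and 1.49, the roughly 2,570 expected completes correspond to effective sample sizes between about 1,700 and 2,100, bracketing the target of 2,000.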



Adults with only a cell phone are two to three times more likely to be under 35 years old (Tucker, Brick, Meekins, and Morganstein, 2004). Furthermore, 56% of those with a landline phone also have a cell phone, and there is limited evidence that respondents with both types of service can differ depending on whether they are reached and interviewed through the landline or the cell phone frame (Kennedy, 2007); they likely differ in which phone service they use most (Blumberg and Luke, 2008). This motivates selecting 1) adults with only cell phones and 2) adults who have both cell and landline phones but are reached through their cell phones. To maximize coverage of the target population, a dual-frame design that randomly samples landline and cell phone numbers will be used. The use of the dual-frame design to reduce undercoverage is discussed in more detail in section B3b.

While there will be overlap between the cell phone frame and the landline frame, the gains in coverage achieved by sampling both frames, and the efficiency gained by not excluding adults with both types of phone service, outweigh the resulting design effect from including adults with multiple selection probabilities who can be selected from either frame.

The national sample will be allocated across 16 strata: 8 cell phone frame strata and 8 landline frame strata. The strata are defined by crossing phone service group (i.e., cell phone only; cell phone and landline, reached by cell phone; cell phone and landline, reached by landline; and landline only) with Census Region, so that each frame contributes two phone service groups crossed with the four Census Regions. The sample of telephone numbers will be allocated proportionally across Census Regions within each phone service group.
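
As a concrete illustration of proportional allocation, the sketch below (Python, illustrative only; the regional population shares shown are hypothetical placeholders, not study values) allocates one frame's sample of telephone numbers across Census Regions in proportion to population, using a largest-remainder rule so the allocation sums exactly to the frame total:

    # Sketch: proportional allocation of a frame's sample across Census Regions.
    # The regional shares below are hypothetical placeholders.

    region_share = {"Northeast": 0.18, "Midwest": 0.22, "South": 0.37, "West": 0.23}

    def allocate(n_total, shares):
        """Allocate n_total proportionally; largest remainders absorb rounding."""
        raw = {k: n_total * s for k, s in shares.items()}
        alloc = {k: int(v) for k, v in raw.items()}
        shortfall = n_total - sum(alloc.values())
        # Give the leftover units to the largest fractional remainders.
        for k in sorted(raw, key=lambda r: raw[r] - int(raw[r]), reverse=True)[:shortfall]:
            alloc[k] += 1
        return alloc

    print(allocate(14620, region_share))
    # -> {'Northeast': 2632, 'Midwest': 3216, 'South': 5409, 'West': 3363}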

B1c. Sampling Frames

For the landline strata, the recommended sampling frame is maintained by Marketing Systems Group (MSG) using its Genesys application. The sampling frame for the landline strata will include all telephone numbers in each state within active 100-number banks for all valid telephone exchanges; exchanges known to be limited to cell phones will be excluded from the landline strata. A 100-number bank is a sequence of 100 consecutive telephone numbers sharing the same leading eight digits. For the cell phone strata, the sample of cell phone numbers will come from Telcordia's database of prefixes and 100-number banks and will be purchased from Survey Sampling International.

B1d. Sample Allocation and Precision

Based on the anticipated response and screening rates and the desired number of respondents, RTI proposes selecting a random sample of 25,415 telephone numbers to achieve the study's design goals. This includes a sample of 10,795 cell phone numbers; because cell phone numbers cannot be prescreened for known nonworking and business numbers, more numbers must be dialed to yield the same number of completed interviews.

B1e. Responsive Design

Landline and cell phone samples perform differently for many reasons, including the inability to prescreen cell phone samples for known nonworking and business numbers and the airtime costs borne by cell phone owners. The cost per interview in each sample will be closely monitored, and the allocation between landline and cell phone numbers may be adjusted during data collection to achieve a more nearly optimal allocation. Releasing the sample in replicates provides some control during the initial weeks of data collection, if needed.

Males, particularly in the landline frame, tend to respond at lower rates than females, so the proportion of male and female respondents will also be monitored. The instrument will provide the capacity to change the selection probability for males and females in households with adults of both sexes during data collection. If the percentage of respondents who are male drops below 40%, oversampling of males will be considered.

To address nonresponse, a nonresponse protocol will be implemented. Briefly, the nonresponse phase is a protocol implemented at some point during survey recruitment to decrease nonresponse and gain information from sample members who have not yet chosen to respond or participate. Importantly, the point at which the nonresponse protocol is implemented during the recruitment process has cost implications; for example, a phone number could be moved into the nonresponse recruitment protocol after 5 unsuccessful attempts or after 15. As described in section B.3.c, this approach is "responsive," maintaining the most effective data collection.

B1f. Response Rates


For the proposed survey, the response rate will be computed using the American Association for Public Opinion Research (AAPOR) response rate #4 formula (AAPOR, 2008). The AAPOR calculation is a standard developed by survey researchers and established as a requirement by a leading journal for survey methodology (Public Opinion Quarterly). This particular formula is the most commonly implemented formula that 1) accounts for ineligibility among cases with unknown eligibility and 2) treats partial interviews (by respondents who have answered all pre-identified essential questions) as interviews.
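
For reference, AAPOR response rate #4 can be written as follows (notation as in the AAPOR Standard Definitions, reproduced here for the reader's convenience):

    \mathrm{RR4} = \frac{I + P}{(I + P) + (R + NC + O) + e\,(UH + UO)}

where I is the number of complete interviews, P the number of partial interviews, R refusals and break-offs, NC non-contacts, O other eligible non-interviews, UH cases of unknown eligibility at the household level, UO other cases of unknown eligibility, and e the estimated proportion of cases of unknown eligibility that are in fact eligible.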

Higher response rates do not necessarily translate to lower nonresponse bias, as surveys with lower response rates may exhibit less nonresponse bias in some estimates than surveys with higher response rates (Groves, 2006).


The majority of the loss in the total sample is due to the rate of working numbers and the screening out of ineligibles. Once a household has been screened and determined to be eligible, we expect the response rate to be 50%, which would be consistent with the results of similar RDD surveys. Thus, the final completed data collection is expected to be representative of the adult English-speaking population of interest due to 1) methods that ensure a good response rate, and 2) the responsive design methodology (section B1e) that addresses any differential nonresponse that may occur during data collection.


2. Procedures for the Collection of Information


B2a. Interviewer Training


RTI will hire approximately 28 telephone interviewers, which will require conducting two project trainings. The short data collection period should mitigate the need for additional trainings due to attrition. The goal of the training program is to ensure that interviewers understand the survey instrument and all project procedures and requirements, with particular attention to quality control and refusal avoidance. All staff will have a telephone interviewer manual to serve as a training tool and a procedural guide during data collection. Each training session will be scheduled for 4 hours, which allows sufficient time to cover three important components: study content and procedures, practice, and certification to work on the project.


Throughout data collection, interviewers will be monitored to check the quality of their work and to identify areas needing additional training or clarification. Silent audio and video monitoring of interviewers will take place throughout data collection, covering approximately 10% of all interviewing time. Interviewers are scored on their performance during these sessions, which occur without their knowledge at the time, and are given written and verbal feedback on their performance. This process allows identification of individual interviewer performance issues, as well as larger issues that might affect the data collection; the information obtained is then used as a teaching tool for other interviewers, as appropriate.


B2b. Collection of Survey Data


RTI will use Computer-Assisted Telephone Interviewing (CATI) to conduct telephone interviews for this study. The survey is anticipated to take about 18 minutes. Interviewers will call each number in the sample and screen the household for eligibility to participate in the study. For general population sample cases, interviewers will establish that the household has at least one adult aged 18 or older. In households with multiple adults, the adult with the most recent birthday will be selected for participation (a common quasi-random within-household selection method). RTI's CATI case management system will control the delivery of individual cases to interviewers. A maximum of 15 call attempts will be made across days of the week and times of day in an effort to complete an interview at each telephone number.
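
As an illustration of the most recent birthday selection step, the following is a minimal Python sketch (illustrative only; the actual selection happens within the CATI instrument, and in practice the screener simply asks which adult in the household had the most recent birthday rather than collecting birth dates):

    from datetime import date

    def days_since_last_birthday(birthday, today):
        """Days elapsed since this person's most recent birthday.
        (February 29 birthdays are not handled in this simple sketch.)"""
        this_year = birthday.replace(year=today.year)
        last = this_year if this_year <= today else birthday.replace(year=today.year - 1)
        return (today - last).days

    def select_respondent(adult_birthdays, today):
        """Return the household adult whose birthday occurred most recently."""
        return min(adult_birthdays,
                   key=lambda name: days_since_last_birthday(adult_birthdays[name], today))

    # Hypothetical household with two adults:
    household = {"Adult A": date(1970, 3, 14), "Adult B": date(1985, 11, 2)}
    print(select_respondent(household, date(2010, 7, 7)))  # -> Adult A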


B2c. Quality Control


There are three main components to ensuring quality control during the data collection period: interviewer monitoring, feedback to interviewers from call center staff, and Quality Circle (QC) meetings. Interviewer monitoring plays a key role in determining whether the training modules are effective and whether procedures are being followed properly. Call center staff monitor approximately 10% of interviewing hours, consistent with the industry standard. All interviewers are aware that they may be monitored at any time, but they do not know specifically when they are being monitored. It is also important for interviewers to receive feedback on their performance throughout the data collection period; call center staff give each interviewer weekly updates on several performance measures. Information on interviewer productivity will be included in the weekly reports sent to the Office of the National Coordinator for Health Information Technology throughout the data collection period. Finally, project staff will hold weekly QC meetings with interviewers and supervisors to discuss data collection progress and issues.


B2d. Estimation procedure


All estimates will be weighted to account for the stratified dual-frame sample design, with additional post-survey adjustments for coverage and nonresponse. The latest National Health Interview Survey data and reported estimates will be used to adjust the selection weights when combining the landline and cell phone samples, informing both the relative size of each sampling frame and the demographic composition within each frame. Census population totals will then be used to adjust the combined sample to the U.S. adult population.


Variances of survey estimates will be computed using statistical software designed for survey data analysis (e.g., SAS and SUDAAN). These procedures, such as the CROSSTAB procedure in SUDAAN, take into account the complex survey design and unequal weighting; the Taylor series linearization option will be used for estimating variances of proportions.
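
To give a sense of what the Taylor series linearization computes for a weighted proportion, the following is a minimal Python sketch under a stratified, with-replacement design approximation (illustrative only; production analyses will use SUDAAN, and the data shown are made up):

    import numpy as np

    def weighted_proportion_variance(y, w, stratum):
        """Taylor series linearization variance of p_hat = sum(w*y) / sum(w),
        treating respondents as units sampled with replacement within strata."""
        y, w, stratum = map(np.asarray, (y, w, stratum))
        w_total = w.sum()
        p_hat = (w * y).sum() / w_total
        z = w * (y - p_hat) / w_total          # linearized (influence) values
        var = 0.0
        for h in np.unique(stratum):
            z_h = z[stratum == h]
            n_h = len(z_h)
            if n_h > 1:
                var += n_h / (n_h - 1) * ((z_h - z_h.mean()) ** 2).sum()
        return p_hat, var

    # Tiny worked example with made-up data:
    p, v = weighted_proportion_variance(
        y=[1, 0, 1, 1, 0, 1],
        w=[1.2, 0.8, 1.0, 1.5, 0.9, 1.1],
        stratum=["NE", "NE", "NE", "S", "S", "S"])
    print("p_hat = %.3f, SE = %.3f" % (p, v ** 0.5))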

3. Methods to Maximize Response Rates


B3a. Interviewer Training


Response rates vary greatly across interviewers (e.g., O'Muircheartaigh and Campanelli, 1999). Improving interviewer training has been found effective in increasing response rates, particularly among interviewers with lower response rates (Groves and McGonagle, 2001). For this reason, extensive interviewer training is a key aspect of the success of this data collection effort. The following interviewing procedures, all of which have been proven effective and are considered industry standard, will be used to maximize response rates:

  1. Interviewers will be briefed on the potential challenges of administering a survey and well-defined conversion procedures will be established.

  2. If a respondent initially declines to participate, a member of the conversion staff will re-contact the respondent to explain the importance of participation. Conversion interviewers are highly experienced telephone interviewers who have demonstrated success in eliciting cooperation. At no time will staff pressure or coerce a potential respondent to change their mind.

  3. Should a respondent interrupt an interview, for example to tend to a household matter, the respondent will be given two options: (1) the interviewer will reschedule the interview for completion at a later time, or (2) the respondent will be given a toll-free number designated specifically for this project so he or she can call back and complete the interview at a convenient time.

  4. Conversion staff will be able to provide a reluctant respondent with the name and telephone number of the contractor’s project manager who can provide respondents with additional information regarding the importance of their participation.

  5. The contractor will establish a toll-free number, dedicated to the project, so potential respondents may call to confirm the study’s legitimacy.


Special attention will be given to scheduling callbacks and to refusal procedures. Examples include:

  • A detailed definition of when a refusal is considered final

  • Monitoring of hang-ups that occur during the interview, and finalization of the case once the maximum number of allowed hang-ups is reached

  • Calling during weekdays from 9am to 9pm, Saturdays from 9am to 6pm, and Sundays from noon to 9pm (respondent's time)

  • Spreading call attempts across all days of the week and times of day within these windows


B3b. Methods to Maximize Coverage

As briefly described in the sampling plan, approximately 15% of adults in the U.S. have a cell phone but no landline in the household (Blumberg and Luke, 2008). This substantial and growing percentage necessitates incorporating the cell-phone-only population, which would otherwise be missing from a landline telephone frame. To address this growing undercoverage problem, a dual-frame approach will be implemented with RDD samples of landline and cell phone numbers. Gaining cooperation on cell phones can be at least as challenging as on landlines; the intensive methods to increase response rates and reduce nonresponse bias described in section B3a will be implemented for both landline and cell phone samples.


Despite the dual-frame approach, additional bias may result from the differential likelihood of reaching respondents who have both types of telephone service, depending on which service they are contacted on. If individuals with both types of service were selected only through the landline frame, with the cell phone frame screened down to cell-phone-only adults, bias could result because people with both types of service tend to use their landlines most. To alleviate this potential problem and to increase the efficiency of data collection, adults with both types of service will be interviewed from either frame. Those with both cell phones and landlines who predominantly use their cell phones (and are therefore unlikely to be interviewed on a landline) will thus be more likely to be interviewed than if such procedures were not followed. The resulting added complexity in determining selection probabilities will be addressed through weighting, using the individual- and household-level telephone service questions asked during screening (see Attachment C).


B3c. Addressing Nonresponse Bias

Our approach to minimizing nonresponse and nonresponse bias has been discussed in previous sections of this document. In this section, we summarize those discussions into our overall strategy for minimizing bias:

  1. Responsive design. The first line of defense against nonresponse bias is to address the issue before it becomes a problem. The CATI system will continuously monitor demographic data during data collection, including gender, age range, cell phone vs. landline, and geographic indicators. If an indicator shows signs of becoming a possible source of nonresponse (i.e., response rates approximately 10% lower than expected), adjustments will be made to improve response rates within those sample subgroups. The type of adjustment will vary with the nature of the problem; for example, if response is uniformly lower than expected on the West Coast across all demographic subgroups, a simple adjustment to the time of the call may be all that is needed. Our general strategy in the responsive design will always favor adjustments to data collection (e.g., changes in call patterns, refusal conversion) that improve the response rate. Oversampling is another method but will be used as a last resort: it can boost the number of completed interviews for a particular subgroup, but, as discussed earlier, it may not address the source of the bias as well as the other approaches, and it increases the unequal weighting effect, which can increase variances and lower overall statistical power. In other words, it is a corrective measure that carries a statistical penalty the other approaches do not.

  2. Nonresponse analysis. After data collection and cleaning, the respondent data will be analyzed for patterns of nonresponse using generalized exponential models (GEMs). Nonresponse analysis requires data on both respondents and nonrespondents; because the sampling frame contains very limited data on nonrespondents, this analysis will focus primarily on geographic variables, cell phone and landline rates, and respondent gender. The data will be modeled to determine whether statistical differences exist between respondents and nonrespondents. If so, the GEM will adjust the initial sampling weights (the inverses of the selection probabilities, provided by the vendors) to account for any nonresponse patterns that are found.

  3. Post-stratification. Finally, RTI will use Census data to correct for deviations on population variables that are deemed important but are not available on the sampling frame. Population data are used as control totals, and the nonresponse-adjusted weights are aligned with these values. Variables such as race, ethnicity, and urban/rural indicators will likely be used. (A minimal sketch of this adjustment follows the summary paragraph below.)

By addressing nonresponse as it occurs during data collection and accounting for any remaining differences through the weights, bias from nonresponse will be minimized.
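
To make the post-stratification step concrete, the following is a minimal Python sketch of raking (iterative proportional fitting) to two margins. It is illustrative only: the margins, control totals, and starting weights are hypothetical, and the production adjustment will use the GEM-based calibration described above rather than this simplified procedure:

    import numpy as np

    def rake(weights, margins, controls, tol=1e-6, max_iter=100):
        """Adjust weights so weighted counts match the control totals
        for each categorical margin (iterative proportional fitting)."""
        w = np.asarray(weights, dtype=float).copy()
        for _ in range(max_iter):
            max_shift = 0.0
            for labels, ctrl in zip(margins, controls):
                labels = np.asarray(labels)
                for cat, target in ctrl.items():
                    mask = labels == cat
                    factor = target / w[mask].sum()
                    w[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1.0))
            if max_shift < tol:
                break
        return w

    # Hypothetical example: align six respondents to sex and region totals.
    sex = ["M", "F", "F", "M", "F", "M"]
    region = ["S", "S", "NE", "NE", "S", "NE"]
    controls = [{"M": 100.0, "F": 105.0}, {"S": 110.0, "NE": 95.0}]
    w = rake([1.0, 1.2, 0.9, 1.1, 1.0, 0.8], [sex, region], controls)
    print(np.round(w, 2), w.sum())  # weighted margins now match the controls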


4. Tests of Procedures


RTI will program and test the CATI system prior to the start of data collection. In addition, RTI will monitor data collection closely to ensure that the skip patterns and data collection procedures are operating correctly. We will review the collected data weekly for the first 2 weeks of the 8-week data collection period, and any necessary adjustments to the CATI instrument or survey protocols will be made during these initial weeks. An OMB Change Request will be submitted if there is an increase in burden.

5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


Individuals who have participated in designing the data collection:


Marcus Berzofsky (919) 316-3752 berzofsky@rti.org

Gina Kilpatrick (919) 541-7132 ginak@rti.org

Dawn Thomas-Banks (919) 541-6557 dbanks@rti.org

Linda Dimitropoulos (312) 456-5246 lld@rti.org


The following individuals from RTI International will participate in the collection of data:

Dawn Thomas-Banks (919) 541-6557 dbanks@rti.org

Gina Kilpatrick (919) 541-7132 ginak@rti.org

Linda Dimitropoulos (312) 456-5246 lld@rti.org


The following individuals from RTI International will participate in data analysis:

Scott Scheffler (919) 542-6570 sscheffler@rti.org

Linda Dimitropoulos (312) 456-5246 lld@rti.org

Gina Kilpatrick (919) 541-7132 ginak@rti.org


ATTACHMENTS

Attachment A: Sample Search Terms

Attachment B: Bibliography: Literature Review

Attachment C: Telephone Screening Script with Consent Text

Attachment D: Questionnaire

REFERENCES


American Association for Public Opinion Research (2008). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 5th edition. Lenexa, Kansas: AAPOR.

Deming WE. (1953). On a Probability Mechanism to Attain an Economic Balance Between the Resultant Error of Response and the Bias of Nonresponse. Journal of the American Statistical Association, 48, 743-772.


Fahimi M, Kulp D, & Brick JM. (2008). Bias in List-Assisted 100-Series RDD Sampling. Survey Practice. September 2008.

Fowler Jr FJ & Mangione TW. (1990). Standardized Survey Interviewing. Newbury Park: Sage Publications.


Goldman J. (1998). Protecting Privacy to Improve Health Care. Health Affairs, 17(6), Nov/Dec 1998.


Groves, R. M. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly 70(5): 646-675.


Groves R M, Couper MP, Presser S, Singer E, Tourangeau R, Acosta GP & Nelson L. (2006). Experiments in Producing Nonresponse Bias. Public Opinion Quarterly 70, 720-736.


Groves, RM & Heeringa S. (2006). Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs. Journal of the Royal Statistical Society Series A: Statistics in Society 169, 439-457.


Groves, R M & McGonagle, KA. (2001). A Theory-Guided Interviewer Training Protocol Regarding Survey Participation. Journal of Official Statistics 17, 249-265.


Groves, R M, Singer, E & Corning, A.(2000). Leverage-Saliency Theory of Survey Participation - Description and an Illustration. Public Opinion Quarterly, 64, 299-308.


Kennedy C. (2007). Constructing Weights for Landline and Cell Phone RDD Surveys. Paper presented at the Annual Meeting of the American Association for Public Opinion Research, May 17-20, Anaheim, CA.


Kish L. (1965). Survey Sampling. New York: John Wiley and Sons.



McCarty C. (2003). Differences in Response Rates Using Most Recent Versus Final Dispositions in Telephone Surveys. Public Opinion Quarterly, 67, 396-406.

O'Muircheartaigh, C. & Campanelli,P. (1999). A Multilevel Exploration of the Role of Interviewers in Survey Non-Response. Journal of the Royal Statistical Society, 162, 437-446.

Peytchev A, Baxter R, & Carley-Baxter LR. (in press). Not All Survey Effort is Equal: Reduction of Nonresponse Bias and Nonresponse Error. Public Opinion Quarterly.

Thornberry O & Massey J. (1988). Trends in United States Telephone Coverage Across Time and Subgroups. In R.M. Groves, P.P. Biemer, L.E. Lyberg, J.T. Massey, W.L. Nicholls, II, & J. Waksberg (Eds.), Telephone Survey Methodology. New York: Wiley.

Traugott, MW, Groves, RM & Lepkowski, J. (1987). Using Dual Frame Designs to Reduce Nonresponse in Telephone Surveys. Public Opinion Quarterly, 51, 522-539.


Tucker, C, Brick, JM., Meekins, B., & Morganstein, D. (2004). Household Telephone Service and Usage Patterns in the U.S. in 2004. Proceedings of the Section on Survey Research Methods, American Statistical Association, pp. 4528 -4534.


U.S. Bureau of Labor Statistics. http://data.bls.gov/PDQ/servlet/SurveyOutputServlet?request_action=wh&graph_name=CE_cesbref3


U.S. Census Bureau. http://www.census.gov/popest/national/national.html


Waksberg, J. (1978). Sampling Methods for Random Digit Dialing. Journal of the American Statistical Association, 73, 40-46.


Yu J & Cooper H. (1983). Quantitative Review of Research Design Effects on Response Rates to Questionnaires. Journal of Marketing Research, 20, 36-44.

