
Supporting Statement Part B


Generic Social Marketing & Consumer Testing Research CMS-10437


STATISTICAL METHODS

Data collection methods and procedures will vary; however, the primary purpose of these collections will be for internal management and communications development purposes; there are no plans to publish or otherwise release this information as official agency documents.


  1. Universe and Respondent Selection

The activities under this clearance involve a combination of qualitative and quantitative approaches. In most cases they will involve samples of self-selected customers, convenience samples, and quota samples, with respondents selected either to cover a broad range of consumers or to include specific characteristics related to certain products or services. In particular, for small sample qualitative studies and qualitative surveys using non-probability samples, limitations regarding the ability to generalize from the results will be noted. Such results will not be used to make statements representative of the universe of study, to produce formal statistical descriptions, or to generalize the data beyond the scope of the sample. The specific sample planned for each individual collection and the method for soliciting participation will be described fully in each collection request.


The methods used in this work are typical of the tools used by program managers to develop, change, or improve programs, products, or services. However, these data will not be used to make programmatic decisions. The accuracy, reliability, and applicability of the results of these methods are adequate for their purpose (see, e.g., Patton, 2011). The samples associated with this collection are not subjected to the same scrutiny as scientifically drawn samples where official Agency point estimates are published or otherwise released to the public.



  2. Procedures for Collecting Information

Specific questions for inclusion in any study would be drawn from the approved Item Bank. Data collection methods and procedures will vary, and the specifics will be provided with each collection request. The Agency expects to use a variety of methodologies for these collections. For example, the Agency or its contractors may use commercial survey-specific software to automate its collection and analysis of feedback. In addition to physical copies, information collection instruments may be electronically disseminated and/or posted on target pages of the Agency’s web site. Telephone scripts, personal interviews, and focus groups with professional guidance and moderation will also be used, and may be conducted via online data collection techniques if necessary. These materials will be shared with OMB for approval with each collection request.

When more precise quantitative information is called for, we will specify the target population and the sampling frames to be used. In general, such work would also specify an acceptable margin of sampling error and the criterion confidence level desired, following standard survey methods (e.g., Groves et al., 2009).



  3. Methods to Maximize Response and Non-Response Analysis Plan


Information collected under this generic clearance is not designed to yield generalizable quantitative findings; however, procedures to maximize consumer response will be employed so that an appropriately diverse set of participants is available for any study. For example, for telephone surveys CMS contractors would typically use a computer assisted telephone interviewing (CATI) mode of data collection. For both qualitative and quantitative studies, interviewers will be trained to communicate effectively with diverse audiences and to alleviate any concerns respondents may have regarding participation in the study and their CMS program benefits. Interviewers will be available during a wide range of times and will attempt to contact potential respondents at a time that is convenient. A toll-free number will be available to respondents so that they can get answers to any study-related questions.

In cases where more precise quantitative information is desired, standard survey approaches for monitoring response rates and conversion of non-respondents will be implemented.

Attention will be given throughout the survey design process to minimizing non-response (as suggested, e.g., by Halbesleben & Whitman, 2013).

The survey response rate expresses completed interviews as a percentage of estimated eligible units, but it can be decomposed into three component rates if we assume that the working residential and eligibility rates of unresolved cases equal those of resolved cases:

Response rate = Working residential number resolution rate * Household screening completion rate * Survey interview completion rate.

This rate is the so-called CASRO (Council of American Survey Research Organizations) response rate, or the American Association for Public Opinion Research’s (AAPOR’s) third response rate definition (RR3), with the working residential and eligibility rates of unresolved cases assumed equal to those of resolved cases. We use this definition of the response rate here.
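
As a minimal illustration of this decomposition, the sketch below computes the three component rates from hypothetical call disposition counts; all figures are assumptions chosen for illustration and are not drawn from any CMS survey.

```python
# Hypothetical call disposition counts; illustrative only, not CMS data.
numbers_resolved = 8000        # numbers whose residential status was determined
numbers_unresolved = 2000      # ring-no-answer, always busy, etc.
working_residential = 5000     # resolved numbers found to be working residential
screeners_completed = 3500     # households completing the screener
eligible_households = 3000     # screened households with an eligible respondent
interviews_completed = 1500    # eligible respondents completing the interview

# Component rates, assuming unresolved numbers are working residential and
# eligible at the same rates as resolved numbers (CASRO / AAPOR RR3 convention).
resolution_rate = numbers_resolved / (numbers_resolved + numbers_unresolved)
screening_rate = screeners_completed / working_residential
interview_rate = interviews_completed / eligible_households

response_rate = resolution_rate * screening_rate * interview_rate
print(f"Estimated response rate (CASRO / AAPOR RR3): {response_rate:.1%}")
```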

Even though surveys envisioned in this package focus on communication and marketing issues and are not intended to provide official government statistics, we note that our approaches are based on well-established methods for telephone RDD sampling and data collection. Established operational protocols are in effect that have been shown to minimize sampling and measurement errors in the survey process. For example, we will use a dual frame RDD approach to address coverage issues that have often caused problems in the representativeness of telephone surveys (see, e.g., Blumberg & Luke, 2010). In addition, we will apply responsive design approaches by using survey paradata and related information (e.g., contact patterns, length of interview, number and mode of respondent contact) to manage the survey operations, gain efficiencies, and enhance response rates. We typically monitor real-time information on sample outcomes (e.g., rates of nonresidential telephone numbers, disconnected lines, refusals, completed interviews) to track progress and can make adjustments to enhance survey operational processes. For example, landline and cellular phone samples often have different performance characteristics due to differential rates of nonworking and business numbers. Ongoing monitoring can suggest changes in allocation between landline and cellular calling to optimize data collection efficiency and permit adaptive responses to potential issues with respondent selection, refusal conversion strategies, and related issues. We have also included items in our item inventory that will allow us to address passive non-response (e.g., as related to interest in survey topics).

The methods described above have been shown to yield response rates of at least 20 percent with US consumers and business leaders when the survey is of reasonable length and on a salient, non-threatening topic, as is the case in the present work. This rate is consistent with rates in typical health policy and marketing research survey studies and can be used to establish reasonably representative samples. The following procedures, for consumers, will also encourage response:

  • At least three callbacks at various times, so every case will have a day, night and weekend attempt. We will also be doing refusal conversion attempts.

  • Interviewer training will review refusal avoidance and second calls to dead dispositions.

  • A toll-free number is available at Market Strategies International to answer respondents’ questions.

  • Calls to the toll-free number will be returned to address respondents’ concerns.


Despite the best efforts of the marketing research industry and the survey research community, there is clear evidence of declining response rates in both telephone and face-to-face surveys (see, e.g., NORC, 2007; Peytchev et al., 2009; Kennedy et al., 2019). Unit nonresponse is a source of particular concern because it is often regarded as a boundary condition for nonresponse bias that can limit the utility of survey results for actionable guidance. Although a recent meta-analysis of nonresponse issues in survey research has reinforced the finding that response rate is not generally predictive of nonresponse bias1, there is no doubt that steps taken to assess and limit such biases can result in surveys of higher quality. Unit non-response has two negative consequences for the quality of the estimates derived from the data. First, nonresponse reduces the sample size and, as the number of responses decreases, the variability of survey estimates increases. Second, and more importantly, nonresponse has the potential to cause bias in the estimates. For means and proportions, the bias depends on two factors: the response rate, and the difference in the means or proportions of the respondents and non-respondents. Therefore, bias can be expressed as follows:

Bias = (1 – RR) * (S_r – S_n),


where RR = the unit response rate, S_r = the mean or proportion for respondents, and S_n = the mean or proportion for non-respondents.

Thus, bias increases as the difference in means/proportions between respondents and non-respondents increases, or as the unit nonresponse rate increases. Unfortunately, while the response rate can be calculated, we do not know the mean or proportion for the non-respondents. The best strategies for combating unit non-response bias on a CATI survey, like the ones proposed for consumers in this work, are multiple re-contact attempts for non-responders (as noted above) and a robust non-response weighting scheme. Both will be considered in this work.
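
For illustration only, the short sketch below applies the bias expression to hypothetical values; the response rates and respondent/non-respondent proportions are assumptions chosen to show how the bias scales.

```python
# Illustrative only: hypothetical values, not estimates from any CMS survey.
def nonresponse_bias(response_rate, stat_respondents, stat_nonrespondents):
    """Bias = (1 - RR) * (S_r - S_n), per the expression above."""
    return (1.0 - response_rate) * (stat_respondents - stat_nonrespondents)

# If 60% of respondents but only 50% of non-respondents would report awareness
# of a program, a 20% response rate implies a bias of 0.8 * 0.10 = 0.08.
print(f"{nonresponse_bias(0.20, 0.60, 0.50):.3f}")   # 0.080
# Raising the response rate to 50% cuts the bias to 0.05, holding the gap fixed.
print(f"{nonresponse_bias(0.50, 0.60, 0.50):.3f}")   # 0.050
```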

The potential detrimental effect of unit nonresponse can be reduced through the use of population-based weighting that adjusts not only for under-coverage but also for non-response. This weighting approach controls the weighted sample counts to population totals for characteristics presumed to be correlated with non-response, under-coverage, and/or the survey variables of interest. Analyses for the total population as well as population subgroups based on the resultant survey weights should thus produce accurate and reliable results.

The surveys described here will make use of Census-based population totals for race/ethnicity, gender, age, income, and geography in deriving the survey weights. We expect that these geographic and demographic groups would be most appropriate for ensuring sample representativeness of the population, thereby reducing the potential for bias in the resultant survey estimates.
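
A minimal sketch of this kind of population-based weighting is shown below. It rakes base weights to marginal targets using simple iterative proportional fitting; the sample records, variable names, and target shares are hypothetical stand-ins for the Census-based totals described above.

```python
import pandas as pd

# Minimal raking (iterative proportional fitting) sketch; hypothetical data.
sample = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age":    ["18-44", "45+", "18-44", "45+", "45+", "18-44", "18-44", "45+"],
})
sample["weight"] = 1.0  # start from base (design) weights

targets = {
    "gender": {"F": 0.52, "M": 0.48},        # assumed population shares
    "age":    {"18-44": 0.45, "45+": 0.55},
}

for _ in range(25):  # iterate until the weighted margins stabilize
    for var, shares in targets.items():
        current = sample.groupby(var)["weight"].sum() / sample["weight"].sum()
        ratios = {cat: shares[cat] / current[cat] for cat in shares}
        sample["weight"] *= sample[var].map(ratios)

for var in targets:  # weighted margins now match the assumed population shares
    print(sample.groupby(var)["weight"].sum() / sample["weight"].sum())
```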

In order to assess the above weighting scheme and potential non-response bias, we will compare demographic profiles and income distributions derived from our data against several sources, including those published by the Census Bureau for the Current Population Survey and/or the American Community Survey.

1 Groves RM & Peytcheva E (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72(2), 167-189.

For purposes of estimation, cross-sectional weights will be developed that account for the probability of selection from each sample frame, the eligibility rate within each sample frame, levels of non-response within each sample frame, and finally differential nonresponse by age, gender, and geographic region. Standard errors will be produced using software packages such as SPSS/PASW or Stata that support complex survey statistics, to appropriately account for the survey design (see, e.g., Heeringa et al., 2010).
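
The sketch below illustrates the Taylor-series linearization that underlies such design-based standard errors, under a simplified single-stage, with-replacement assumption. The data are hypothetical; production analyses would rely on the complex survey modules of the packages noted above rather than hand-coded variance formulas.

```python
import numpy as np

# Design-based SE for a weighted proportion via Taylor-series linearization,
# assuming a single-stage, with-replacement design. Hypothetical data only.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500).astype(float)   # e.g., an awareness indicator
w = rng.uniform(0.5, 3.0, size=500)              # final survey weights

n = y.size
p_hat = np.sum(w * y) / np.sum(w)                # weighted proportion
z = w * (y - p_hat) / np.sum(w)                  # linearized values (sum to 0)
var_hat = n / (n - 1) * np.sum(z ** 2)           # with-replacement variance
print(f"Weighted estimate: {p_hat:.3f}  SE: {np.sqrt(var_hat):.4f}")
```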


  4. Testing of Procedures

Pretesting may be done with internal staff, a limited number of external colleagues, and/or customers who are familiar with the programs and products. If the number of pretest respondents exceeds nine members of the public, the Agency will submit the pretest instruments for review under this generic clearance.



  5. Contacts for Statistical Aspects and Data Collection

Each program will obtain information from statisticians in the development, design, conduct, and analysis of customer/partner service surveys, when appropriate. This statistical expertise will be available from agency statisticians or from contractors and the Agency will include the names and contact information of persons consulted in the specific information collection requests submitted under this generic clearance.


Please contact either of the following CMS contacts regarding the statistical and methodological aspects of the design or for agency information:

Hemalgiri Gosai

Social Science Research Analyst, Division of Research

Centers for Medicare & Medicaid Services

7500 Security Blvd.

Baltimore, MD 21244-1850

(410) 786-0000

or


Clarese Astrin

Director, Division of Research

Centers for Medicare & Medicaid Services

7500 Security Blvd.

Baltimore, MD 21244-1850

(410) 786-5424

clarese.astrin@cms.hhs.gov



REFERENCES

The American Association for Public Opinion Research (AAPOR) (2011). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 7th edition. Lenexa, Kansas: AAPOR.

Blumberg, Stephen J. & Luke, Julian V. (2010). Wireless Substitution: Early Release of Estimates from the National Health Interview Survey, July–December 2009. Available at: http://www.cdc.gov/nchs/data/nhis/earlyrelease/wireless201005.pdf.

Curtin, Richard, Presser, Stanley & Singer, Eleanor (2005). Changes in telephone survey non-response over the past quarter century. Public Opinion Quarterly, 69: 87-98.

Groves RM, Fowler, Jr. FJ, Couper, MP, Lepkowski JM, Singer, E & Tourangeau R. (2009). Survey Methodology, 2nd Edition. NY: Wiley.

Groves, Robert M. & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72(2), 167-189.

Halbesleben JR & Whitman MV (2013). Evaluating survey quality in health services research: A decision framework for assessing nonresponse bias. Health Services Research, 48(3): 913-930. Epub 2012 October 10.

Heeringa, Steven G., West, Brady T., & Berglund, Patricia A. (2010). Applied Survey Data Analysis. Boca Raton, FL: CRC Press.

Lepkowski, James M. (1988). “Telephone Sampling Methods in the United States,” pp. 73–98 in R. M. Groves et al. (eds.), Telephone Survey Methodology. New York: Wiley.

NORC. (2007) National Immunization Survey: A user’s guide for the 2006 Public Use Data File.

Patton, MQ (2011). Developmental Evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.

Pew Research Center (2019). Response rates in telephone surveys have resumed their decline. Available at: https://www.pewresearch.org/short-reads/2019/02/27/response-rates-in-telephone-surveys-have-resumed-their-decline/ (Accessed 3/27/2024).

Peytchev, Andy, Baxter, Rodney K., & Carley-Baxter, Lisa R. (2009). Not all survey effort is equal: Reduction of nonresponse bias and nonresponse error. Public Opinion Quarterly, 73(4): 785-806.

The Pew Research Center for the People & the Press (2012). Digital Differences. Available at: http://www.pewinternet.org/Reports/2012/Digital-differences.aspx (Accessed 5/10/2013).


