Supporting Statement B


SSA Beyond Benefits Study (BBS)

OMB No. 0960-NEW


B. Collections of Information Employing Statistical Methods


The objective of the BBS is to understand the needs (e.g., service, medical, and employment) of individuals who, due to medical improvement, have “exited” or are likely to “exit” the Social Security Disability Insurance (SSDI) program, the Supplemental Security Income (SSI) program, or both. This study will provide SSA with information to identify potential interventions and policies to help Exiters and Possible Exiters achieve sustainable, substantial work leading to self‑sufficiency. The study aims to answer three primary research questions:


  1. What are the service, medical, and employment needs to achieve sustainable, substantive employment among individuals who exit SSDI/SSI programs?


  2. What types of services, resources, and interventions that help individuals exiting SSDI/SSI programs become and remain employed should SSA consider testing in a larger study?


  3. What policy recommendations will facilitate substantive and sustainable employment among individuals who exit SSDI/SSI programs?


The study will help answer these questions by collecting data through qualitative in-depth interviews and focus groups; survey responses from a nationally representative sample; and a Motivational Interviewing (MI) Pilot.


  1. Statistical Methodology – The Respondent Universe and Sampling Methods

Westat will randomly select participants from the sample file of Exiters and Possible Exiters that SSA provides. First, we will create a single sampling frame from the master list received from SSA. From this frame, we will draw three different samples – one for qualitative data collection, one for survey data collection, and one for the MI Pilot. To ensure that no eligible individual participates in more than one type of data collection activity and to minimize respondent burden, our statisticians will use Permanent Random Number (PRN) Sampling. This technique produces a simple random sample without replacement.


The scope of work for the BBS requires a minimum of 4,000 total survey participants. Expecting 20 percent nonresponse attrition, we will recruit an initial sample of 5,000 individuals. Based on our previous experiences, we also suggest allowing a reserve sample of 10,000 individuals to allow for additional attrition. If needed, we will release the reserve sample in blocks of 1,000 individuals at a time to reach the total number of 4,000 completes. Drawing a reserve sample and increasing the pool of potential respondents will help mitigate challenges related to locating respondents due to incorrect contact information and failure of respondents and prospective respondents to answer telephone calls.
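To make the release rule concrete, the short sketch below (illustrative only, not Westat's fielding system; the assumption that the response rate applies uniformly to every released case is ours) computes how many 1,000-case reserve blocks would be needed under a given response rate:

```python
TARGET_COMPLETES = 4_000   # minimum completed surveys required
INITIAL_SAMPLE = 5_000     # initial sample release
RESERVE_SIZE = 10_000      # reserve pool, released in blocks
BLOCK_SIZE = 1_000

def blocks_needed(response_rate: float) -> int:
    """Reserve blocks to release, assuming the response rate applies
    uniformly to every released case (a simplifying assumption)."""
    completes = INITIAL_SAMPLE * response_rate
    blocks = 0
    while completes < TARGET_COMPLETES and blocks * BLOCK_SIZE < RESERVE_SIZE:
        blocks += 1
        completes += BLOCK_SIZE * response_rate
    return blocks

print(blocks_needed(0.80))  # 0: the planned 80% response needs no reserve
print(blocks_needed(0.60))  # 2: (5,000 + 2,000) x 0.6 = 4,200 completes
```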


Exhibit 1. Sampling statistics for varying interview sample sizes

| Sample Size | Population Percent | Design Effect | Standard Error | Half-Width of 95% CI | Coefficient of Variation |
|-------------|--------------------|---------------|----------------|----------------------|--------------------------|
| 500         | 50%                | 1.1           | 2.35%          | 4.60%                | 4.69%                    |
| 800         | 50%                | 1.1           | 1.85%          | 3.63%                | 3.71%                    |
| 1,000       | 50%                | 1.1           | 1.66%          | 3.25%                | 3.32%                    |
| 1,333       | 50%                | 1.1           | 1.44%          | 2.82%                | 2.87%                    |
| 1,500       | 50%                | 1.1           | 1.35%          | 2.65%                | 2.71%                    |
| 1,600       | 50%                | 1.1           | 1.31%          | 2.57%                | 2.62%                    |
| 2,000       | 50%                | 1.1           | 1.17%          | 2.30%                | 2.35%                    |

We will first stratify the sample into the primary strata of Short-term Exiters (n=1,000), Long-term Exiters (n=1,000), and Possible Exiters (n=2,000). Next, we will oversample those with a high-scoring likelihood of medical improvement based on the CDR profiling model, ensuring that 75 percent of respondents in each group represent “high-scoring” individuals.


Exhibit 1 shows the sampling statistics calculated for different sample sizes. We set the values for the population percent and the design effect conservatively. We assume that the population estimate of interest is a proportion of 0.5 (50%) because this gives the maximum standard error, and therefore the maximum (most conservative) margin of error, which is the half-width of the 95% confidence interval. The real design effect is usually smaller, but we chose 1.1 to be conservative. The exhibit shows that the planned sample sizes will provide sufficient statistical power for comparisons across all three groups as well as comparisons between the “high-scoring” and “low- and medium-scoring” populations.
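For illustration, each row of Exhibit 1 follows from the standard design-adjusted formulas: SE = sqrt(deff × p(1 − p)/n), half-width = 1.96 × SE, and CV = SE/p. The short Python sketch below (function and variable names are ours) reproduces the exhibit’s values:

```python
import math

def sampling_stats(n: int, p: float = 0.5, deff: float = 1.1):
    """Design-adjusted standard error, 95% CI half-width, and
    coefficient of variation for an estimated proportion p."""
    se = math.sqrt(deff * p * (1 - p) / n)  # standard error with design effect
    half_width = 1.96 * se                  # margin of error (95% CI half-width)
    cv = se / p                             # coefficient of variation
    return se, half_width, cv

for n in (500, 800, 1000, 1333, 1500, 1600, 2000):
    se, hw, cv = sampling_stats(n)
    print(f"n={n:>5}: SE={se:.2%}  half-width={hw:.2%}  CV={cv:.2%}")
# n=  500: SE=2.35%  half-width=4.60%  CV=4.69%  (matches Exhibit 1)
```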



Exhibit 2. Power calculations for comparing the two strata with sample sizes of 1,000

|                                | Sample Size | Population Percent (Null) | Design Effect | Standard Error (Null) | Critical Region, 95% CI | Population Percent (Alternative) | Standard Error (Alternative) | Power  |
|--------------------------------|-------------|---------------------------|---------------|-----------------------|-------------------------|----------------------------------|------------------------------|--------|
| Group 1                        | 1,000       | 50%                       | 1.1           | 1.66%                 |                         | 50.00%                           | 1.66%                        |        |
| Group 2                        | 1,000       | 50%                       | 1.1           | 1.66%                 |                         | 56.60%                           | 1.64%                        |        |
| Difference between proportions |             |                           |               | 2.35%                 | 4.60%                   | 6.60%                            | 2.33%                        | 80.46% |


Exhibit 2 presents power calculations for comparing two strata with sample sizes of 1,000 each. The null hypothesis is that the estimates for both groups are the same (both 50%), and the alternative is that they differ (a difference of 6.6 percentage points). As above, we set the design effect to 1.1 and the population estimate to 50% to maximize the standard errors. We will be able to detect a difference of 6.6 percentage points between the two strata with 80 percent power (using a two-sided test of the null hypothesis of no difference at the 95% confidence level).
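For illustration, the Exhibit 2 power figure can be reproduced from the design-adjusted standard errors: the critical value is 1.96 times the null-hypothesis standard error of the difference, and power is the probability that the observed difference exceeds it under the alternative. A minimal Python sketch (names are ours):

```python
import math
from statistics import NormalDist

def power_two_proportions(n1, n2, p1, p2, deff=1.1, alpha=0.05):
    """Power of a two-sided test of no difference between two proportions,
    using the conservative 50% null and a design effect, as in Exhibit 2."""
    z = NormalDist().inv_cdf(1 - alpha / 2)                    # 1.96 for alpha = 0.05
    se_null = math.sqrt(deff * (0.25 / n1 + 0.25 / n2))        # 2.35% in Exhibit 2
    se_alt = math.sqrt(deff * (p1*(1-p1)/n1 + p2*(1-p2)/n2))   # 2.33% in Exhibit 2
    critical = z * se_null                                     # 4.60% in Exhibit 2
    # Probability of rejecting the null when the true difference is p2 - p1.
    return 1 - NormalDist(mu=p2 - p1, sigma=se_alt).cdf(critical)

print(f"{power_two_proportions(1000, 1000, 0.50, 0.566):.2%}")  # ~80.46%
```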

We will stratify the sample by program type (SSDI versus SSI), and by recommended determinants of self-sufficiency, such as age group, impairment type, and urban versus rural locality. We will revisit any of these strata and discuss any necessary changes with SSA based on findings from qualitative data collection activities.

The Permanent Random Number (PRN) Sampling procedure prevents overlap between multiple samples taken from a single sampling frame (see, for example, Ohlsson, 2011; "Method: Sample Co-ordination," 2014). The selection process begins by assigning a unique random number between 0 and 1 to each unit in the frame. We then arrange the units in increasing order of their PRNs and select a start point between 0 and 1. From that point, we select units in increasing PRN order until we reach the desired sample size. If we reach the maximum PRN before meeting the desired sample size, we continue the sample by returning to the smallest PRN. The starting point for the next sample is the first PRN after the last PRN belonging to the last selected unit in the previous sample. This technique produces a simple random sample without replacement.
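As an illustration of the procedure (a minimal sketch, not production code; the frame and seed below are hypothetical), the following Python function assigns PRNs, orders the frame, and draws consecutive non-overlapping samples:

```python
import random

def prn_sample(frame, k, start=None, seed=12345):
    """Permanent Random Number sampling sketch: each unit keeps the same
    U(0,1) number across draws (fixed seed), units are ordered by PRN,
    and k consecutive units are taken from a start point, wrapping to
    the smallest PRN if the end of the ordering is reached."""
    rng = random.Random(seed)                  # fixed seed => "permanent" numbers
    prns = {unit: rng.random() for unit in frame}
    ordered = sorted(frame, key=prns.get)      # increasing PRN order
    if start is None:
        start = rng.random()
    # First unit whose PRN is at or past the start point (wrap to 0 if none).
    i = next((idx for idx, u in enumerate(ordered) if prns[u] >= start), 0)
    sample = [ordered[(i + j) % len(ordered)] for j in range(k)]
    # The next sample starts at the first PRN after the last selected unit.
    next_start = prns[ordered[(i + k) % len(ordered)]]
    return sample, next_start

frame = [f"unit{n:03d}" for n in range(100)]
survey, s1 = prn_sample(frame, 40)             # first sample
qual, _ = prn_sample(frame, 20, start=s1)      # second sample, no overlap
assert not set(survey) & set(qual)
```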

In this case, we want stratified random sampling; therefore, we will sort the sampling frame based on various characteristics, and we will then create multiple sampling strata. Within each one of these sampling strata, we will apply PRN sampling. We will recruit Possible Exiters, Short-term Exiters, and Long-term Exiters using stratified random sampling. This method allows us to interview enough Exiters and Possible Exiters with varying characteristics that may influence barriers and facilitators of return to work after benefit cessation.

SSA is particularly interested in the characteristics and needs of Exiters and Possible Exiters who received a high score on probability of disability benefits cessation resulting from a scheduled full medical review. Therefore, we will oversample Possible Exiters with a high score, relative to those with low and medium scores. We will also stratify the sample by age group, program type (SSDI, SSI, or both), impairment type, and geography (urban and rural). We plan to oversample younger exiters and possible exiters, SSDI recipients, and those living in rural areas, as well as those with psychological or musculoskeletal diagnoses. We show an illustrative example in Exhibit 3.

Exhibit 3. Illustrative example of sampling plan for Exiters and Possible Exiters by strata

| Stratum | Possible Exiters (n=2,000): Sample size | Possible Exiters: Target completed interviews | Short-term Exiters (n=1,000): Sample size | Short-term Exiters: Target completed interviews | Long-term Exiters (n=1,000): Sample size | Long-term Exiters: Target completed interviews |
|---|---|---|---|---|---|---|
| Age categories | | | | | | |
| 18-29 | 1,000 | 800 | 500 | 400 | 500 | 400 |
| 30-39 | 600 | 500 | 313 | 250 | 313 | 250 |
| 40-49 | 600 | 500 | 313 | 250 | 313 | 250 |
| 50-64 | 600 | 200 | 125 | 100 | 125 | 100 |
| Benefit status | | | | | | |
| SSI and concurrent | 1,000 | 800 | 500 | 400 | 500 | 400 |
| SSDI | 1,500 | 1,200 | 750 | 600 | 750 | 600 |
| Geography | | | | | | |
| Urban | 500 | 400 | 250 | 200 | 250 | 200 |
| Rural | 2,000 | 1,600 | 1,000 | 800 | 1,000 | 800 |
| Primary diagnosis | | | | | | |
| Psychological (Affective, Schizoaffective, or Anxiety disorders) | 813 | 650 | 375 | 300 | 375 | 300 |
| Musculoskeletal and back disorders | 813 | 650 | 250 | 200 | 250 | 200 |
| Other | 875 | 700 | 625 | 500 | 625 | 500 |
| Likelihood to exit | | | | | | |
| High | 2,000 | 1,600 | -- | -- | -- | -- |
| Medium or low | 500 | 400 | -- | -- | -- | -- |


For each specific qualitative data collection activity, we will create subsamples with approximately three times the number of targeted completes to account for nonresponse, and we will randomly sample potential participants from these subsamples.

Exhibit 4 summarizes our recruitment process for each of the other stakeholder focus groups and in-depth interviews. In each case, we will develop the final sampling criteria in collaboration with SSA leadership.

Exhibit 4. Summary of stakeholder sampling

1. Motivational Interviewers

  • Participants include professionals trained in MI techniques who have at least two years of experience providing MI to participants with disabilities.

  • Our sample includes up to five interviewers featured in Westat’s proposal, as well as vocational rehabilitation specialists and employment counselors recruited through our partners at CSAVR and NENA.

  • The mix of interviewers will be diverse in terms of experience and geographic location within the United States.

2. Service Providers

  • Participants will be experienced in working with individuals with disabilities. The group will come from a variety of states and include a variety of service providers.

  • Our sample will include service providers nominated by our partners at CSAVR and NENA, our provider networks from SED, and the IPS Learning Community.

3. State Leaders

  • Participants are employed by state vocational agencies and have experience in directing employment policy specific to individuals with disabilities.

  • Our partners at CSAVR and NENA will help nominate leaders.

  • Selection will include participants from states with different policies, such as states with and without CMS 1115 waivers to pay for employment services and states with varying strategies for blending and braiding existing VR and other public funds to pay for employment services.

4. Agency Leaders

  • Participants will be leaders and decision makers who work for or with state vocational rehabilitation agencies.

  • Our partners at CSAVR and NENA will help nominate leaders.

  • Agency leaders will include participants from states with different policies and will vary in their agency responsibilities.


Finally, for the MI Pilot, we will create and randomly sample from a subsample of 2,000 appropriate individuals, which helps account for nonresponse. This also accounts for the dropouts mentioned in Supporting Statement A.


  2. Procedures for Collecting the Information

Westat requires pre-collection questions for all study participants before participation in focus groups, interviews, or the survey. The pre-collection questions will review the Privacy Act Statement, confirm receipt of study materials, collect verbal consent, collect permission to send text message reminders along with a cell phone number, collect recording permission (focus groups only), and request a mailing address for sending the incentive payment. Westat will program the pre-collection questions in a web survey. Potential participants will be able to complete these questions online themselves or with an interviewer over the telephone.

Survey Procedures

The initial contacts with sampled survey participants will encourage them to go to the study website to complete the web version of the survey questionnaire. We recognize that a respondent’s age and education level may affect whether they use the website. Thus, to reduce bias in recruitment, we will make subsequent contacts by telephone and offer the computer-assisted telephone interviewing (CATI) administration mode. This design minimizes some of the barriers that often make survey participation challenging for those with cognitive and physical impairments, and it is the approach Westat has found most effective for similar population-level surveys. The contact protocol for the main survey includes four main steps.

  1. Initial Letter. We will mail all sampled individuals an initial letter inviting them to complete a web survey. These letters will use SSA letterhead so respondents understand that this is a legitimate survey sponsored by SSA. To maximize response rates from this initial contact attempt, the letter will include a $2 cash pre-incentive. Prepaid incentives of this size have been shown to significantly increase response to both web surveys (e.g., Messer & Dillman, 2011) and telephone surveys (Cantor et al., 2008). The meta-analysis by Mercer et al. (2015) found that incentives of this size increased response rates by approximately 11 to 16 percentage points, depending on the mode.

  2. Reminder Postcard. At week 2 following the initial letter, we will mail nonrespondents a folded postcard to encourage participation. The information in the postcard will mirror the language included in the initial invitation letter. We will use a folded and sealed postcard because it will include the survey URL and the unique password to access the survey.

  3. Reminder Letter. At week 3 following the reminder postcard, we will mail nonrespondents a letter to encourage participation.

  4. Phone Calls to Nonrespondents. Interviewers from Westat’s Telephone Research Center will initiate phone calls to survey nonrespondents at the start of week 3 following the initial letter. If the respondent states they are not interested in the survey, Westat will stop calling the respondent.

Both our CATI interviewers and respondents who complete the survey via the website will access a web version of the survey instrument allowing the study to have a single database that houses all completed surveys. This design will facilitate easy access to both the English and Spanish versions of the instrument for both telephone interviewers and web respondents. The telephone instrument version of the survey will mirror the web version, with some format adaptations appropriate for interviewer administration and the addition of soft warnings to ensure interviewers record answers for each survey question. We will integrate the instrument with the survey management system and the CATI scheduler, described in the next section.

Introductory scripts will display within the CATI instrument, informing the interviewer of exactly what to read to the sampled individuals during the consent process, thus ensuring individuals receive the necessary information about the study purpose and about the voluntary nature of their participation.

All qualitative data collection occurs remotely, via telephone or video call. This poses challenges in terms of building rapport, ensuring access to technology, and creating the conditions necessary for smooth data collection. However, Westat has conducted hundreds of remote interviews and remote focus groups and brings this experience to BBS. We discuss the procedures below, by type of data collection.

Interview Data Collection Procedures

We will conduct interviews virtually via video call. In our experience, it is easier to build rapport with respondents over video call, as the interviewer can respond to nonverbal cues. However, we understand that not all respondents feel comfortable with video calls or have access to video call technology. Therefore, we ask participants whether they would be comfortable conducting the interview via video call. If the participant is willing and has access to a computer, tablet, or other video-capable internet device, we schedule the interview for a video call using a secure, easy-to-use platform such as Microsoft Teams. If the participant is willing to have the interview via video but lacks access to an internet connection or a video-capable device, the study team mails them a smartphone with a data plan (and internet hot spot, if needed) along with instructions for use. The package includes a paid return envelope for the respondent to mail the phone back after the interview is complete. If the respondent is not comfortable using a videoconferencing platform, we hold the interview by phone.

For video interviews, we offer the respondent an opportunity before their interview to walk through the process of using the videoconferencing platform. We schedule these walk-throughs either the day before the interview or the day of the interview, and we explain the process of connecting to the website and using the video call features. These walk-throughs are particularly valuable for interviews with Exiters and Possible Exiters.

For both phone and video interviews, the interviewer starts the interview by introducing him or herself and the study. The interviewer explains the purpose of the study and the data collection effort and goes over informed consent materials, including how long the interview will take; the participant’s rights; the risks and benefits of the study; plans for maintaining the participant’s privacy; and how the participant can get more information about the study if desired. The interviewer answers all questions the participant has and emphasizes that the participant can ask questions at any time during the interview, skip any question they do not wish to answer, or end the interview at any time without any impact on their current or future SSA benefits.

The interviewer then asks if the participant consents to the interview. If the participant declines, the interviewer ends the interview. If the participant agrees, the interviewer asks if the participant agrees to have the interview audio recorded. The interviewer explains the purpose of the recording and that we will protect the participant’s privacy; the interviewer also explains that the interview can still proceed if the participant does not consent to be audio recorded.

If the participant declines to be audio recorded, the interviewer explains that he or she will be taking notes on the interview, and the interviewer moves on to the topics of the interview. If the participant agrees to be audio recorded, the interviewer turns on the recorder and asks the participant, for the recording, whether they agree to participate in the interview and whether they agree to be recorded. The interviewer then moves on to the interview topics. After the last question, the interviewer stops the audio recorder.

After the interviewer asks all their questions, they give the participant another opportunity to ask questions about the study and thank them for their participation. We send the incentive along with an additional email or letter thanking the participant.

Focus Group Data Collection Procedures

We follow similar procedures for focus group data collection. One key difference is modality. For the focus groups, participants must be willing to be part of a group video call. As with interviews, we offer to send the participant a video-enabled device and hot spot if they lack one, but if they are not willing to participate via video call and to have the call recorded, we thank them but do not include them in the focus group. Our experience is that when one or more group members are not on camera while others are, the dynamic between focus group members is strained.

As with interviews conducted via video call, we hold all focus groups on Microsoft Teams, a secure, easy-to-use platform. We also ask all participants to find a quiet, private location from which to connect to the focus group. We offer all participants the opportunity to do a private walk-through of the platform the day before or the day of the focus group with a member of the study team. During the walk-through, the study team member highlights aspects of the platform that are particularly relevant for the focus group, including options for “raising hands” to indicate that a participant wants to talk, as well as the chat box.

We designate one member of the study team as the technological liaison for focus groups. That team member conducts the walk-throughs with participants ahead of the focus group and helps with all last-minute technological support at the focus group interview. The tech liaison, the focus group facilitator, and an assistant to the facilitator will all join the focus group 30 minutes ahead of the scheduled time to help all participants as they join. We encourage participants to join early.

As each participant joins the platform, the facilitator or assistant will ask if they prefer to change their display name before entering the main group with the other focus group members. If so, we will ask them what they want it changed to. If not, we will display their first name and last initial only. We use no last names in the focus groups. Once the participant has chosen a display name, the participant joins the main group.

Once the team obtains consent from all participants, the facilitator introduces him or herself and the assistant and the tech liaison. The facilitator introduces the study and data collection effort: the purpose of the focus group, how long it should last, and how we plan to use the information they provide. The facilitator also establishes the rules of the focus group. We instruct participants to keep private all discussion from the focus group. This is particularly important for the focus groups with Exiters and Possible Exiters. We cannot enforce confidentiality, but we can emphasize to participants that we are trusting them to maintain the confidentiality of their fellow participants.

The tech liaison then walks through some of the basic features of the focus group platform, including how to “raise your hand,” how to use the chat box and how to mute oneself. Participants can practice with these features. The tech liaison is available throughout the focus group to address any technological difficulties.

Our approach to moderating the focus group is to take a light hand whenever possible. We aim to foster discussion and to direct the discussion only as needed to ensure that we cover all topics and that each participant gets the chance to contribute. To this end, we may mute other participants when one participant is talking, to minimize interruptions, or break in to ensure that we address questions or comments made through the chat box.

At the conclusion of the focus group, we end the recording, and we thank all participants and answer any remaining questions. We give all participants the contact information for the study if they have any follow-up questions. After the focus group, we send out any incentives and send an additional letter to thank participants.

MI Pilot Recruitment and Screening

From a potential list of 2,000 Exiters, Westat will send an introductory letter to 200 potential MI Pilot participants, along with consent information and a one-to-two-page brochure describing the MI Pilot. Within 7 days, we will follow up with a phone call from a Westat recruiter who can address questions and provide further information about the study. The MI Pilot study population consists of 50 Exiters. If someone drops out of MI after one session, Westat selects a replacement individual from the sample list.

MI Pilot Data Collection

The MI Pilot will consist of the following activities: pre- and post-MI telephone surveys with Exiters; up to six MI sessions conducted by five Motivational Interviewers; supervision sessions conducted by a Senior Motivational Interviewer/Trainer; independent fidelity assessments; and a post-pilot focus group with the Motivational Interviewers.

Pre- and Post-Surveys with Exiters

Westat Research Assistants will administer the Employment Change Assessment Scale, a 28-item scale that assesses how Exiters might feel when thinking about employment or getting a job, with Exiters before and after they complete their MI appointments.

MI Sessions

Five Motivational Interviewers will conduct up to six MI sessions with each of the 50 Exiters; each session will take between 45 minutes and 1 hour via Microsoft Teams or telephone. To minimize attrition, we aim to schedule MI appointments no more than 10 days in advance and to send reminder calls and texts (with permission) the day before and the day of each appointment. If necessary, we are prepared to send pre-paid phones and hot spots to Exiters who may not have access to a phone or an internet connection. After an Exiter completes the first MI session, we will send a $40 Visa gift card by mail. For each subsequent completed session, we will provide a $25 Visa gift card, for a total of $165 if Exiters complete all six sessions.

During each MI session, Motivational Interviewers will complete the Stages of Change Screener for Seeking Competitive Employment with Exiters at the start of the session to assess Exiters’ motivation for seeking employment or advancing their career. Depending on an Exiter’s stage of change, Motivational Interviewers will complete appropriate MI worksheets to guide the Exiter through the stages of change related to seeking employment or career advancement. Following each session, Motivational Interviewers will complete the Interviewer Session Log to capture how they conducted the session, the types of referrals they made during the session (e.g., whether they referred the Exiter to IPS Supported Employment or Vocational Rehabilitation), any non-employment needs or barriers they discussed in the session, other referrals, and the Exiter’s progress.

Supervision Sessions and Independent Fidelity Assessments

The Senior Motivational Interviewer will conduct between two and eight supervision sessions with each of the five Motivational Interviewers. We will collect progress notes from the supervision sessions to understand barriers and facilitators for providing MI to this study population. Additionally, the Senior Motivational Interviewer will use the Motivational Interviewing Treatment Integrity Coding Manual 4.2.1 (MITI) to measure fidelity. A subject matter expert (i.e., Jon Larson, a Senior Motivational Interviewer) will make fidelity assessments based on MI session recordings.

Westat will provide the Senior Motivational Interviewer with a random sample of ten percent of the MI session recordings (approximately 20-30 assessments), including two MI sessions from each Motivational Interviewer. Following the MITI, an observer will score each recording based on the presence or absence of certain behaviors. Behavior counts capture specific MI skills the counselor uses during the session. Our Senior Motivational Interviewer, Jon Larson, will assign adherent or non-adherent codes to each behavior. Adherent codes include affirmation, seeking collaboration, and emphasizing autonomy; non-adherent codes include the counselor persuading and/or confronting the client, which are not MI skills.
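To make the coding scheme concrete, the sketch below tallies adherent and non-adherent behavior codes for one recorded session. The code labels are hypothetical stand-ins; the authoritative behavior definitions and scoring rules are those in the MITI 4.2.1 manual.

```python
from collections import Counter

# Hypothetical labels standing in for MITI-style behavior codes.
ADHERENT = {"affirm", "seeking_collaboration", "emphasizing_autonomy"}
NON_ADHERENT = {"persuade", "confront"}

def summarize_session(behavior_codes):
    """Count adherent vs. non-adherent codes assigned to one session."""
    counts = Counter(behavior_codes)
    adherent = sum(c for code, c in counts.items() if code in ADHERENT)
    non_adherent = sum(c for code, c in counts.items() if code in NON_ADHERENT)
    return {"adherent": adherent, "non_adherent": non_adherent, "counts": counts}

# Example: codes a reviewer might assign while listening to a recording.
session = ["affirm", "persuade", "affirm", "emphasizing_autonomy", "confront"]
print(summarize_session(session))  # {'adherent': 3, 'non_adherent': 2, ...}
```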

Post-Pilot Focus Group with Motivational Interviewers

At the end of the MI Pilot, Westat will conduct a one-hour focus group with the five Motivational Interviewers. The goal of this focus group is to gather data on the motivational challenges Exiters face in obtaining and maintaining employment, advancing their careers, and working above the substantial gainful activity (SGA) level. We will examine the process of conducting MI and whether the number of sessions and their content may be useful for this population.


  3. Methods to Maximize Response Rates

We will follow established best practices in survey fielding to increase response rates while keeping participant burden as low as possible. Previous studies have demonstrated the positive effect of respondent reimbursement on survey response rates, and studies have also shown that reminders in web-based data collection increase response rates. We offer participants reimbursement for their participation in the study, and we will also send text reminders to encourage their participation throughout the study. Potential survey respondents will receive a $2 pre-incentive during recruitment. Prepaid incentives of this size have been shown to significantly increase response to both web surveys (e.g., Messer & Dillman, 2011) and telephone surveys (Cantor et al., 2008). The meta-analysis by Mercer et al. (2015) found that incentives of this size increased response rates by approximately 11 to 16 percentage points, depending on the mode. Exiters and Possible Exiters who participate in the survey, in-depth interviews, or focus groups, and Motivational Interviewers who participate in the focus group, will also receive a $40 Visa gift card. Lastly, participants in the MI Pilot will receive a $40 Visa gift card after their first session, followed by a $25 gift card for each subsequent session, for a total of $165 if they complete all six sessions.


During recruitment for the survey, we will also send a reminder postcard to nonrespondents two weeks after initial contact and a reminder letter three weeks after initial contact.


To boost response rates further, we will send the initial mailing in English and Spanish to all sampled individuals. We will then send both English and Spanish versions of the reminder postcard and letter.


Our mixed-mode data collection strategy, which is based on past and recent survey experiences with similar populations and uses a modified Dillman method (Dillman, Smyth, & Christian, 2009), is also designed to maximize response rate.


Westat’s proprietary survey delivery system integrates a customized survey software application and management system to accommodate the large volume of respondents and the simultaneous administration of multiple surveys. When we begin the CATI phase of data collection, we will load telephone numbers into the CATI database and make them available to interviewers through the call scheduler. We will make telephone contact attempts on weekdays and weekends in the morning, afternoon, and evening at different times to increase the probability of making contact with respondents.


Westat’s CATI scheduler automatically dials numbers for the data collectors so they do not have to dial manually, thereby reducing interviewer dialing time and eliminating manual dialing errors. We use this automated dialer only to eliminate the manual dialing process; a live interviewer will always be on the line to speak to whoever answers the call.


The scheduler uses an algorithm that ensures cases are called at appropriate times, along with programmed rules that minimize the number of calls to any given respondent and reduce nonresponse. For example, Westat divides the week into day and time “slices” through which the system moves each case in a specified pattern of call attempts. If the first call attempt is in the evening and results in no answer, the scheduler can automatically set the next call attempt for another time of day and a different day of the week. Our computerized CATI scheduler will make at least seven attempts to contact each selected respondent (or the maximum number allowed by OGC and OMB).
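The sketch below illustrates one way such slice rotation can work; the slice definitions are hypothetical (Westat’s production scheduler is proprietary), but the principle matches the text: each successive attempt for a case moves to a different day/time slice.

```python
# Hypothetical day/time slices; the production scheduler's definitions differ.
SLICES = [
    ("weekday", "morning"), ("weekday", "afternoon"), ("weekday", "evening"),
    ("weekend", "morning"), ("weekend", "afternoon"), ("weekend", "evening"),
]
MAX_ATTEMPTS = 7  # minimum number of contact attempts noted in the text

def next_slice(attempt_number: int) -> tuple:
    """Rotate successive attempts through different day/time slices."""
    return SLICES[attempt_number % len(SLICES)]

for attempt in range(MAX_ATTEMPTS):
    day_type, time_of_day = next_slice(attempt)
    print(f"attempt {attempt + 1}: {day_type} {time_of_day}")
```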


Westat trains its employees in the best methods to maximize response rates, work efficiently with the sample of participants, and respond to project needs.


  4. Tests of Procedures

Westat conducted a cognitive test of new and revised survey items to assess respondent understanding of the questions, as well as a pretest of the survey to confirm the respondent burden estimate and assess question flow. We used the results of these tests to refine the survey items and minimize burden.


  5. Statistical Agency Contact for Statistical Information

For further information, contact the following staff members:


Mustafa Karakus, Ph.D., Principal Investigator and Project Director

Telephone: 240-370-4907

Email: mustafakarakus@westat.com


Erika Bonilla, M.S., Project Manager

Telephone: 301-610-4879

Email: erikabonilla@westat.com


Jeffrey Taylor, Ph.D., Lead Statistician

Telephone: 301-212-2174

Email: JeffreyTaylor@westat.com


Marion McCoy, Ph.D., Contracting Officer Technical Representative

Social Security Administration

Telephone: 240-498-3727

Email: Marion.mccoy@ssa.gov


References


Cantor, D., O’Hare, B., and O’Connor, K. (2008). The use of monetary incentives to reduce non-response in random digit dial surveys. In J.M. Lepkowski, C. Tucker, J.M. Brick, E. DeLeeuw, L. Japec, P.J. Lavrakas, M.W. Link, and R.L. Sangster (Eds.), Advances in telephone survey methodology (pp. 471-498). New York, NY: Wiley.


Dillman, D.A., Smyth, J.D., and Christian, L.M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: John Wiley & Sons.


Mercer, A., Caporaso, A., Cantor, D., and Townsend, R. (2015). How much gets you how much? Monetary incentives and response rates in household surveys. Public Opinion Quarterly, 79(1), 105-129. https://doi.org/10.1093/poq/nfu059


Messer, B.L., and Dillman, D.A. (2011). Surveying the general public over the Internet using address-based sampling and mail contact procedures. Public Opinion Quarterly, 75(3), 429-457. https://doi.org/10.1093/poq/nfr021


