Pathways for Advancing Careers and Education (PACE) – Third Follow-Up Data Collection
OMB Information Collection Request
OMB No. 0970-0397
Supporting Statement
Part B
Submitted by:
Nicole Constance
Office of Planning, Research and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Table of Contents
B.1 Respondent Universe and Sampling Methods
B.1.1 PACE Programs and Study Participants
B.1.4 Estimation Procedures for PACE Analyses
B.1.5 Degree of Accuracy Required
B.2 Procedures for Collection of Information
B.2.1 Preparation for the Interviews
B.2.2 In-Person Interviewing
B.2.3 Procedures with Special Populations
B.3 Methods to Maximize Response Rates and Deal with Nonresponse
B.3.1 Participant Contact Updates and Locating
B.3.2 Tokens of Appreciation
B.3.3 Sample Control Procedures to Maximize the Response Rate
B.3.4 Nonresponse Bias Analysis and Nonresponse Weighting Adjustment
B.5 Individuals Consulted on Statistical Aspects of the Design
Appendices
Appendix A: Career Pathways Framework
Appendix B: PACE Program Summaries
Appendix C: PACE 72-Month Follow-up Survey
Appendix D: PACE 72-Month Follow-up Survey – Sources
Appendix E: PACE Federal Register Notice
Appendix F: PACE Previously Approved Participant Contact Update Form
Appendix G: Contact Update Letters
G1: PACE Previously Approved Contact Update Letter for 36-Month Survey
G2: PACE 72-Month Follow-up Contact Update Letter
Appendix H: PACE Previously Approved Informed Consent Form
Appendix I: PACE 72-Month Participant Newsletter
Appendix J: Survey Flyer
J1: PACE Previously Approved Flyer for 36-Month Survey
J2: PACE 72-Month Flyer
Appendix K: Email Text
K1: PACE Previously Approved Email Text
K2: PACE 72-Month Email Text
Appendix L: Survey Advance Letters
L1: PACE Previously Approved 36-Month Survey Advance Letter
L2: PACE 72-Month Advance Letter
Appendix M: Previously Approved PACE 36-month Follow-up Survey
Appendix N: Contact Update Call Script
Appendix O: Comments from Federal Register Notice and ACF Response
O1: Comments from Federal Register Notice
O2: ACF Response to Comments
This document presents Part B of the Supporting Statement for the 72-month follow-up data collection activities that are part of the Pathways for Advancing Careers and Education (PACE) evaluation sponsored by the Office of Planning, Research and Evaluation in the Administration for Children and Families (ACF) in the U.S. Department of Health and Human Services (HHS).
For the 72-month follow-up data collection, the respondent universe for the PACE evaluation includes a sample of PACE study participants.
The PACE study recruited organizations that had innovative career pathways programs in place and could implement random assignment tests of these programs. Program selection began with conversations between key stakeholders and the PACE research team. Each program selected into PACE satisfied criteria in three categories:
Programmatic criteria that required a fit with the career pathways framework, including assessments, basic skills and occupational instruction, support-related services, and employment connections;
Technical criteria that emphasized the statistical requirements of the evaluation design, such as programs with the capacity to serve a minimum of 500 participants and to recruit a minimum of 1,000 eligible applicants over a two-year enrollment period; and
Research capacity criteria that addressed the site’s ability to implement an experimental evaluation.
Additionally, ACF required that three of the programs be Health Profession Opportunity Grant (HPOG) recipients.1
The nine selected programs all promoted completion of certificates and degrees in high-demand occupations and, to this end, incorporated multiple steps on the career ladder, with college credit or articulation agreements available to completers of the lower rungs (see Appendix A for a depiction of the career pathways theory of change). While varying in specific strategies and target populations, all nine programs provided some level of the core career pathways services (assessment, instruction, supports, and employment connections), although the emphasis placed on each service varied by program. Appendix B provides summaries of the nine PACE programs.
The PACE program selection process spanned more than two years and included detailed assessments of more than 200 potential programs. After winnowing down the list of prospective programs based on a combination of factors, such as the intervention, its goals, the primary program components, program eligibility criteria, and the number of participants enrolled annually, the PACE team recruited nine promising career pathways programs into the study.
The universe of study participants was low-income adults (age 18 or older) who were interested in occupational skills training and who resided in the geographical areas where PACE sites were located. The sample size in eight of the nine sites ranges from 500 to 1,220, with most near 1,000, divided equally between the treatment group, which is offered the intervention, and the control group, which is not. Assuming that all characteristics are the same across the treatment and control groups, with the only difference being access to the program, the control group shows what participant outcomes would have been if the program were not available. The ninth site has a sample of 2,544 individuals across eight sub-sites, with 1,669 in the treatment group and 875 in the control group.
PACE study participants will be interviewed using the data collection instrument for which this OMB package requests approval (the 72-month follow-up survey). The respondent universe for the 72-month survey is the universe of study participants in both the treatment and control groups at a subsample of the nine PACE sites. OPRE does not have an adequate budget to conduct 72-month follow-up interviews with the total sample. Instead, a subsample of the nine sites will be purposively selected as described in Section B.1.2, and then all of the original PACE study participants in each of the subsampled sites will be designated for follow-up.
PACE participants were recruited by the individual programs. Program staff recruited individuals, determined eligibility, and, if the individual was determined eligible, obtained informed consent from those who volunteered to be in the study. For those who consented, program staff collected baseline data, which included the Basic Information Form (BIF) and Self-Administered Questionnaire (SAQ), and the research team collected follow-up data with the 15-month follow-up survey and the 36-month follow-up survey. OMB approved these forms under previous requests for approval (OMB No. 0970-0397). Program staff entered information from the BIF into a web-based system developed specifically for the evaluation. Staff then used the system to conduct random assignment to the treatment or control group. Those assigned to the treatment group were offered the program's services, while those assigned to the control group could not participate in the program but could access other services in the community. Exhibit B-1 summarizes the process described above.
Exhibit B-1: PACE Study Participant Recruitment and Random Assignment Process
For the 15-month and 36-month surveys, all PACE participants were targeted for the survey. Additionally, for every PACE participant with minor children, a focal child was selected for a child outcome module at 36 months. Program impacts on long-term earnings and college degree attainment will be assessed at all PACE sites using administrative data. Program impacts on broader measures of economic independence and family well-being will be measured with the 72-month survey. Given budget constraints, ACF has decided to sample at most 6,000 PACE participants for the 72-month survey. With an estimated 74 percent response rate, this will result in approximately 4,400 completed interviews. The 72-month survey sample has not yet been selected, but the plan is to select all participants at a subset of PACE sites where the early-term results are promising enough that additional follow-up to estimate long-term impacts is warranted. The site selection plan is described below, followed by a description of how the focal children were selected for the child outcome module. At 72 months, focal child sample selection will be based on whether a focal child was selected at the time of the 36-month follow-up. That is, if a participant had a focal child selected for the child outcome module at 36 months, that same child will be selected for the child outcome module at 72 months.
This section describes the plan for deciding in which sites to administer the 72-month survey to participants.
The research team will prioritize the following factors when considering individual sites:
(1) The likelihood of long-term impacts on the outcomes measured in the 72-month survey such as credential attainment, poverty, welfare dependence, economic self-sufficiency, health insurance, being employed with benefits, food security, mental health, debt, resilience to financial shocks, and child well-being;
(2) The sufficiency of power to reliably detect these impacts should they occur;
(3) Diversity across sites (e.g., organization type: college-, workforce board-, or nonprofit-based programs; intervention type: college degree programs or occupational training) if there are more viable candidate sites based on the first two criteria than the sample size allows.
The research team will apply these criteria site-by-site.
Likelihood of Long-Term Impacts on Broad Measures of Economic Independence and Family Well-being: Based on its cumulative experience, the research team came to the judgment that factors such as the plausibility of a program’s logic model or the quality of implementation by themselves are poor predictors of impacts in the long term. Instead, the study team will primarily use an empirically driven approach that looks for shorter-term impacts on earnings to suggest a program is likely to produce long-term impacts on broad measures of economic independence and family well-being.
The study team will estimate shorter-term impacts on earnings with data from the National Directory of New Hires (NDNH). NDNH data are preferred over the 15-month survey for this purpose because they allow for estimation of impacts at a longer follow-up period, 24 months. (The 36-month survey cannot be used because it is still underway.) In addition to looking for impacts on earnings through 24 months with the NDNH, we will also, for a similar time frame, estimate impacts on college degree attainment with National Student Clearinghouse (NSC) data. Sites with positive impacts on earnings or degree attainment through 24 months will be preferred for the 72-month survey.
Adequacy of Power: Although the likelihood of long-term effects on economic independence and family well-being would be a necessary condition for justifying a 72-month survey, it would not be sufficient unless there was also adequate power in the site to reliably detect impacts on the outcomes in the long term. Our approach to determining whether power is adequate is to estimate minimum detectable effects (MDEs)—given the program’s sample size and expected survey response rate—for the outcomes that we would potentially include in a survey and compare that with plausible impacts for these outcomes. Based on this analysis, the PACE sites will be compared based on the strength of early results relative to their respective MDEs.
The PACE programs that are projected to most likely be able to detect plausible effects will be candidates for the 72-month survey. If there are more promising PACE programs than the survey sample allows, the programs will be selected from among the candidate programs to obtain a range of career pathway interventions.
Sampling Plan for Study of Impacts on Child Outcomes
Like the 36-month survey, the 72-month survey will have a child module to assess program impacts on the participants’ children. Within a given site, the 72-month survey will use the same focal child sample selected for the 36-month survey. The sample selection procedures are described here.
The child module in the 72-month survey will be asked of all respondents who had at least one focal child selected for the 36-month follow-up survey. The focal child selection at 36 months was confined to participants who reported at least one child in their household at baseline who would be between the ages of 3 and 18 years at the time of the 36-month survey. A given respondent was asked to confirm that the selected focal child resided with the respondent more than half time during the 12 months prior to the 36-month survey administration. Each household was asked about a specific focal child regardless of how many children were eligible. The 36-month sampling plan selected approximately equal numbers of children from each of three age categories at baseline: preschool-age children aged 3 through 5 and not yet in kindergarten; children in kindergarten through grade 5; and children in grades 6 through 12. The procedure for selecting a focal child from each household depended on the configuration of children from each age category present in the household. Specifically, there are seven possible household configurations of age groups in households with at least one child present:
Preschool child(ren) only
Child(ren) in K – 5th grade only
Child(ren) in 6th – 12th grade only
Preschool and K – 5th grade children
K – 5th grade and 6th – 12th grade children
Preschool and 6th – 12th grade children
Preschool-age, K – 5th grade, and 6th – 12th grade children.
The sampling plan was as follows:
For household configurations 1, 2, and 3, only one age group was available for the sample, so the focal child was selected from that age group; if there were multiple children in that age group, one child was selected at random.
For household configurations 4 and 5, a K – 5th grade child was selected from a random 30 percent of households and a child from the other age category in the household (preschool-age or 6th – 12th grade) was selected from the remaining 70 percent of households.
For household configuration 6, a preschool child was selected from a random 50 percent of the households, and a child in 6th – 12th grade was selected from the remaining 50 percent of the households.
For household configuration 7, a child in K – 5th grade was selected from a random 20 percent of households, a preschool-age child was selected from a random 40 percent of households, and a child in 6th – 12th grade was selected from the remaining 40 percent of households.
Sampling weights will be used to account for the differential sampling rates for some child age categories in some household configurations. By applying the sampling weights, the sample for estimating program impacts on children will represent the distribution of the seven household configurations among study households.
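To make the selection scheme concrete, the sketch below (in Python) illustrates how a focal child could be drawn under the plan above and how the sampling weight follows as the inverse of the selection probability. The selection probabilities mirror the plan described above; the function name and data layout are illustrative assumptions, not the study's actual code.

import random

# Selection probability of each age group, by household configuration.
# Keys are (has_preschool, has_k5, has_grade6_12); values map the focal
# child's age group to the probability that the group is selected.
SELECTION_PROBS = {
    (True, False, False): {"preschool": 1.0},
    (False, True, False): {"k5": 1.0},
    (False, False, True): {"grade6_12": 1.0},
    (True, True, False): {"k5": 0.30, "preschool": 0.70},
    (False, True, True): {"k5": 0.30, "grade6_12": 0.70},
    (True, False, True): {"preschool": 0.50, "grade6_12": 0.50},
    (True, True, True): {"k5": 0.20, "preschool": 0.40, "grade6_12": 0.40},
}

def select_focal_child(children_by_group, rng=random):
    """Pick one focal child for a household and return (child, sampling weight).

    children_by_group maps an age-group label ("preschool", "k5", "grade6_12")
    to the list of the household's children in that group. The weight is the
    inverse of the joint probability of selecting that group and then that
    child within the group.
    """
    config = tuple(bool(children_by_group.get(g)) for g in ("preschool", "k5", "grade6_12"))
    group_probs = SELECTION_PROBS[config]
    groups, probs = zip(*group_probs.items())
    group = rng.choices(groups, weights=probs, k=1)[0]
    candidates = children_by_group[group]
    child = rng.choice(candidates)
    probability = group_probs[group] / len(candidates)
    return child, 1.0 / probability

# Example: configuration 6 (one preschooler, two children in grades 6-12).
household = {"preschool": ["child_a"], "k5": [], "grade6_12": ["child_b", "child_c"]}
print(select_focal_child(household))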
At 72 months, the focal child module will be asked about the same child selected at 36 months, provided the child lived with the respondent at least half the time in the past 12 months. If the focal child did not live with the respondent at least half the time in the past 12 months, no child module will be administered. At the time of the 72-month follow-up survey, all focal children are expected to be in kindergarten or above. A small proportion may even be age 18 or over. The child outcome module of the survey was modified for the 72-month survey to ask questions appropriate for the children's age ranges 72 months after study enrollment.
Overall, the research team expects response rates to be sufficiently high in this study to produce valid and reliable results that can be generalized to the universe of the study. The response rate for the baseline data collection is 100 percent. The response rate for the 15-month follow-up survey is 77.2 percent and, as of April 11, 2017, the response rate for the ongoing 36-month follow-up survey is 76 percent for the sample cohorts that have been completed. Given the experience on the 15-month and 36-month surveys and the longer time period since random assignment, we expect a 74 percent response rate on the 72-month follow-up survey. At 36 months, we attempted to conduct follow-up interviews with the full baseline sample, including non-respondents at 15 months. So far, we have succeeded in completing interviews at 36 months with about 10 percent of the sample whom we could not interview at 15 months. We plan to use the same rule at 72 months, attempting to collect information on the full initially randomized sample in selected sites.
For overall treatment impact estimation, the research team will use multivariate regression. The team will include individual baseline covariates to improve the power to detect impacts. The team will estimate the ITT (intention to treat) impact for all sites pooled and will conduct pre-specified subgroup analyses. In general, analyses that use administrative data will use everyone who gave informed consent during the randomization period for PACE. Analyses that rely on survey data will use everyone who gave informed consent during the randomization period and who responds to the follow-up survey. Analyses based on the survey data will deal with nonresponse by including covariates in the impact estimation models as well as using nonresponse weights, as described in the earlier study’s Analysis Plan.
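As a rough illustration of this estimation approach, the sketch below shows an ITT impact estimate from a weighted regression with baseline covariates. The file name, outcome, covariates, and weight variable are hypothetical placeholders; the study's actual specification is documented in its Analysis Plan.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file with one row per study participant.
df = pd.read_csv("pace_72_month_analysis_file.csv")

# Weighted least squares with baseline covariates; the coefficient on
# `treatment` is the pooled ITT impact estimate.
model = smf.wls(
    "earnings ~ treatment + age_at_baseline + female + prior_earnings",
    data=df,
    weights=df["nr_weight"],  # nonresponse weights (survey-based analyses)
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.params["treatment"], model.bse["treatment"])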
The research team has estimated the MDEs. As shown in Exhibit B-2 below, the MDE is the smallest true impact that the study would detect with 80 percent probability, using a test of the hypothesis of "no impact" that has only a 10 percent chance of finding an impact when the true impact is zero.
MDE estimates for two sample sizes are shown in Exhibit B-2: one for the typical PACE site (a site that randomly assigned 500 to treatment and 500 to control), and one for Year Up (1,670 treatment, 872 control). MDEs are displayed for annual household income, a key outcome that will be measured in the 72-month survey.
Exhibit B-2: Minimum Detectable Effects for Annual Household Income
Randomly Assigned | Sample Size for Calculation of MDE With 74 Percent Response Rate | Average Annual Household Income MDE
500 T; 500 C (typical PACE program) | 370 T; 370 C | $288
1,670 T; 872 C (Year Up) | 1,236 T; 645 C | $253
Note: MDEs are based on 80% power with a 10% significance level in a one-tailed test, assuming baseline variables explain 30% of the variance in income. The mean and variance estimates for income in both sample size categories come from tabulations of 15-month survey data. MDEs are 2% larger than for the 15-month follow-up given the expected 4% decrease in the respondent sample size.
The research team estimates these MDEs are sufficient to detect impacts likely to be policy relevant in each site.
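For readers who wish to see the general form of these calculations, the sketch below computes an MDE for a two-group comparison with regression adjustment under the assumptions stated in the note to Exhibit B-2. The outcome standard deviation shown is purely illustrative; the study's MDEs were tabulated from 15-month survey data.

from math import sqrt
from scipy.stats import norm

def mde(n_t, n_c, sd, r_squared=0.30, alpha=0.10, power=0.80):
    """Minimum detectable effect for a one-tailed test of a group difference."""
    multiplier = norm.ppf(1 - alpha) + norm.ppf(power)
    standard_error = sd * sqrt((1 - r_squared) * (1 / n_t + 1 / n_c))
    return multiplier * standard_error

# Typical PACE site after a 74 percent response rate (370 T, 370 C), with an
# assumed, illustrative outcome standard deviation.
print(round(mde(370, 370, sd=2200)))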
In our experience with the 15- and 36-month surveys, the PACE study population is more responsive to outreach attempts by local interviewers with local phone numbers than it is to calls from our telephone research center. Thus, for the 72-month follow-up survey, local field interviewers will conduct all interviews. Interviewers will conduct the 72-month follow-up survey using encrypted devices equipped with computer-assisted personal interviewing (CAPI) technology. CAPI technology, by design, will not allow inadvertent missing responses and will reduce the number of outlying responses to questions, such as exceptionally high or low hours worked per week. CAPI technology also houses the complete participant contact history, interviewer notes, and case disposition summaries, eliminating the need for paper records in the field. Our contractor (Abt Associates) will conduct the survey.
Before interviewing begins, interviewers will be hired and trained, an advance letter will be mailed to participants, and interviewers will send email reminders.
Interviewer Staffing: An experienced, trained staff of interviewers will conduct the PACE 72-Month Follow-Up Survey. The training includes didactic presentations, numerous hands-on practice exercises, and role-play interviews. Special emphasis will be placed on project knowledge and sample complexity, gaining cooperation, refusal avoidance, and refusal conversion.
Abt maintains a roster of approximately 1,700 experienced in-person interviewers across the country. To the extent possible, the new study will recruit in-person interviewers who worked successfully on the PACE 15-month and 36-month surveys. These interviewers are familiar with the study and have already developed rapport with respondents and gained valuable experience locating difficult-to-reach respondents.
All potential in-person interviewers will be carefully screened for their overall suitability and ability to work within the project’s schedule, as well as the specific skill sets and experience needed for the study (e.g., previous in-person data collection experience, strong test scores for accurately recording information, attention to detail, reliability, and self-discipline).
Advance Letter: To support the 72-month survey effort, an advance letter will be mailed to participants approximately one and a half weeks before we start to call them for interviews. The advance letter serves to re-engage the participant in the study and alert them to the upcoming effort so that they are prepared for the interviewer's call. (Advance letters are found in Appendices L1 for the 36-month survey and L2 for the 72-month survey.)
The sample file is updated prior to mailing using Accurint to help ensure that the letter is mailed to the most up-to-date address. This personalized letter will remind participants of their agreement to participate in the study and that interviewers will be calling them for the follow-up interview. The letter also assures them that their answers will be kept private and provides a toll-free number that they can call to set up an interview. We may also send an email version of the advance letter to participants for whom we have email addresses.
Email Reminder: Interviewers attempt to contact participants by telephone first. If initial phone attempts are unsuccessful, interviewers can use their project-specific email accounts to introduce themselves as the local staff, explain the study and attempt to set up an interview. They send this email, along with the advance letter, about halfway through the time during which they are working the cases. (See Appendices K1 for the 36-month survey email reminder text and K2 for the 72-month survey email reminder text.)
Data collection begins when local interviewers attempt to reach the study participants by telephone. The interviewers will call the numbers provided for the respondent first and then attempt any alternate contacts. Interviewers will have access to the full history of respondent telephone numbers collected at baseline and updated throughout the 72-month follow-up period.
Interviewers will dial through the phone number history, beginning with the most recent number, and then attempt to reach alternate contacts. Once all phone efforts are exhausted, interviewers will begin working non-completed cases in person. Interviewers may leave specially designed project flyers and Sorry-I-Missed-You cards with family members or friends (see Appendices J1 and J2 for the 36-month and 72-month surveys, respectively).
All study materials designed for PACE participants will be available in English and Spanish. Interviewers will be available to conduct the Participant Follow-Up survey interview in either language. Persons who speak neither English nor Spanish, deaf persons, and persons on extended overseas assignment or travel will be ineligible for follow-up, but interviewers will collect information on reasons for ineligibility. Persons who are incarcerated or institutionalized will also be ineligible for follow-up.
The goal will be to administer the follow-up survey to all study participants in each site selected for the 72-month survey, reaching a target response rate of at least 74 percent. To achieve this response rate, the PACE team has developed a comprehensive plan to minimize sample attrition and maximize response rates. This plan involves regular tracking and locating of all study participants, providing tokens of appreciation, and sample control during the data collection period.
Given the time between the 36-month follow-up surveys and the 72-month follow-up surveys, it is likely that many study participants will have changed telephone numbers (landlines as well as cell phones) and addresses, in some cases multiple times. Accurate locating information is crucial to achieving high survey response rates in any panel study; it is particularly relevant for a 72-month follow-up study.
The research team has developed a contact update approach that includes innovative new ideas to supplement protocols used on the earlier surveys. The team will still use a mix of activities that build in intensity over time. We will use passive searches (batch Accurint processing) early in the effort and then build up to more intensive active contact update efforts during the 12 months leading up to the release of the 72-month survey. Abt Associates has used these approaches on other studies of traditionally difficult-to-reach populations, such as homeless families and TANF recipients. Exhibit B-3 shows our planned contact update activities and the schedule for conducting each one, in terms of the number of months prior to the 72-month survey data collection.
Exhibit B-3: Participant Contact Update and Data Collection Schedule
Contact Update Activity | Timing (Months prior to 72-month survey)
Accurint (Passive) Lookups | Every 6 months starting in February 2017
Participant Contact Updates | 12
Participant Newsletter | 8
Contact Update Check-in Call | 4
Long-Term Survey | 0
Participant Locating in the Field | From start of survey to survey completion or cohort closed
The first contact update activities will be a passive database search followed by our standard participant contact update mailing. (See Appendices F and G for contact update letters.) Passive database searches allow us to obtain updated respondent contact data using proprietary databases of vendors such as Accurint. This way, the study team can update participant contact data quickly and efficiently, without any respondent burden. We will conduct Accurint lookups for the study sample every 6 months.
Twelve months before participants are scheduled to be interviewed, all potential survey sample members will be mailed a standard contact update letter. The letter will invite them to update their contact information as well as information for up to three alternative contacts. This contact update will help reestablish communication with study participants who have not had any recent interaction with the study.
Participant newsletter (Appendix I): To reengage the study participants selected for the 72-month long-term survey sample, our team needs to catch their attention and draw them back into the study. The participant newsletter will start by thanking participants for their continued cooperation and emphasize the importance of their participation, even if they were assigned to the control group and did not participate in the program. It will include highlights from the PACE study. Finally, it will explain the remaining data collection activities and offer participants an opportunity to update their contact information via the online system or on paper. This newsletter will be sent eight months prior to the release of the 72-month survey, replacing the standard contact update letter that would have gone out at that time.
Monetary incentives show study participants that the research team appreciates the time they spend participating in study information collection activities. Although evidence of the effectiveness of incentives in reducing nonresponse bias appears to be nearly nonexistent, it is well established that incentives strongly reduce attrition in panel studies.2
In longitudinal studies such as PACE, panel retention during the follow-up period is critical to minimizing the risk of nonresponse bias and to achieving sufficient sample for analysis. Although low response rates do not necessarily lead to nonresponse bias and it is at least theoretically possible to worsen nonresponse bias by employing some techniques to boost response rates (Groves, 2006), most statisticians and econometricians involved in the design and analysis of randomized field trials of social programs agree that it is generally desirable to obtain a response rate as close to 80 percent as possible in all arms of the trial (Deke and Chiang, 2016). The work of Deke and Chiang underlies the influential guidelines of the What Works Clearinghouse (WWC). Under those guidelines, the evidential quality rating of an evaluation is downgraded if the difference in response rates between arms exceeds a certain tolerance (e.g., four percentage points when the overall response rate is 80 percent).
Mindful of these risks and the solid empirical base of research demonstrating that incentives do increase response rates, OPRE and OMB authorized incentives for the prior rounds of data collection at 15 and 36 months (OMB control number 0970-0397). At 15 months, the incentive was $30 for completing the interview and $5 for updating their contact information in advance of the scheduled interview time. With these incentives, PACE achieved response rates for the treatment groups varying from 72.4 percent to 90.8 percent across the nine sites. Response rates on the control groups were generally lower, varying from 68.5 percent to 79.4 percent. Overall, PACE experienced a differential response rate of 5.1 percentage points. Site-specific gaps varied from -1.0 to +13.1 percentage points.
Given these response rates and gaps, at 36 months the conditional incentive for completing the main interview was increased to $40, the $5 incentive for updating contact information was changed to a $2 prepayment included in the request for a contact update, and a $5 prepayment was added to the advance letter package to remind participants of the study and signal that a legitimate interviewer would be calling them shortly to learn about their experiences since study enrollment.
In most longitudinal studies, response rates decline over follow-up rounds. Abt Associates is currently about mid-way through data collection for the 36-month follow-up. Perhaps due to the increased incentives, among other efforts, as of April 11, 2017, the average response rate is about 1.2 points lower than it was for the 15-month follow-up. Of course, the 72-month follow-up is three years later and locating challenges will be even greater. Given a target of a 74 percent response rate for the 72-month follow-up, we feel it would be wise to further increase incentives, as well as to slightly restructure them.
The amounts proposed (subject to OMB approval) for the 72-month survey and contact update responses are as follows:
token of appreciation valued at $5 for responding to the first contact update letter;
token of appreciation valued at $10 for completing the check-in call;
token of appreciation valued at $45 for completing the survey.
Finally, the team does not rely solely on the use of tokens of appreciation to maximize response rates—and therefore reduce nonresponse bias. The next section summarizes other efforts to maximize response rates.
During the data collection period, the research team will minimize nonresponse levels and the risk of nonresponse bias by:
Using trained interviewers who are skilled at working with low-income adults and skilled in maintaining rapport with respondents, to minimize the number of break-offs and incidence of nonresponse bias.
Providing a Spanish language version of the survey instrument. Interviewers will be available to conduct the interview in Spanish to help achieve a high response rate among study participants with Spanish as a first language. Based on our experience on the earlier surveys, we know which sites need Spanish language interviewers and Abt will hire the most productive bilingual interviewers from these survey efforts.
Using a mixed-mode approach, a telephone interview with an in-person follow-up, but having local interviewers do both the telephone and in-person interviews. Our experience on the earlier surveys is that local interviewers are more likely than the phone center to get their calls answered. This approach also allows the local interviewer to begin working the sample earlier and to have firsthand knowledge of the obstacles encountered if a phone interview cannot be completed.
Using contact information updates, a participant newsletter, and check-in calls to keep the sample member engaged in the study and to enable the research team to locate them for the follow-up data collection activities. (See Appendix F for a copy of the contact information update letter.)
Using the participant newsletter to clearly convey the purpose of the survey to study participants and provide reassurances about privacy, so they will perceive that cooperating is worthwhile. (See Appendix I for a copy of the participant newsletter.)
Sending email reminders to non-respondents (for whom we have an email address) informing them of the study and allowing them the opportunity to schedule an interview (Appendix K).
Providing a toll-free study hotline number—which will be included in all communications to study participants—for them to use to ask questions about the survey, to update their contact information, and to indicate a preferred time to be called for the survey.
Taking additional locating steps in the field, as needed, when the research team does not find sample members at the phone numbers or addresses previously collected.
Using customized materials in the field, such as “Sorry I Missed You” flyers with study information and the toll-free number (Appendix J).
Requiring the survey supervisors to manage the sample to ensure that a relatively equal response rate for treatment and control groups in each PACE site is achieved.
Through these methods, the research team anticipates being able to achieve the targeted 74 percent response rate for the follow-up survey. As discussed in Section B.1, the targeted rate of 74 percent is based on our response rates from the 15-month survey and the early results from the 36-month survey.
Regardless of the final response rate, we will construct nonresponse adjustment (NRA) weights. If we achieve a response rate below 80 percent, as we expect, we will conduct a nonresponse bias analysis. Using both baseline data collected just prior to random assignment and post-random assignment administrative data from NSC (and possibly from NDNH), we will estimate response propensity using a logistic regression model, separately for the treatment and control groups in a site. Within each PACE program, study participants will be allocated to nonresponse adjustment cells defined by intervals of response propensity. Each cell will contain approximately the same number of study participants. Within each nonresponse adjustment cell, the empirical response rate will be calculated. Respondents will then be given NRA weights equal to the inverse empirical response rate for their respective cell.
An alternative propensity adjustment method could use the directly modeled estimates of response propensity. However, these estimates can sometimes be close to zero, creating very large weights, which in turn lead to large survey design effects. The use of nonresponse adjustment cells typically results in smaller design effects. The number of cells will be set as a function of model quality. The empirical response rates for a cell should be monotonically related to the average predicted response propensity. We will start with a large number of cells and reduce that number until we obtain the desired monotonic relationship.
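The sketch below illustrates this cell-based weighting procedure: response propensity is modeled with logistic regression, cases are grouped into propensity cells, and respondents receive weights equal to the inverse of their cell's empirical response rate. The column names, covariates, and initial cell count are illustrative assumptions, not the study's specification.

import pandas as pd
from sklearn.linear_model import LogisticRegression

def nra_weights(df, covariates, responded="responded", n_cells=10):
    """Attach a propensity cell and an NRA weight (respondents only) to df."""
    X, y = df[covariates], df[responded]
    propensity = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    df = df.assign(propensity=propensity)
    # Approximately equal-sized cells defined by intervals of response propensity.
    df["cell"] = pd.qcut(df["propensity"], q=n_cells, labels=False, duplicates="drop")
    cell_response_rate = df.groupby("cell")[responded].transform("mean")
    df["nra_weight"] = (1.0 / cell_response_rate).where(df[responded] == 1)
    return df

# Applied separately to the treatment and control groups within each site, e.g.:
#   nra_weights(site_df[site_df["treatment"] == 1], ["age_at_baseline", "prior_earnings"])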
Once provisional weights have been developed, we will look for residual nonresponse bias by comparing the estimates of the effects of PACE programs on administrative outcomes estimated with the NRA weights in the sample of survey respondents versus the estimates of the same effects estimated on the entire randomized sample (including survey nonrespondents) without weights. If these estimates are similar (e.g., within each other's confidence intervals), we will be reasonably confident that we have ameliorated nonresponse bias. If, on the other hand, there are important differences, then we will search for ways to improve our models and recalculate the weights as in Judkins et al. (2007). If use of the weights does not improve the congruence, then we will conduct survey analyses without weights. This decision will be made separately for each PACE program.
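As an illustration of this residual bias check, the sketch below compares the weighted impact estimate on an administrative outcome among respondents with the unweighted estimate on the full randomized sample. The file name, outcome, and column names are hypothetical placeholders.

import pandas as pd
import statsmodels.formula.api as smf

def impact_with_ci(data, weights=None):
    """Return the treatment coefficient and its 95% confidence interval."""
    if weights is None:
        fit = smf.ols("enrolled_nsc ~ treatment", data=data).fit(cov_type="HC1")
    else:
        fit = smf.wls("enrolled_nsc ~ treatment", data=data, weights=weights).fit(cov_type="HC1")
    low, high = fit.conf_int().loc["treatment"]
    return fit.params["treatment"], (low, high)

site_df = pd.read_csv("pace_site_admin_outcomes.csv")  # hypothetical file

# Full randomized sample, unweighted, versus survey respondents with NRA weights.
full_impact, full_ci = impact_with_ci(site_df)
respondents = site_df[site_df["responded"] == 1]
resp_impact, resp_ci = impact_with_ci(respondents, weights=respondents["nra_weight"])

# If each estimate falls within the other's confidence interval, residual
# nonresponse bias is judged to be acceptably small for this PACE program.
print(full_impact, full_ci, resp_impact, resp_ci)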
In designing the 72-month follow-up survey, the research team included items used successfully in previous waves (15-month or 36-month follow-up surveys) or in other national surveys. Consequently, many of the survey questions have been thoroughly tested on large samples.
To ensure the length of the instrument is within the burden estimate, we pretested with fewer than 10 people and edited the instrument to keep burden to a minimum. During internal pretesting, all instruments were closely examined, and questions deemed unnecessary were eliminated to minimize respondent burden.
The individuals shown in Exhibit B-4 assisted ACF in the statistical design of the evaluation.
Exhibit B-4: Individuals Consulted on the Study Design
Name | Role in Study
Larry Buron, Abt Associates Inc. | Project Director
Dr. David Fein, Abt Associates Inc. | Principal Investigator on PACE 15-month study and Co-PI of 36- and 72-month follow-up studies
David Judkins, Abt Associates Inc. | Director of Analysis and sampling statistician
Laura R. Peck, Abt Associates Inc. | Co-PI on HPOG-Impact 15-month study and 36- and 72-month follow-up studies
Alan Werner, Abt Associates Inc. | Co-PI on HPOG-Impact 15-month study and 36- and 72-month follow-up studies
Inquiries regarding the statistical aspects of the study’s planned analysis should be directed to:
Larry Buron Project Director
David Fein Co-Principal Investigator
David Judkins Director of Analysis
Nicole Constance Office of Planning, Research & Evaluation
Administration for Children and Families, US DHHS
Deke, J. and Chiang, H. (2016). The WWC attrition standard: Sensitivity to assumptions and opportunities for refining and adapting to new contexts. Evaluation Review. [Epub ahead of print]
Groves, R. (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70, 646-675.
Judkins, D., Morganstein, D., Zador, P., Piesse, A., Barrett, B., Mukhopadhyay, P. (2007). Variable Selection and Raking in Propensity Scoring. Statistics in Medicine, 26, 1022-1033.
Lynn, P. (ed.) (2009). Methodology of Longitudinal Surveys. Chichester: John Wiley & Sons.
1 The Health Profession Opportunity Grants (HPOG) program provides education and training to Temporary Assistance for Needy Families (TANF) recipients and other low-income individuals for occupations in the health care field that pay well and are expected either to experience labor shortages or to be in high demand. The HPOG program is administered by the Office of Family Assistance within ACF. In FY 2010, $67 million in grant awards were made to 32 entities located across 23 states, including four tribal colleges and one tribal organization. These demonstration projects were intended to address two challenges: the increasing shortfall in the supply of healthcare professionals in the face of expanding demand, and the increasing requirement for a post-secondary education to secure a well-paying job. Grant funds could be used for training and education as well as supportive services such as case management, child care, and transportation. (See: https://www.acf.hhs.gov/ofa/programs/hpog.)
2 See Chapter 12 of Lynn (2009), in particular, section 12.5 that reviews the effects of incentives in several prominent longitudinal studies.