Memorandum

United States Department of Education
Institute of Education Sciences
National Center for Education Statistics
DATE: August 16, 2017
TO: Robert Sivinski, OMB
THROUGH: Kashka Kubzdela, OMB Liaison, NCES
FROM: David Richards, BPS:12/17 Project Officer, NCES
Tracy Hunt-White, Team Lead, Postsecondary Longitudinal and Sample Surveys, NCES
SUBJECT: 2012/17 Beginning Postsecondary Students Longitudinal Study (BPS:12/17) Abbreviated Interview Change Request (OMB# 1850-0631 v.16)
The 2012/17 Beginning Postsecondary Students Longitudinal Study (BPS:12/17) is conducted by the National Center for Education Statistics (NCES), within the U.S. Department of Education (ED). BPS is designed to follow a cohort of students who enroll in postsecondary education for the first time during the same academic year, irrespective of the date of high school completion. Data from BPS are used to help researchers and policymakers better understand how financial aid influences persistence and completion, what percentages of students complete various degree programs, what the early employment and wage outcomes are for certificate and degree attainers, and why students leave school. The request to conduct the BPS:12/17 full-scale data collection was approved by OMB in December 2016, with the latest change request approved in July 2017 (OMB# 1850-0631 v.10-15). This request is to: (1) begin the next stage of the BPS:12/17 responsive design plan by offering a specific incentive strategy (an abbreviated interview) to targeted members of the main BPS:12/17 sample, based on recent modeling using study nonrespondents; (2) request approval for additional student contacting materials (reminder and prompting emails); and (3) revise text for the confidentiality pledge and citation of security and confidentiality requirements in BPS:12/17 recruitment and data collection materials. We have also corrected an error in the titles of appendices that are listed in the Supporting Statement Part A of this submission. This request does not increase respondent burden or the cost to the federal government.
BPS:12/17 responsive design plan
The first stage of the BPS:12/17 responsive design plan was a calibration sample, completed and described in an earlier change request (OMB# 1850-0631 v.13). This was followed by an incentive boost targeting sample members whose responses could help reduce bias, as described in the most recent change request (OMB# 1850-0631 v.15). This current change request describes the next intervention of the BPS:12/17 responsive design plan, the selection and targeting of cases for an early abbreviated interview. This is expected to be the final change request for the BPS:12/17 responsive design plan.
This change memorandum summarizes results of the modeling and targeting activities performed in accordance with the procedures described in the approved Supporting Statement Part B (OMB# 1850-0631 v.14). Attachment A, appended in this document, provides an excerpt from Part B that describes the responsive design plan, including the results of the BPS:12/14 responsive design approach (Section 4.a) and the BPS:12/17 plan (Section 4.b). Attachment B, also appended in this document, presents the variables used in the BPS:12/17 responsive design plan models, along with the relevant analyses results.
Targeting cases for an early abbreviated interview
Targeting cases for the early abbreviated interview was performed using the same methods as targeting cases for the incentive boost. In mid-August 2017, eligible survey nonrespondents were selected for bias likelihood modeling. The allocation of sample members for modeling is shown in Table 1. In total, 14,331 current nonrespondents were included in the bias likelihood model. Results were combined with a priori response propensity modeling data, constructed from NPSAS:12 and BPS:12/14 data, to produce the importance measure.
Table 1. Allocation of sample for response propensity and bias likelihood modeling

| BPS:12/17 sample | Number of cases | Included in a priori propensity model | Included in bias-likelihood model | Eligible for targeting |
|---|---|---|---|---|
| Total | 33,728 | | | |
| Deceased | 3 | No | No | No |
| Double nonrespondents | 3,271 | Yes | No | No |
| Control cases1 | 3,152 | Yes | Yes | No |
| Current respondents2 | 14,600 | Yes | Yes | No |
| Current other final codes3 | 669 | Yes | Yes | No |
| Current nonrespondents2 | 12,033 | Yes | Yes | Yes |

1 Note that 463 of the control cases are also double nonrespondents.
2 Response status through 8/14/2017 for cases that are not deceased, double nonrespondents, or controls.
As described in Attachment A, below, the BPS:12/17 responsive design plan aims to reduce the impact of unit nonresponse within institution type (or sector). Institution types are grouped as follows:
Institution Group A: Public less-than-2-year; Public 2-year
Institution Group B: Public 4-year non-doctorate-granting; Public 4-year doctorate-granting; Private nonprofit less-than-4-year; Private nonprofit 4-year non-doctorate-granting; Private nonprofit 4-year doctorate-granting
Institution Group C: Private for-profit less-than-2-year
Institution Group D: Private for-profit 2-year
Institution Group E: Private for-profit 4-year
The individual nonresponse rates within these institution groups at the conclusion of the BPS:12/14 student interview were reviewed and compared, and are shown here in Table 2. Our targeting approach is to select cases with high importance scores in proportion to the institution group nonresponse rates in BPS:12/14, in order to reduce nonresponse bias by increasing the responses of these targeted cases within each of the institution groups. Proportional targeting begins by taking the total number of nonrespondent cases by institution group and subtracting the control cases and exclusions. The resulting number is multiplied by the nonresponse rate for that sector in BPS:12/14 (e.g., multiplying 3,333 by 32.6 percent for institution group A). The resulting numbers of targeted cases to be offered the early abbreviated interview are shown in Table 2, by institution group. The total of 3,032 targeted nonrespondents is approximately 30 percent of the total available current nonrespondents.
Table 2. BPS:12/14 final unweighted nonresponse rates and targeted cases by institution group

| Institution group | BPS:12/14 final unweighted nonresponse rate (percent) | BPS:12/17 eligible nonrespondent cases (number) | Cases to target (number) |
|---|---|---|---|
| A | 32.6 | 3,333 | 1,087 |
| B | 20.7 | 2,657 | 550 |
| C | 42.9 | 264 | 114 |
| D | 36.2 | 901 | 327 |
| E | 34.9 | 2,735 | 954 |
| Total | 30.3 | 9,890 | 3,032 |

Note: BPS:12/17 eligible nonrespondents (less control cases and exclusions) as of 8/14/17.
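For illustration, the proportional allocation in Table 2 can be reproduced approximately with a short Python sketch. The exact rounding rule is an assumption (the published rates are rounded to one decimal), so computed targets may differ from the published figures by a case:

```python
import math

# BPS:12/14 final unweighted nonresponse rates (percent) and
# BPS:12/17 eligible nonrespondent counts, taken from Table 2.
groups = {
    "A": (32.6, 3333),
    "B": (20.7, 2657),
    "C": (42.9, 264),
    "D": (36.2, 901),
    "E": (34.9, 2735),
}

# Targeted cases = eligible nonrespondents x sector nonresponse rate.
# Rounding up is an assumption; it matches the published figures to
# within one case per group.
targets = {g: math.ceil(n * rate / 100) for g, (rate, n) in groups.items()}

published = {"A": 1087, "B": 550, "C": 114, "D": 327, "E": 954}
for g in groups:
    assert abs(targets[g] - published[g]) <= 1
```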
While response rates are not a good indicator of nonresponse bias across studies (Groves and Peytcheva, 2008), different response rates within a study have been observed to covary with survey estimates (e.g., Peytchev, 2013), which is indicative of a link between response rates and nonresponse bias. The described approach aims to allocate greater effort to the sectors with lower response rates, for which there is increased risk of nonresponse bias.
As with the prior application of targeted incentives, propensity scores above high and low cutoffs are again recommended for exclusion as potential targets during data collection. Excluding cases with high and low propensity scores is a somewhat subjective decision, intended to focus resources where they are most likely to be effective, minimizing efforts with sample members who are either very likely to respond without additional treatment or very unlikely to respond no matter what treatments are offered.
Statistical processes such as outlier analysis and distance to the mean value were considered for this exclusion process, but were not used because excluded cases are not outliers in a statistical distribution sense. K-means clustering was tested with targeting data in the prior round (see OMB# 1850-0631 v.15), but results suggested excluding more cases than by visual inspection, and presented concerns associated with reducing the effectiveness of the intervention.
Results of the BPS:12/17 responsive design plan are expected to provide value beyond the bias reduction with this study. The results can be useful to other studies with similar intervention targeting considerations, and can provide a foundation for assumptions about the impacts of treatments on increasing response rates which, in turn, can be used to explore methods for identifying exclusions. We will continue to explore additional methods to focus limited resources on cases most likely to provide the desired results. Discriminant function analysis is one possible method for further investigation, and the results of the current targeting activities may be useful in assessing such methods.
Unlike the prior $45 incentive boost intervention, this early abbreviated interview does not have an easily calculated monetary cost. However, the interview items removed when abbreviating the interview result in a loss of data for abbreviated interview completers. Removed items would have collected data on non-federal financial aid amounts, detailed employer data, and financial literacy. A complete list of the items included in the abbreviated interview can be found in Appendix G. Because these data are desirable, the early abbreviated interview is reserved as an incentive offered only to targeted cases, whose completion is expected to help reduce nonresponse bias. The abbreviated interview will also be offered to all remaining nonrespondents at a later date, in mid-September 2017, approximately one month before the end of data collection.
Data collection with the 3,032 sample members identified for targeting is scheduled to begin after the 25th week of data collection, on or about August 23, 2017. Data collection activities for non-targeted cases will continue as planned. No additional targeted interventions are planned.
Additional student contacting emails (Appendix E)
We plan to send up to eight additional prompting emails, as needed, to sample members who have not responded to the survey. In the original OMB clearance (OMB# 1850-0631 v.10), ten reminder emails were approved. We request clearance for six new reminder emails, which use substantively the same text as the original ten. We also request approval of two new emails, one sent from the contractor study director and one from the NCES project officer. The text for all eight emails is included in a revised Appendix E. The text for the previously approved emails will not change.
Updates to text addressing study confidentiality, authority to conduct the study, and the Paperwork Reduction Act (Appendices E, F, G, and H)
Citations and text describing data security and confidentiality protection procedures, NCES’s authority to conduct the study, and the Paperwork Reduction Act have been revised as part of an effort to make such text consistent across NCES studies and to account for the Cybersecurity Enhancement Act of 2015. Revisions have been made wherever this language appears in materials aimed at student or institution respondents, such as request letters and websites. This text has only been revised in materials that are still being used or will be used in the future. Materials that are no longer being used, such as the student brochure, have not been revised in the attached appendices.
The following is a summary of edits to the citations and text in the attached appendices:
Appendix E, Communication Materials for Survey Respondents
The following items were revised: BPS:12/17 Student Survey Website Text, Confidentiality Procedures at https://surveys.nces.ed.gov/bps/confidentiality.aspx.
The following materials were not revised because they are no longer needed: BPS:12/17 Brochure Text, Contact Information Update Letter – Parent, Contact Information Update Form – Parent, Initial Contact Letter – Student, Data Collection Announcement Letter, Reminder Letter/Targeted Experiment Announcement Letter.
Appendix F, Transcript and Student Records Contacting Materials
All materials may be used in future contacts and thus all materials were subject to review and updates (these edits affected all text describing authorization to conduct these studies, confidentiality, and the Paperwork Reduction Act, which are commonly found in letters to institutions, including in footers, and in “about the study” materials, such as websites or brochures).
Appendix G, Interview Instrument Facsimile
The confidentiality, authorization, and PRA statement was revised.
Appendix H, Student Records Instrument Facsimile
The confidentiality, authorization, and PRA statement was revised.
References

Groves, R. & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72(2), 167–189.
Peytchev, A. (2013). Consequences of survey nonresponse. The Annals of the American Academy of Political and Social Science, 645(1), 88–111.
Attachment A. Excerpt from the approved Supporting Statement Part B (OMB# 1850-0631 v.14)
Tests of Procedures and Methods
The design of the BPS:12/17 full-scale data collection—in particular, the use of responsive design principles to reduce bias associated with nonresponse—expands on data collection experiments designed for several preceding NCES studies and, particularly, on the responsive design methods employed in BPS:12/14. Section B.4.a below provides an overview of the responsive design methods employed for BPS:12/14, section B.4.b provides a description of the proposed methods for BPS:12/17, and section B.4.c describes the tests that will be conducted through the BPS:12 PETS pilot study.
BPS:12/14 Full Scale1
The BPS:12/14 full-scale data collection combined two experiments in a responsive design (Groves and Heeringa 2006) in order to examine the degree to which targeted interventions could affect response rates and reduce nonresponse bias. Key features included a calibration sample for identifying optimal monetary incentives and other interventions, the development of an importance measure for use in identifying nonrespondents for some incentive offers, and the use of a six-phase data collection period.
Approximately 10 percent of the 37,170 BPS:12/14 sample members were randomly selected to form the calibration sample, with the remainder forming the main sample, although readers should note that respondents from the calibration and main sample were combined at the end of data collection. Both samples were subject to the same data collection activities, although the calibration sample was fielded seven weeks before the main sample.
First Experiment: Determine Baseline Incentive. The first experiment with the calibration sample, which began with a web-only survey at the start of data collection (Phase 1), evaluated the baseline incentive offer. In order to assess whether or not baseline incentive offers should vary by likelihood of response, an a priori predicted probability of response was constructed for each calibration sample member. Sample members were then ordered into five groups using response probability quintiles and randomly assigned to one of eleven baseline incentive amounts ranging from $0 to $50 in $5 increments. Additional information on how the a priori predicted probabilities of response were constructed is provided below.
For the three groups with the highest predicted probabilities of response, each baseline incentive offer up through $30 yielded a response rate statistically higher than that of the next lowest incentive amount ($0 to $25), and response rates for incentives of $35 or higher were not statistically higher than the response rate at $30. For the two groups with the lowest predicted probabilities of response, the response rate at $45 was found to be statistically higher than the response rate at $0, but the finding was based on a small number of cases. Given the results across groups, a baseline incentive amount of $30 was set for use with the main sample. Both calibration and main sample nonrespondents at the end of Phase 1 were moved to Phase 2 with outbound calling; no changes were made to the incentive level assigned at the start of data collection.
Second Experiment: Determine Monetary Incentive Increase. Phase 3, a second experiment implemented with the calibration sample, after the first 28 days of Phase 2 data collection, determined the additional incentive amount to offer the remaining nonrespondents with the highest “value” to the data collection, as measured by an “importance score” (see below). During Phase 3, 500 calibration sample nonrespondents with the highest importance scores were randomly assigned to one of three groups to receive an incentive boost of $0, $25, or $45 in addition to the initial offer.
Across all initial incentive offers, those who had high importance scores but were in the $0 incentive boost group had a response rate of 14 percent, compared to 21 percent among those who received the $25 incentive boost, and 35 percent among those who received the $45 incentive boost. While the response rate for the $25 group was not statistically higher than the response rate for the $0 incentive group, the response rate for the $45 group was statistically higher than the response rates of both the $25 and the $0 groups. Consequently, $45 was used as the additional incentive increase for the main sample.
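The pairwise comparisons above are standard two-proportion tests. A sketch follows; the group size (roughly a third of the 500 targeted calibration nonrespondents per boost group) is an assumption from the text, not a reported figure:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z statistic for H0: p1 == p2 (pooled variance)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)            # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

n = 500 // 3  # assumed roughly equal assignment to $0/$25/$45 groups

z_45_vs_0 = two_prop_z(0.14, n, 0.35, n)  # exceeds 1.96: significant
z_25_vs_0 = two_prop_z(0.14, n, 0.21, n)  # below 1.96: not significant
```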
Importance Measure. Phases 1 and 3 of the BPS:12/14 data collection relied on two models developed specifically for this collection. The first, an a priori response propensity model, was used to predict the probability of response for each BPS:12/14 sample member prior to the start of data collection (and assignment to the initial incentive groups). Because the BPS:12/14 sample members were part of the NPSAS:12 sample, predictor variables for model development included sampling frame variables and NPSAS:12 variables including, but not limited to, the following:
responded during early completion period,
interview mode (web/telephone),
ever refused,
call count, and
tracing/locating status (located/required intensive tracing).
The second model, a bias-likelihood model, was developed to identify those nonrespondents, at a given point during data collection, who were most likely to contribute to nonresponse bias. At the beginning of Phase 3, described above, and of the next two phases – local exchange calling (Phase 4) and abbreviated interview for mobile access (Phase 5) – a logistic regression model was used to estimate, not predict, the probability of response for each nonrespondent at that point. The estimated probabilities highlight individuals who have underrepresented characteristics among the respondents at the specific point in time. Variables used in the bias-likelihood model were derived from base-year (NPSAS:12) survey responses, school characteristics, and sampling frame information. It is important to note that paradata, such as information on response status in NPSAS:12, particularly those variables that are highly predictive of response but quite unrelated to the survey variables of interest, were excluded from the bias-likelihood model. Candidate variables for the model included:
highest degree expected,
parents’ level of education,
age,
gender,
number of dependent children,
income percentile,
hours worked per week while enrolled,
school sector,
undergraduate degree program,
expected wage, and
high school graduation year.
Because the variables used in the bias-likelihood model were selected due to their potential ability to act as proxies for survey outcomes, which are unobservable for nonrespondents, the predicted probabilities from the bias-likelihood model were used to identify nonrespondents in the most underrepresented groups, as defined by the variables used in the model. Small predicted probabilities correspond to nonrespondents in the most underrepresented groups, i.e. most likely to contribute to bias, while large predicted probabilities identify groups that are, relatively, well-represented among respondents.
The importance score was defined for nonrespondents as the product of a sample member’s a priori predicted probability of response and one minus the sample member’s predicted bias-likelihood probability. Nonrespondents with the highest calculated importance score at the beginning of Phases 3, 4, and 5, were considered to be most likely to contribute to nonresponse bias and, therefore, were offered the higher monetary incentive increase (Phase 3), were sent to field and local exchange calling (Phase 4), and were offered an abbreviated interview (Phase 5). An overview of the calibration and main sample data collection activities is provided in table 5.
Table 5. Summary of start dates and activities for each phase of the BPS:12/14 data collection, by sample

| Phase | Calibration subsample start date | Main subsample start date | Calibration subsample activity | Main subsample activity |
|---|---|---|---|---|
| 1 | 2/18/2014 | 4/8/2014 | Begin web collection; randomize calibration sample to different baseline incentives (experiment #1) | Begin web collection; baseline incentives determined by results of first calibration experiment |
| 2 | 3/18/2014 | 5/6/2014 | Begin CATI collection | Begin CATI collection |
| 3 | 4/8/2014 | 5/27/2014 | Randomize calibration sample nonrespondents to different monetary incentive increases (experiment #2) | Construct importance score and offer incentive increase to select nonrespondents; incentive increase determined by results of second calibration experiment |
| 4 | 5/6/2014 | 6/24/2014 | Construct importance score and identify select nonrespondents for field/local exchange calling | Construct importance score and identify select nonrespondents for field/local exchange calling |
| 5 | 7/15/2014 | 9/2/2014 | Construct importance score and identify select nonrespondents for abbreviated interview with mobile access | Construct importance score and identify select nonrespondents for abbreviated interview with mobile access |
| 6 | 8/12/2014 | 9/30/2014 | Abbreviated interview for remaining nonrespondents | Abbreviated interview for remaining nonrespondents |
CATI = computer-assisted telephone interviewing
Impact on Nonresponse Bias. As all BPS:12/14 sample members were subject to the same data collection procedures, there is no exact method to assess the degree to which the responsive design reduced nonresponse bias relative to another data collection design that did not incorporate responsive design elements. However, a post-hoc analysis was implemented to compare estimates of nonresponse bias to determine the impact of the responsive design. Nonresponse bias estimates were first created using all respondents and then created again by reclassifying targeted respondents as nonrespondents. This allows examination of the potential bias contributed by the subset of individuals who were targeted by responsive design methods, although this is not a perfect design, as some of these individuals would have responded without interventions. The following variables were used to conduct the nonresponse bias analysis:2
Region (categorical);
Age as of NPSAS:12 (categorical);
CPS match as of NPSAS:12 (yes/no);
Federal aid receipt (yes/no);
Pell Grant receipt (yes/no);
Pell Grant amount (categorical);
Stafford Loan receipt (yes/no);
Stafford Loan amount (categorical);
Institutional aid receipt (yes/no);
State aid receipt (yes/no);
Major (categorical);
Institution enrollment from IPEDS file (categorical);
Any grant aid receipt (categorical); and
Graduation rate (categorical).
For each variable listed above, nonresponse bias was estimated by comparing estimates from base-weighted respondents with those of the full sample to determine if the differences were statistically significant at the 5 percent level. Multilevel categorical terms were examined using indicator terms for each level of the main term. The relative bias estimates associated with these nonresponse bias analyses are summarized in Table 6.
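For a single estimate, the comparison reduces to the usual percent relative bias formula. A minimal sketch follows; the values and weights are purely illustrative, not study data:

```python
def relative_bias(resp_vals, resp_wts, full_vals, full_wts):
    """Percent relative bias of the base-weighted respondent mean
    against the full-sample mean."""
    y_r = sum(v * w for v, w in zip(resp_vals, resp_wts)) / sum(resp_wts)
    y_f = sum(v * w for v, w in zip(full_vals, full_wts)) / sum(full_wts)
    return 100 * (y_r - y_f) / y_f

# Illustrative: indicator for a yes/no characteristic (e.g. aid receipt).
full = [1, 0, 1, 1, 0, 0]
wts = [1.0] * 6
rb = relative_bias(full[:4], wts[:4], full, wts)  # first 4 "responded"
# rb -> 50.0 (respondent mean 0.75 vs. full-sample mean 0.50)
```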
The mean and median percent relative bias are almost universally lowest across all sectors when all respondents are used in the bias assessment. The overall percentage of characteristics with significant bias is lowest when all respondents are used, but that percentage is lowest in seven of the ten sectors when responsive design respondents are excluded. However, the percentage of characteristics with significant bias is affected by sample size: with approximately 5,200 respondents ever selected under the responsive design, the power to detect a bias that is statistically different from zero is higher when using all respondents than when using a smaller subset of those respondents. Consequently, the mean and median percent relative bias are better gauges of how the addition of selected responsive design respondents affects nonresponse bias.
Given that some of the 5,200 selected respondents would have responded even if they had never been subject to responsive design, it is impossible to attribute the observed bias reduction solely to the application of responsive design methods. However, observed reduction of bias is generally quite large and suggests that responsive design methods may be helpful in reducing nonresponse bias.
Table 6. Summary of responsive design impact on nonresponse bias, by institutional sector: 2014
1 Relative bias and significance calculated on respondents vs. full sample.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2012/14 Beginning Postsecondary Students Longitudinal Study (BPS:12/14).
The responsive design methods proposed for BPS:12/17 expand and improve upon the BPS:12/14 methods in three key aspects:
Refined targeting of nonresponding sample members so that, instead of attempting to reduce unit nonresponse bias for national estimates only, as in BPS:12/14, the impact of unit nonresponse on the bias is reduced for estimates within institutional sector.
Addition of a special data collection protocol for a hard-to-convert group: NPSAS:12 study member double-interview nonrespondents.
Inclusion of a randomized evaluation designed to permit estimating the difference between unit nonresponse bias arising from application of the proposed responsive design methods and unit nonresponse bias arising from not applying the responsive design methods.
As noted previously, the responsive design approach for the BPS:12/14 full scale included (1) use of an incentive calibration study sample to identify optimal monetary incentives, (2) development of an importance measure for identifying nonrespondents for specific interventions, and (3) implementation of a multi-phase data collection period. Analysis of the BPS:12/14 case targeting indicated that institution sector dominated the construction of the importance scores, meaning that nonrespondents were primarily selected by identifying nonrespondents in the sectors with the lowest response rates. For the BPS:12/17 full scale, we are building upon the BPS:12/14 full scale responsive design but, rather than selecting nonrespondents using the same approach as in BPS:12/14, we propose targeting nonrespondents within:
Institution Sector – we will model and target cases within sector groups in an effort to equalize response rates across sectors.
NPSAS:12 study member double interview nonrespondents – we will use a calibration sample to evaluate two special data collection protocols for this hard-to-convert group, including a special baseline protocol determined by a calibration sample and an accelerated timeline.
We have designed an evaluation of the responsive design so that we can test the impact of the targeted interventions to reduce nonresponse bias versus not targeting for interventions. For the evaluation, we will select a random subset of all sample members to be pulled aside as a control sample that will not be eligible for intervention targeting. The remaining sample member cases will be referred to as the treatment sample and the targeting methods will be applied to that group.
In the following sections, we will describe the proposed importance measure, sector grouping, and intervention targeting, then describe the approach for the pre-paid and double nonrespondent calibration experiments, and outline how these will be implemented and evaluated in the BPS:12/17 full scale data collection.
The importance measure. In order to reduce nonresponse bias in survey variables by directing effort and resources during data collection, and to minimize the cost associated with achieving this goal, three related conditions have to be met: (1) the targeted cases must be drawn from groups that are under-represented on key survey variable values among those who already responded, (2) their likelihood of participation should not be excessively low or high (i.e., targeted cases who do not respond cannot decrease bias; targeting only high propensity cases can potentially increase the bias of estimates), and (3) targeted cases should be numerous enough to impact survey estimates within domains of interest. While targeting cases based on response propensities may reduce nonresponse bias, bias may be unaffected if the targeted cases are extremely difficult to convert and do not respond to the intervention as desired.
One approach to meeting these conditions is to target cases based on two dimensions: the likelihood of a case to contribute to nonresponse bias if not interviewed, and the likelihood that the case could be converted to a respondent. These dimensions form an importance score, such that:
I = U × P

where I is the calculated importance score, U is a measure of under-representativeness on key variables that reflects the likelihood of inducing bias if a case is not converted, and P is the predicted final response propensity, across sample members and data collection phases with responsive design interventions.
The importance score will be determined by the combination of two models: a response propensity model and a bias-likelihood model. As in BPS:12/14, the response propensity component of the importance score is being calculated in advance of the start of data collection. The representativeness of key variables, however, can only be determined during specific phases of the BPS:12/17 data collection, with terms tailored to BPS:12/17. The importance score calculation needs to balance two distinct scenarios: (1) low propensity cases that will likely never respond, irrespective of their underrepresentation, and (2) high propensity cases that, because they are not underrepresented in the data, are unlikely to reduce bias. Once in production, NCES will provide more information about the distribution of both propensity and representation from the BPS:12/17 calibration study, which will allow us to explore linear and nonlinear functions that balance the potential for nonresponse bias reduction against available incentive resources. We will share the findings with OMB at that time.
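A sketch of the importance-score calculation described above (the numeric values are illustrative only):

```python
def importance_score(a_priori_propensity, bias_likelihood_prob):
    """I = U x P, where U = 1 - predicted bias-likelihood probability
    (under-representativeness) and P = a priori response propensity."""
    u = 1.0 - bias_likelihood_prob
    return a_priori_propensity * u

# A nonrespondent who is fairly likely to respond (P = 0.6) and whose
# group is underrepresented (bias-likelihood probability = 0.2)
# scores high; a well-represented case scores low.
high = importance_score(0.6, 0.2)  # 0.48
low = importance_score(0.6, 0.9)   # 0.06
```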
Bias-likelihood (U) model. A desirable model to identify cases to be targeted for intervention would use covariates (Z) that are strongly related to the survey variables of interest (Y), to identify sample members who are under-represented (using a response indicator, R) with regard to these covariates. We then have the following relationships, using a single Z and Y for illustration:
        Z
       / \
      R   Y
Nonresponse bias arises when there is a relationship between R and Y. Just as in adjustment for nonresponse bias (see Little and Vartivarian, 2005), a Z-variable cannot be effective in nonresponse bias reduction if corr(Z,Y) is weak or nonexistent, even if corr(Z,R) is substantial. That is, selection of Z-variables based only on their correlation with R may not help to identify cases that contribute to nonresponse bias. The goal is to identify sample cases that have Y-variable values that are associated with lower response rates, as this is one of the most direct ways to reduce nonresponse bias in an estimate of a mean.
The key Z-variable selection criterion should then be association with Y. Good candidate Z-variables would be the Y-variables or their proxies measured in a prior wave and any correlates of change in estimates over time. A second set of useful Z-variables would be those used in weighting and those used to define subdomains for analysis – such as demographic variables. This should help to reduce the variance inflation due to weighting and nonresponse bias in comparisons across groups. Key, however, is the exclusion of variables that are highly predictive of R, but quite unrelated to Y. These variables, such as the number of prior contact attempts and prior refusal, can dominate in a model predicting the likelihood of participation and mask the relationship of Z variables that are associated with Y.
Prior to the start of the later phases of data collection, when the treatment interventions will be introduced, we will conduct multiple logistic regressions to predict the survey outcome (R) through the current phase of collection, using only substantive and demographic variables and their correlates from NPSAS:12 and the sampling frame (Z), and select two-way interactions. For each sector grouping (see table 8 below), a single model will be fit. The goal of this model is not to maximize the ability to predict survey response (p̂), but to obtain a predicted likelihood that a case, if successfully interviewed, would reduce nonresponse bias. Because of this key difference, we use (1 – p̂) to calculate a case-level prediction representing bias-likelihood, rather than response propensity.
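To make the (1 – p̂) construction concrete, the sketch below scores one hypothetical group A nonrespondent using a few of the group A bias-likelihood coefficients reported in Attachment B (intercept –0.3845; GENDER = 2, +0.3571; totaid4 = 0, –0.4885; tfedaid4 = 0, +0.4373), holding all other covariates at their reference levels. The scoring function and the example case are illustrative only, not the production implementation.

```python
import math

def logistic(eta):
    """Inverse logit: converts a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-eta))

# Selected group A coefficients from Attachment B (as of 8/14/17);
# all other model terms are held at their reference levels here.
INTERCEPT = -0.3845
COEFS = {
    ("GENDER", 2): 0.3571,
    ("totaid4", 0): -0.4885,   # no total aid
    ("tfedaid4", 0): 0.4373,   # no federal aid
}

def bias_likelihood(case_terms):
    """Bias-likelihood = 1 - p-hat: the predicted likelihood that this
    case, if successfully interviewed, would reduce nonresponse bias."""
    eta = INTERCEPT + sum(COEFS.get(term, 0.0) for term in case_terms)
    return 1.0 - logistic(eta)

# Hypothetical nonrespondent: GENDER = 2, no total aid, no federal aid.
u = bias_likelihood([("GENDER", 2), ("totaid4", 0), ("tfedaid4", 0)])
```

Higher values of `u` flag cases whose conversion is predicted to do more to reduce nonresponse bias.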
Variables to be used in the bias-likelihood model will come from base-year survey responses, institution characteristics, and sampling frame information3 (see table 7). It is important to note that paradata, particularly those variables that are highly predictive of response, but quite unrelated to the survey variables of interest, will be excluded from the bias-likelihood model.
Table 7. Candidate variables for the bias likelihood model
Race | Attendance intensity
Gender | Highest level of education ever expected
Age | Dependent children and marital status
Sector* | Federal Pell grant amount
Match to Central Processing System | Direct subsidized and unsubsidized loans
Match to Pell grant system | Total federal aid
Total income | Institutional aid total
Parent's highest education level | Degree program
* Variable to be included in bias likelihood model for targeting sample members from public 4-year and private nonprofit institutions (sector group B in table 8).
Response propensity (P(R)) model. Prior to the start of BPS:12/17 data collection, a response propensity model is being developed to predict likelihood to respond to BPS:12/17 based on BPS:12/14 data and response behavior. NCES will share the model with OMB when finalized and prior to implementation. The model will use variables from the base NPSAS:12 study as well as BPS:12/14 full scale that have been shown to predict survey response, including, but not limited to:
responded during early completion period,
response history,
interview mode (web/telephone),
ever refused,
incentive amount offered,
age,
gender,
citizenship,
institution sector,
call count, and
tracing/locating status (located/required intensive tracing).
We will use BPS:12/14 full scale data to create this response propensity model as that study was similar in design and population to the current BPS:12/17 full scale study (note that BPS:12/17 did not have a field test that could be leveraged, and the pilot study was too limited in size and dissimilar in approach and population to be useful for this purpose).
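As an illustration of how such a fitted model produces a propensity score, the sketch below applies a subset of the a priori propensity model coefficients reported in Attachment B (intercept 1.6798; AGE, –0.0168 per year; GENDER = 2, +0.1411; early_comp = 1, +0.5189; inc_amount, –0.0128 per dollar) to one hypothetical sample member, with all remaining terms at reference levels. The case and the subset of terms are illustrative, not a reproduction of the production scoring code.

```python
import math

def propensity(age, gender2, early_comp, inc_amount):
    """Predicted response propensity from a subset of the a priori model
    coefficients (Attachment B); other covariates held at reference levels."""
    eta = (1.6798                  # intercept
           - 0.0168 * age          # AGE (continuous, per year)
           + 0.1411 * gender2      # GENDER = 2 indicator
           + 0.5189 * early_comp   # responded during early completion period
           - 0.0128 * inc_amount)  # incentive amount offered ($)
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical sample member: age 19, GENDER = 2, early completer,
# offered the $30 baseline incentive.
p = propensity(age=19, gender2=1, early_comp=1, inc_amount=30)
```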
Targeted interventions. In the BPS:12/14 responsive design approach, institution sector was the largest factor in determining current response status. For BPS:12/17 full scale, individuals will be targeted within groupings of institution sectors in an effort to equalize response rates across the sector groups. Designed to reduce the final unequal weighting effect, targeting within the groups will allow us to fit a different propensity or bias likelihood model for each group while equalizing response rates across groups.
Targeting within sector groups is designed to reduce nonresponse bias within specific sectors rather than across the aggregate target population. The five sector groupings (Table 8) were constructed by first identifying sectors with historically low response rates, as observed in BPS:12/14 and NPSAS:12, and, second, assigning the sectors with the lowest participation to their own groups. The remaining sectors were then combined into groups consisting of multiple sectors. The private for profit sectors (groups C, D, and E) were identified to have low response rates. Public less-than-2-year and public 2-year institutions (group A) were combined as they were similar, and because the public less-than-2-year sector was too small to act as a distinct group. Public 4-year and private nonprofit institutions (sector group B) remained combined as they have not historically exhibited low response rates (nonetheless, cases within this sector group are still eligible for targeting; the targeting model for sector group B will include sector as a term to account for differences between the sectors).
Table 8. Targeted sector groups
Sector Group | Sectors | Sample Count
A | 1: Public less-than-2-year | 205
A | 2: Public 2-year | 10,142
B | 3: Public 4-year non-doctorate-granting | 1,829
B | 4: Public 4-year doctorate-granting | 3,398
B | 5: Private nonprofit less than 4-year | 334
B | 6: Private nonprofit 4-year nondoctorate | 2,283
B | 7: Private nonprofit 4-year doctorate-granting | 2,602
C | 8: Private for profit less-than-2-year | 1,463
D | 9: Private for profit 2-year | 3,132
E | 10: Private for profit 4-year | 8,340
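The sector groupings in table 8 could be encoded as a simple lookup for fitting a separate model per group; the sketch below is illustrative (the data structure and field names are assumptions, not project code).

```python
# Sector-to-group assignments from table 8. Group B additionally keeps
# sector as a model term to account for differences within the group.
SECTOR_GROUP = {
    1: "A", 2: "A",                          # public <2-year, public 2-year
    3: "B", 4: "B", 5: "B", 6: "B", 7: "B",  # public 4-year, private nonprofit
    8: "C",                                  # private for-profit <2-year
    9: "D",                                  # private for-profit 2-year
    10: "E",                                 # private for-profit 4-year
}

def group_cases(cases):
    """Partition cases (dicts with a 'sector' key) by targeting group,
    so a separate bias-likelihood model can be fit for each group."""
    groups = {}
    for case in cases:
        groups.setdefault(SECTOR_GROUP[case["sector"]], []).append(case)
    return groups

# Hypothetical mini-sample:
sample = [{"id": 1, "sector": 2}, {"id": 2, "sector": 8}, {"id": 3, "sector": 4}]
by_group = group_cases(sample)
```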
All NPSAS:12 study members who responded to the NPSAS:12 or BPS:12/14 student interviews (hereafter called previous respondents) will be initially offered a $30 incentive, determined to be an optimal baseline incentive offer during the BPS:12/14 Phase 1 experiment with the calibration sample. Following the $30 baseline offer, two different targeted interventions will be utilized for the BPS:12/17 responsive design approach:
First Intervention (Incentive Boost): Targeted cases will be offered an additional $45 over an individual’s baseline incentive amount. The $45 amount is based on the amount identified as optimal during Phase 3 of the BPS:12/14 calibration experiment.
Second Intervention (Abbreviated Interview): Targeted cases will be offered an abbreviated interview at 21 weeks (note that all cases will be offered an abbreviated interview at 31 weeks).
Before each targeted intervention, predicted bias-likelihood values and composite propensity scores will be calculated for all interview nonrespondents. The product of the bias-likelihood and response propensity will be used to calculate the target importance score described above. Cases with propensity scores above a high cutoff or below a low cutoff, determined by a review of the predicted distribution, will be excluded as potential targets during data collection4.
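A sketch of how this targeting selection might be computed; the cutoff values and the case-level inputs below are illustrative placeholders (the actual cutoffs will be set by reviewing the predicted distributions):

```python
def importance_scores(cases, low_cut=0.05, high_cut=0.95):
    """Importance score = bias-likelihood x response propensity.
    Cases with propensity below low_cut (likely never to respond) or above
    high_cut (likely to respond without intervention) are excluded.
    Cutoffs here are illustrative, not the project's chosen values."""
    scored = []
    for case_id, propensity, bias_likelihood in cases:
        if low_cut <= propensity <= high_cut:
            scored.append((case_id, bias_likelihood * propensity))
    # Highest importance first: these cases are targeted for intervention.
    return sorted(scored, key=lambda t: t[1], reverse=True)

nonrespondents = [  # (id, propensity, bias-likelihood) -- hypothetical values
    ("a", 0.02, 0.90),  # excluded: will likely never respond
    ("b", 0.98, 0.80),  # excluded: likely to respond without intervention
    ("c", 0.40, 0.70),  # importance 0.28
    ("d", 0.60, 0.30),  # importance 0.18
]
targets = importance_scores(nonrespondents)
```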
Pre-paid calibration experiment. It is widely accepted that survey response rates have been in decline in the last decade. Incentives, and in particular prepaid incentives, can often help maximize participation. BPS will test a prepaid incentive, delivered electronically in the form of a PayPal5 payment, to selected sample members. Prior to the start of full-scale data collection, 2,970 members of the previous respondent main sample will be identified to participate in a calibration study to evaluate the effectiveness of the pre-paid PayPal offer. At the conclusion of this randomized calibration study, NCES will meet with OMB to discuss the results of the experiment and to seek OMB approval through a change request for the pre-paid offer for the remaining nonrespondent sample. Half of the calibration sample will receive a $10 pre-paid PayPal amount and an offer to receive another $20 upon completion of the survey ($30 total). The other half will receive an offer for $30 upon completion of the survey with no pre-paid amount. At six weeks the response rates for the two approaches will be compared to determine if the pre-paid offer should be extended to the main sample. For all monetary incentives, including prepayments, sample members have the option of receiving disbursements through PayPal or in the form of a check.
During the calibration phase in March 2017, PayPal compliance notified RTI that three sample members designated to be given a pre-paid incentive were flagged as persons possibly sanctioned by U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC). To comply with OFAC sanctions and to ensure the BPS:12/17 PayPal account remained in good standing, RTI began implementing methods to identify sample members who may match to those listed on OFAC’s Specially Designated Nationals and Blocked Persons (SDN) list. Programmatic matching, using methods recommended in the OFAC’s Web-based Sanction List Search tool, was performed on the entire BPS:12/17 fielded sample (n=33,750). This matching process resulted in 345 potential matches between the BPS:12/17 sample and the SDN list. The 345 cases were manually reviewed against additional sources of information. Of these cases, 315 individuals were ruled out as matches to individuals on the OFAC SDN list. The remaining 30 cases in the main and calibration samples could not be ruled out as sanctioned individuals. To comply with OFAC requirements and to avoid compliance issues with PayPal, the 315 individuals will be offered an incentive by check only. The remaining 30 cases will not be fielded in BPS:12/17 and will be excluded from the survey. Of those excluded, 3 cases were in the calibration sample, and 27 cases were in the main sample.
After the calibration experiment detailed above, the calibration sample will join with the main sample to continue data collection efforts. These are described in detail below, summarized in table 9, and their timeline is shown graphically in figure 1.
Table 9. Timeline for previous respondents
Phase | Start date: calibration sample | Start date: main sample | Activity: calibration sample | Activity: main sample
PR-1 | Week 0 | Week 7 | Begin data collection; calibration sample for $10 pre-paid offer versus no pre-paid offer | Begin data collection; make decision on implementation of pre-paid offer based on results of calibration
PR-2 | Week 14 | Week 14 | Target treatment cases for incentive boost | Target treatment cases for incentive boost
PR-3 | Week 21 | Week 21 | Target treatment cases for early abbreviated interview | Target treatment cases for early abbreviated interview
PR-4 | Week 31 | Week 31 | Abbreviated interview for all remaining nonrespondents | Abbreviated interview for all remaining nonrespondents
Special data collection protocol for double nonrespondents. Approximately 3,280 sample members (group 4 in table 2) had sufficient information in NPSAS:12 to be classified as NPSAS:12 study members but responded to neither the NPSAS:12 nor the BPS:12/14 student interview (henceforth referred to as double nonrespondents). In planning for the BPS:12/17 collection, we investigated characteristics known about this group, such as their distribution across sectors, our ability to locate them in prior rounds, and their estimated attendance and course-taking patterns using PETS:09. We found that while this group constitutes approximately 10 percent of the sample, 58 percent of double nonrespondents were enrolled within the private for-profit sectors in NPSAS:12. We found that over three-quarters of double nonrespondents had been contacted, yet had not responded. We also found, using a proxy from the BPS:04 cohort, that double nonrespondents differed by several characteristics of prime interest to BPS, such as postsecondary enrollment and course-taking patterns. We concluded that double nonrespondents could contribute to nonresponse bias, particularly in the private for-profit sector.
While we were able to locate approximately three-quarters of these double nonrespondents in prior data collections, we do not know their reasons for refusing to participate. Without knowing the reasons for refusal, the optimal incentives are difficult to determine. In BPS:12/14, because the design of the importance score excluded the lowest-propensity cases, the nonrespondents who were most difficult to convert were not included in intervention targeting. As a result, very few of the double nonrespondents were ever exposed to incentive boosts or early abbreviated interviews in an attempt to convert them. In fact, after examining BPS:12/14 data, we found that less than 0.1 percent were offered more than $50 and only 3.6 percent were offered more than $30. Similarly, we do not know if a shortened abbreviated interview would improve response rates for this group. Therefore, we propose a calibration sample with an experimental design that evaluates the efficacy of an additional incentive versus a shorter interview. The results of the experiment will inform the approach for the main sample.
Specifically, we propose fielding a calibration sample, consisting of 869 double nonrespondents, seven weeks ahead of the main sample to evaluate the two special data collection protocols for this hard-to-convert group: a shortened interview versus a monetary incentive. A randomly selected half of the calibration sample will be offered an abbreviated interview along with a $10 pre-paid PayPal amount and an offer to receive another $20 upon completion of the survey ($30 total). The other half will be offered the full interview along with a $10 pre-paid PayPal amount6 and an offer to receive another $65 upon completion of the survey ($75 total). At six weeks, the two approaches will be compared using a Pearson chi-square test to determine which results in the higher response rate from this hard-to-convert population and should be proposed for the main sample of double nonrespondents. If both perform equally, we will select the $30 total baseline along with the abbreviated interview. Regardless of the selected protocol, at 14 weeks into data collection, all remaining nonrespondents in the double nonrespondent population will be offered the maximum special protocol intervention, consisting of an abbreviated interview and $65 upon completion of the interview, for a total of $75 with the $10 pre-paid offer. In addition, at a later phase of data collection, we will move this group to a passive status by discontinuing CATI operations and relying on email contacts. The timeline for double nonrespondents is summarized in table 10 and figure 1.
Table 10. Timeline for NPSAS:12 study member double interview nonrespondents
Phase | Start date: calibration sample | Start date: main sample | Activity: calibration sample | Activity: main sample
DNR-1 | Week 0 | Week 7 | Begin data collection; calibration sample for baseline special protocol (full interview and $75 total vs. abbreviated interview and $30 total) | Begin data collection; baseline special protocol determined by calibration results (full interview and $75 total vs. abbreviated interview and $30 total)
DNR-2 | Week 14 | Week 14 | Offer all remaining double nonrespondents $75 incentive and abbreviated interview | Offer all remaining double nonrespondents $75 incentive and abbreviated interview
DNR-3 | Week TBD | Week TBD | Move to passive data collection efforts for all remaining nonrespondents; timing determined based on sample monitoring | Move to passive data collection efforts for all remaining nonrespondents; timing determined based on sample monitoring
Figure 1. Data collection timeline
Evaluation of the BPS:12/17 Responsive Design Effort. The analysis plan is based upon two premises: (1) offering special interventions to targeted sample members will increase participation in the aggregate for those sample members, and (2) increasing participation among the targeted sample members will produce estimates with lower bias than if no targeting were implemented. In an effort to maximize the utility of this research, the analysis of the responsive design and its implementation will be described in a technical report that includes these two topics and their related hypotheses described below. We intend to examine these aspects of the BPS:12/17 responsive design and its implementation as follows:
Evaluate the effectiveness of the calibration samples in identifying optimal intervention approaches to increase participation.
A key component of the BPS:12/17 responsive design is the effectiveness of the changes in survey protocol for increasing participation. The two calibration experiments examine the impact of proposed features – a pre-paid PayPal offer for previous respondents and two special protocols for double nonrespondents.
Evaluation of the experiments with calibration samples will occur during data collection so that findings can be implemented in the main sample data collection. Approximately six weeks after the start of data collection for the calibration sample, response rates for the calibration pre-paid offer group versus the no pre-paid offer group for previous respondents will be compared using a Pearson chi-square test. Similarly, the double nonrespondent group receiving the abbreviated interview plus the $30 total offer will be compared to the group receiving the full interview plus the $75 total offer.
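Each six-week comparison amounts to a Pearson chi-square test on a 2x2 table of experimental arm by response status. A self-contained sketch of that test follows; the counts are hypothetical, not projections of expected response.

```python
import math

def pearson_chi2_2x2(resp_a, n_a, resp_b, n_b):
    """Pearson chi-square test (1 df) for a difference between two
    response rates; returns (statistic, two-sided p-value)."""
    a, b = resp_a, n_a - resp_a   # arm A: respondents, nonrespondents
    c, d = resp_b, n_b - resp_b   # arm B: respondents, nonrespondents
    n = n_a + n_b
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # A chi-square variate with 1 df is the square of a standard normal,
    # so P(X > x) = erfc(sqrt(x / 2)).
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# Hypothetical six-week counts for the two pre-paid calibration arms:
stat, p = pearson_chi2_2x2(resp_a=580, n_a=1485, resp_b=520, n_b=1485)
significant = p < 0.05  # would favor extending the pre-paid offer
```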
Evaluate the sector group level models used to target cases for special interventions.
To maximize the effectiveness of the BPS:12/17 responsive design approach, targeted cases need to be associated with survey responses that are underrepresented among the respondents, and the targeted groups need to be large enough to change observed estimates. In addition to assessing model fit metrics and the effective identification of cases contributing to nonresponse bias for each of the models used in the importance score calculation, the distributions of the targeted cases will be reviewed for key variables, overall and within sector, prior to identifying final targeted cases. Again, these key variables include base-year survey responses, institution characteristics, and sampling frame information as shown in table 7. During data collection, these reviews will help ensure that the cases most likely to decrease bias are targeted and that project resources are used efficiently. After data collection, similar summaries will be used to describe the composition of the targeted cases along dimensions of interest.
The importance score used to select targeted cases will be calculated based on both the nonresponse bias potential and on an a priori response propensity score. To evaluate how well the response propensity measure predicted actual response, we will compare the predicted response rates to observed response rates at the conclusion of data collection. These comparisons will be made at the sector group level as well as in aggregate.
Evaluate the ability of the targeted interventions to reduce unit nonresponse bias through increased participation.
To test the impact of the targeted interventions on nonresponse bias relative to not targeting, we require a set of similar cases that are held aside from the targeting process. A random subset of all sample members will be set aside as a control sample that is not eligible for intervention targeting. The remaining cases will be referred to as the treatment sample, and the targeting methods will be applied to that group. Sample members will be randomly assigned to the control group within each of the five sector groups. In all, the control group will be composed of 3,606 individuals (approximately 721 per sector group), who form nearly 11 percent of the total fielded sample.
For evaluation purposes, the targeted interventions will be the only difference between the control and treatment samples. Therefore both the control and treatment samples will consist of previous round respondents and double nonrespondents, and they will both be involved in the calibration samples and will both follow the same data collection timelines.
The frame, administrative, and prior-round data used in determining cases to target for unit nonresponse bias reduction can, in turn, be used to evaluate (1) unit nonresponse bias in the final estimates and (2) changes in unit nonresponse bias over the course of data collection. Unweighted and weighted (using design weights) estimates of absolute nonresponse bias will be computed for each variable used in the models:
bias(ȳ_R) = | ȳ_R − ȳ_n |, where ȳ_R is the respondent mean and ȳ_n is the full sample mean. Bias estimates will be calculated separately for treatment and control groups and statistically compared under the hypothesis that the treatment interventions yield estimates with lower bias.
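The bias computation for a single model variable can be sketched as follows; the data are a toy example, and the design weights are illustrative.

```python
def abs_nonresponse_bias(values, responded, weights=None):
    """Absolute nonresponse bias |ybar_R - ybar_n|: the (optionally
    design-weighted) respondent mean minus the full-sample mean."""
    if weights is None:
        weights = [1.0] * len(values)
    mean_all = sum(w * y for w, y in zip(weights, values)) / sum(weights)
    wsum_r = sum(w for w, r in zip(weights, responded) if r)
    mean_r = sum(w * y for w, y, r in zip(weights, values, responded) if r) / wsum_r
    return abs(mean_r - mean_all)

# Toy example: a 0/1 model variable, with nonresponse concentrated
# among cases where the variable equals 1.
y = [1, 1, 1, 0, 0, 0]
r = [True, False, False, True, True, True]
bias = abs_nonresponse_bias(y, r)           # unweighted estimate
```

The same function, called with design weights, gives the weighted estimate described above.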
BPS:12/17 Responsive Design Research Questions. With the assumption that increasing the rate of response among targeted, underrepresented cases will reduce nonresponse bias, the BPS:12/17 responsive design experiment will explore the following research questions which may be stated in terms of a null hypothesis as follows:
Research question 1: Did the a priori response propensity model predict overall unweighted BPS:12/17 response?
H0: At the end of data collection, there will be no association between a priori propensity predictions and observed response rates.
Research question 2: Does one special protocol increase response rates for double nonrespondents versus the other protocol?
H0: At the end of the double nonrespondent calibration sample, there will be no difference in response rates between a $75 baseline offer along with a full interview and a $30 baseline offer along with an abbreviated interview.
Research question 3: Does a $10 pre-paid PayPal offer increase early response rates?
H0: At the end of the previous respondent calibration sample, there will be no difference in response rates between cases that receive a $10 pre-paid PayPal offer and those that do not.
Research question 4: Are targeted respondents different from non-targeted respondents on key variables?
H0: Immediately before each of the two targeted interventions, and at the end of data collection, there will be no difference between targeted respondents and non-targeted (never-targeted) respondents in weighted or unweighted estimates of key variables not included in the importance score calculation.
Research question 5: Did targeted cases respond at higher rates than non-targeted cases?
H0: At the end of the targeted interventions, and at the end of data collection, there will be no difference in weighted or unweighted response rates between the treatment sample and the control sample.
Research question 6: Did conversion of targeted cases reduce unit nonresponse bias?
H0: At the end of data collection, there will be no difference in absolute nonresponse bias of key estimates between the treatment and control samples.
Power calculations. The first step in the power analysis was to determine the number of sample members to allocate to the control and treatment groups. For each of the five institution sector groups, roughly 721 sample members will be randomly selected into the control group that will not be exposed to any targeted interventions. The remaining sample within each sector group will be assigned to the treatment group. We will then compare absolute measures of bias between the treatment and control groups under the hypothesis that the treatments, that is, the targeted interventions, reduce absolute bias. As we will be comparing absolute bias estimates, which range between zero and one, a power analysis was conducted using a one-sided, two-group chi-square test of equal proportions with unequal sample sizes in each group. The absolute bias estimates will be weighted and statistical comparisons will take into account the underlying BPS:12/17 sampling design; therefore, the power analysis assumes a relatively conservative design effect of 3. Table 11 shows the resulting power based on different assumptions for the base absolute bias estimates.
Table 11. Power for control versus treatment comparisons across multiple assumptions
Power columns reflect three assumption sets, each with alpha = 0.05:
Scenario 1: Treatment Abs. Bias 0.4, Control Abs. Bias 0.5
Scenario 2: Treatment Abs. Bias 0.2, Control Abs. Bias 0.3
Scenario 3: Treatment Abs. Bias 0.125, Control Abs. Bias 0.2

Sector Group | Sectors | Total Count | Control Sample | Treatment Sample | Power (Scenario 1) | Power (Scenario 2) | Power (Scenario 3)
A | 1: Public less-than-2-year; 2: Public 2-year | 10,345 | 721 | 9,624 | 0.915 | 0.966 | 0.924
B | 3: Public 4-year non-doctorate-granting; 4: Public 4-year doctorate-granting; 5: Private nonprofit less than 4-year; 6: Private nonprofit 4-year nondoctorate; 7: Private nonprofit 4-year doctorate-granting | 10,445 | 721 | 9,724 | 0.916 | 0.966 | 0.924
C | 8: Private for-profit less-than-2-year | 1,463 | 722 | 741 | 0.718 | 0.819 | 0.727
D | 9: Private for-profit 2-year | 3,132 | 721 | 2,411 | 0.864 | 0.935 | 0.876
E | 10: Private for-profit 4-year | 8,340 | 721 | 7,619 | 0.911 | 0.963 | 0.920
NOTE: After sampling for BPS:12/17, three cases were determined to be deceased, and have been removed from the power calculations. In addition, this table does not include 30 cases that were excluded based on matching to the OFAC SDN list.
The final three columns of table 11 show how the power estimates vary depending upon the assumed values of the underlying absolute bias measures; the first of these columns shows the worst-case scenario, in which the control absolute bias is 50 percent. The overall control sample size is driven by sector group C, which has the smallest available sample, and for some bias domains we may need to combine sectors C and D for analysis purposes. Given the sensitivity of power estimates to assumptions regarding the underlying treatment effect, there appears to be sufficient power to support the proposed experiment across a wide range of possible scenarios.
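The power figures in tables 11 and 12 can be approximated with a normal-approximation test of two proportions (the large-sample equivalent of the chi-square test of equal proportions). The sketch below reproduces the sector group C, scenario-one value (0.718) using a one-sided test with the assumed design effect of 3, and the table 12 baseline row (80 percent power at a 5 percent assumed response rate) using a two-sided test with no design effect. This is a reconstruction of the stated approach, not the project's actual code.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n1, n2, z_alpha, deff=1.0):
    """Normal-approximation power for detecting a difference between two
    proportions. Sample sizes are deflated by the assumed design effect;
    z_alpha is the critical value (1.6449 one-sided, 1.9600 two-sided,
    at alpha = 0.05)."""
    n1e, n2e = n1 / deff, n2 / deff
    pbar = (n1e * p1 + n2e * p2) / (n1e + n2e)
    se_null = math.sqrt(pbar * (1 - pbar) * (1 / n1e + 1 / n2e))
    se_alt = math.sqrt(p1 * (1 - p1) / n1e + p2 * (1 - p2) / n2e)
    return norm_cdf((abs(p1 - p2) - z_alpha * se_null) / se_alt)

# Table 11, sector group C, scenario 1: control bias 0.5 vs. treatment 0.4,
# one-sided test, design effect of 3.
pw_c = power_two_proportions(0.5, 0.4, n1=722, n2=741, z_alpha=1.6449, deff=3.0)

# Table 12 baseline row: 5% vs. 10% response, 435 per arm, two-sided test.
pw_12 = power_two_proportions(0.05, 0.10, n1=435, n2=435, z_alpha=1.9600)
```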
After the assignment of sample members to treatment and control groups, we will construct the two calibration samples: 1) previous respondents and 2) double nonrespondents. The calibration sample of previous respondents (n=2,970) will be randomly split into two groups, with 1,486 sample members in the treatment group and 1,484 in the control group7. One group will receive a $10 pre-paid offer while the other will not receive the pre-paid offer. For a power of 0.80, a confidence level of 95 percent, and given the sample within each condition, the experiment of pre-paid amounts should detect a 5.0 percentage point difference in response rate using Pearson’s chi-square8. This power calculation assumes a two-sided test of proportions as we are uncertain of the effect of offering the pre-paid incentive. In addition, for the power calculation, an initial baseline response rate of 36.0 percent was selected given that this was the observed response rate after six weeks of data collection for the BPS:12/14 full scale study.
Similarly, we will randomly split the calibration sample of double nonrespondents (n=869)9 into two groups: one group (n=435) will receive a $30 baseline offer with an abbreviated interview, while the other group (n=434) will be offered the full interview and a total of $75. Using a power of 0.80, a confidence level of 95 percent, and the sample available within each condition, the experiment among double nonrespondents should detect a 5.0 percentage point difference in response rate using Pearson's chi-square. This power calculation assumes a two-sided test of proportions, as we have no prior evidence about which special protocol will perform better with respect to response rate. For a test between two proportions, the power to detect a difference depends on the assumed baseline response rate (here, the six-week response rate of the approach with the lower response rate), which we assumed to be 5 percent for the power calculations. If the observed six-week response rate differs from 5 percent, the power to detect a 5 percentage point difference in response rates would fluctuate as shown in table 12 below.
Table 12. Power to detect 5 percent difference in response rate based on different baseline response rates
Alpha = 0.05; sample size per group = 435; detectable difference = 5 percentage points

Assumed Response Rate | Power to Detect Difference
1% | 98%
5% | 80%
10% | 61%
Attachment B. A priori propensity and bias-likelihood model variables
A priori propensity model variables
Analysis of Maximum Likelihood Estimates

Parameter | Level | DF | Estimate | Standard Error | Wald Chi-Square | Pr > ChiSq
Intercept | | 1 | 1.6798 | 0.0822 | 418.0648 | <.0001
AGE | | 1 | -0.0168 | 0.00210 | 64.5708 | <.0001
GENDER | 2 | 1 | 0.1411 | 0.0290 | 23.7028 | <.0001
citizen | 2 | 1 | 0.1890 | 0.0882 | 4.5928 | 0.0321
citizen | 3 | 1 | -1.0143 | 0.1245 | 66.4115 | <.0001
citizen | -9 | 1 | -0.4604 | 0.0856 | 28.9074 | <.0001
emailcount | | 1 | 0.0926 | 0.0234 | 15.6244 | <.0001
early_comp | 1 | 1 | 0.5189 | 0.0321 | 261.1316 | <.0001
ever_any_ref | 1 | 1 | -0.2182 | 0.0496 | 19.3658 | <.0001
SECTOR10 | 1 | 1 | 0.0792 | 0.1674 | 0.2237 | 0.6362
SECTOR10 | 3 | 1 | -0.0405 | 0.0691 | 0.3437 | 0.5577
SECTOR10 | 4 | 1 | 0.1681 | 0.0593 | 8.0302 | 0.0046
SECTOR10 | 5 | 1 | -0.2284 | 0.1363 | 2.8074 | 0.0938
SECTOR10 | 6 | 1 | 0.0930 | 0.0667 | 1.9457 | 0.1631
SECTOR10 | 7 | 1 | 0.3833 | 0.0691 | 30.7782 | <.0001
SECTOR10 | 8 | 1 | -0.4017 | 0.0695 | 33.3545 | <.0001
SECTOR10 | 9 | 1 | 0.0431 | 0.0520 | 0.6859 | 0.4076
SECTOR10 | 10 | 1 | 0.00710 | 0.0382 | 0.0345 | 0.8526
NPSAS12RESP | 0 | 1 | -2.4365 | 0.0605 | 1623.6972 | <.0001
addr_update | 1 | 1 | 1.6727 | 0.1045 | 256.2360 | <.0001
inc_amount | | 1 | -0.0128 | 0.000814 | 245.9575 | <.0001
leaf2 | 1 | 1 | 0.5232 | 0.1144 | 20.9059 | <.0001
leaf3 | 1 | 1 | -0.6254 | 0.0793 | 62.2445 | <.0001
1 - Leaf2 = interaction of NPSAS12RESP (0 = nonrespondent), Age (<20), and Sector10 (4, 6, 7)
2 - Leaf3 = interaction of NPSAS12RESP (0 = nonrespondent), Age (>=20), and addr_update (0 = not updated)
Bias-likelihood model variables, institution groups A-E
Bias-likelihood maximum likelihood model variables: institution group A (as of 8/14/17)
Parameter | Level | Estimate | Standard Error | Wald Chi-Square | Pr > ChiSq
Intercept | | -0.3845 | 0.1006 | 14.6084 | 0.0001
RACE | 2 | 0.0693 | 0.0625 | 1.2297 | 0.2675
RACE | 3 | 0.0549 | 0.0593 | 0.8598 | 0.3538
RACE | 4 | 0.1198 | 0.1089 | 1.2120 | 0.2709
RACE | 5 | -0.3037 | 0.2329 | 1.7001 | 0.1923
RACE | 6 | -0.2023 | 0.2972 | 0.4634 | 0.4960
RACE | 7 | 0.0358 | 0.1154 | 0.0964 | 0.7562
GENDER | 2 | 0.3571 | 0.0431 | 68.7732 | <.0001
AGE3 | 2 | -0.0220 | 0.0584 | 0.1423 | 0.7060
AGE3 | 3 | 0.0785 | 0.0875 | 0.8051 | 0.3696
INCPS_n | 0 | -0.1489 | 0.0648 | 5.2885 | 0.0215
PELLMATCH_n | 0 | -0.1145 | 0.1100 | 1.0826 | 0.2981
CINCOME4 | 2 | 0.00716 | 0.0593 | 0.0146 | 0.9039
CINCOME4 | 3 | 0.1143 | 0.0639 | 3.2029 | 0.0735
CINCOME4 | 4 | 0.1585 | 0.0831 | 3.6367 | 0.0565
PAREDUC | 0 | -0.00347 | 0.1050 | 0.0011 | 0.9736
PAREDUC | 1 | 0.1085 | 0.0785 | 1.9116 | 0.1668
PAREDUC | 3 | 0.2172 | 0.1026 | 4.4814 | 0.0343
PAREDUC | 4 | 0.0825 | 0.0797 | 1.0720 | 0.3005
PAREDUC | 5 | 0.1832 | 0.0642 | 8.1366 | 0.0043
PAREDUC | 6 | 0.0615 | 0.0682 | 0.8136 | 0.3670
PAREDUC | 7 | 0.0534 | 0.0908 | 0.3459 | 0.5564
PAREDUC | 8 | 0.2536 | 0.2069 | 1.5022 | 0.2203
PAREDUC | 9 | 0.2299 | 0.2339 | 0.9660 | 0.3257
ATTNPTRN | 2 | -0.00783 | 0.0531 | 0.0218 | 0.8826
ATTNPTRN | 3 | 0.0149 | 0.0558 | 0.0715 | 0.7892
HIGHLVEX | 1 | -0.5805 | 0.5151 | 1.2701 | 0.2598
HIGHLVEX | 2 | -0.0318 | 0.1209 | 0.0693 | 0.7923
HIGHLVEX | 4 | 0.0497 | 0.0544 | 0.8361 | 0.3605
HIGHLVEX | 6 | 0.1194 | 0.0644 | 3.4371 | 0.0637
HIGHLVEX | 7 | 0.0932 | 0.1003 | 0.8644 | 0.3525
HIGHLVEX | 8 | 0.2913 | 0.1125 | 6.6984 | 0.0097
DEPNUMCH3 | 1 | -0.1463 | 0.0899 | 2.6462 | 0.1038
DEPNUMCH3 | 2 | -0.2160 | 0.1060 | 4.1498 | 0.0416
DEPNUMCH3 | 3 | -0.4333 | 0.1292 | 11.2423 | 0.0008
pellamt3 | 0 | 0.1885 | 0.1150 | 2.6842 | 0.1013
pellamt3 | 2 | 0.1281 | 0.0636 | 4.0639 | 0.0438
totaid4 | 0 | -0.4885 | 0.0888 | 30.2298 | <.0001
totaid4 | 2 | -0.0754 | 0.0685 | 1.2124 | 0.2709
totaid4 | 3 | -0.0312 | 0.1106 | 0.0794 | 0.7781
totaid4 | 4 | 0.0277 | 0.2192 | 0.0160 | 0.8995
tfedaid4 | 0 | 0.4373 | 0.1145 | 14.5925 | 0.0001
tfedaid4 | 2 | 0.0437 | 0.0709 | 0.3798 | 0.5377
tfedaid4 | 3 | 0.0609 | 0.0995 | 0.3749 | 0.5404
tfedaid4 | 4 | 0.0854 | 0.1692 | 0.2548 | 0.6137
instamt3 | 0 | 0 | . | . | .
instamt3 | 2 | 0.1047 | 0.0667 | 2.4624 | 0.1166
instamt3 | 3 | 0.2330 | 0.4221 | 0.3046 | 0.5810
UGDEG | 1 | 0.00245 | 0.0837 | 0.0009 | 0.9766
UGDEG | 3 | 0.0874 | 0.1156 | 0.5711 | 0.4498
UGDEG | 4 | 0.1681 | 0.1646 | 1.0422 | 0.3073
JOBHOUR4 | 1 | 0.1270 | 0.0650 | 3.8098 | 0.0510
JOBHOUR4 | 3 | 0.0658 | 0.0620 | 1.1250 | 0.2889
JOBHOUR4 | 4 | -0.0354 | 0.0666 | 0.2822 | 0.5953
Bias-likelihood maximum likelihood model variables: institution group B (as of 8/14/17)
Parameter | Level | Estimate | Standard Error | Wald Chi-Square | Pr > ChiSq
Intercept |  | -0.2923 | 0.1941 | 2.2681 | 0.1321
RACE | 2 | -0.1282 | 0.0698 | 3.3675 | 0.0665
RACE | 3 | -0.0534 | 0.0670 | 0.6359 | 0.4252
RACE | 4 | -0.0595 | 0.0862 | 0.4759 | 0.4903
RACE | 5 | -0.1594 | 0.2247 | 0.5031 | 0.4781
RACE | 6 | -0.8792 | 0.3570 | 6.0662 | 0.0138
RACE | 7 | -0.1319 | 0.1033 | 1.6307 | 0.2016
GENDER | 2 | 0.2587 | 0.0427 | 36.7241 | <.0001
AGE3 | 2 | -0.3970 | 0.1058 | 14.0764 | 0.0002
AGE3 | 3 | -0.2247 | 0.1818 | 1.5285 | 0.2163
INCPS_n | 0 | -0.1892 | 0.0677 | 7.8006 | 0.0052
PELLMATCH_n | 0 | -0.0737 | 0.1202 | 0.3754 | 0.5401
CINCOME4 | 2 | -0.00117 | 0.0865 | 0.0002 | 0.9892
CINCOME4 | 3 | 0.1139 | 0.0869 | 1.7183 | 0.1899
CINCOME4 | 4 | 0.1037 | 0.1011 | 1.0530 | 0.3048
PAREDUC | 0 | 0.0398 | 0.1519 | 0.0685 | 0.7935
PAREDUC | 1 | 0.2103 | 0.1237 | 2.8917 | 0.0890
PAREDUC | 3 | 0.1165 | 0.1260 | 0.8548 | 0.3552
PAREDUC | 4 | 0.2249 | 0.0958 | 5.5185 | 0.0188
PAREDUC | 5 | 0.1442 | 0.0788 | 3.3486 | 0.0673
PAREDUC | 6 | 0.1358 | 0.0703 | 3.7292 | 0.0535
PAREDUC | 7 | 0.2627 | 0.0785 | 11.2124 | 0.0008
PAREDUC | 8 | 0.3157 | 0.1108 | 8.1164 | 0.0044
PAREDUC | 9 | 0.1590 | 0.1193 | 1.7776 | 0.1824
ATTNPTRN | 2 | 0.1178 | 0.1185 | 0.9889 | 0.3200
ATTNPTRN | 3 | 0.00955 | 0.0666 | 0.0205 | 0.8861
HIGHLVEX | 1 | 0.7438 | 1.3223 | 0.3164 | 0.5738
HIGHLVEX | 2 | 0.2694 | 0.3162 | 0.7257 | 0.3943
HIGHLVEX | 4 | 0.0867 | 0.1654 | 0.2749 | 0.6000
HIGHLVEX | 6 | 0.2480 | 0.1678 | 2.1852 | 0.1393
HIGHLVEX | 7 | 0.2352 | 0.1754 | 1.7978 | 0.1800
HIGHLVEX | 8 | 0.3102 | 0.1761 | 3.1017 | 0.0782
DEPNUMCH3 | 1 | -0.2236 | 0.1936 | 1.3339 | 0.2481
DEPNUMCH3 | 2 | 0.2110 | 0.2373 | 0.7907 | 0.3739
DEPNUMCH3 | 3 | 0.1550 | 0.2924 | 0.2809 | 0.5961
pellamt3 | 0 | 0.0880 | 0.1285 | 0.4694 | 0.4933
pellamt3 | 2 | 0.1347 | 0.0791 | 2.8986 | 0.0887
totaid4 | 0 | -0.5845 | 0.1054 | 30.7213 | <.0001
totaid4 | 2 | 0.0776 | 0.0902 | 0.7398 | 0.3897
totaid4 | 3 | 0.0426 | 0.0939 | 0.2058 | 0.6501
totaid4 | 4 | 0.0509 | 0.1024 | 0.2469 | 0.6193
tfedaid4 | 0 | 0.5971 | 0.1270 | 22.1063 | <.0001
tfedaid4 | 2 | 0.0557 | 0.0761 | 0.5354 | 0.4644
tfedaid4 | 3 | 0.1138 | 0.0866 | 1.7271 | 0.1888
tfedaid4 | 4 | -0.1194 | 0.0905 | 1.7379 | 0.1874
instamt3 | 0 | 0 | . | . | .
instamt3 | 2 | 0.0706 | 0.0636 | 1.2321 | 0.2670
instamt3 | 3 | 0.1112 | 0.0715 | 2.4198 | 0.1198
UGDEG | 1 | 0.0967 | 0.2140 | 0.2044 | 0.6512
UGDEG | 3 | 0.1642 | 0.1149 | 2.0409 | 0.1531
UGDEG | 4 | 0.2799 | 0.4374 | 0.4095 | 0.5222
JOBHOUR4 | 1 | 0.1592 | 0.0504 | 9.9808 | 0.0016
JOBHOUR4 | 3 | -0.1225 | 0.1024 | 1.4307 | 0.2317
JOBHOUR4 | 4 | -0.0556 | 0.1117 | 0.2477 | 0.6187
SECTOR10 | 4 | 0.0233 | 0.0661 | 0.1243 | 0.7244
SECTOR10 | 5 | -0.3482 | 0.1560 | 4.9840 | 0.0256
SECTOR10 | 6 | 0.0284 | 0.0798 | 0.1272 | 0.7213
SECTOR10 | 7 | 0.0534 | 0.0774 | 0.4773 | 0.4896
Bias-likelihood maximum likelihood model variables: institution group C (as of 8/14/17)
Parameter | Level | Estimate | Standard Error | Wald Chi-Square | Pr > ChiSq
Intercept |  | -1.4168 | 1.0465 | 1.8329 | 0.1758
RACE | 2 | 0.1699 | 0.1761 | 0.9309 | 0.3346
RACE | 3 | 0.1134 | 0.1579 | 0.5163 | 0.4724
RACE | 4 | -0.0575 | 0.5179 | 0.0123 | 0.9116
RACE | 5 | -0.3079 | 0.5712 | 0.2905 | 0.5899
RACE | 6 | -0.3332 | 0.8967 | 0.1381 | 0.7102
RACE | 7 | -0.1807 | 0.3429 | 0.2777 | 0.5982
GENDER | 2 | 0.5377 | 0.1638 | 10.7752 | 0.0010
AGE3 | 2 | -0.1783 | 0.1518 | 1.3805 | 0.2400
AGE3 | 3 | -0.0438 | 0.2139 | 0.0418 | 0.8379
INCPS_n | 0 | -0.1774 | 0.3331 | 0.2836 | 0.5943
PELLMATCH_n | 0 | 0.3354 | 0.5229 | 0.4116 | 0.5212
CINCOME4 | 2 | 0.1836 | 0.1494 | 1.5096 | 0.2192
CINCOME4 | 3 | 0.2980 | 0.1879 | 2.5152 | 0.1128
CINCOME4 | 4 | 0.0427 | 0.3625 | 0.0139 | 0.9062
PAREDUC | 0 | -0.3275 | 0.2842 | 1.3282 | 0.2491
PAREDUC | 1 | 0.1904 | 0.1871 | 1.0349 | 0.3090
PAREDUC | 3 | 0.0623 | 0.2657 | 0.0550 | 0.8146
PAREDUC | 4 | 0.2686 | 0.2806 | 0.9161 | 0.3385
PAREDUC | 5 | 0.5106 | 0.2069 | 6.0922 | 0.0136
PAREDUC | 6 | 0.1982 | 0.2321 | 0.7292 | 0.3931
PAREDUC | 7 | -0.1274 | 0.3253 | 0.1534 | 0.6953
PAREDUC | 8 | 0.7845 | 0.4961 | 2.5006 | 0.1138
PAREDUC | 9 | 0.3036 | 0.8172 | 0.1380 | 0.7102
ATTNPTRN | 2 | -0.0552 | 0.2624 | 0.0443 | 0.8333
ATTNPTRN | 3 | 0.0599 | 0.1974 | 0.0921 | 0.7615
HIGHLVEX | 1 | 13.3507 | 749.7 | 0.0003 | 0.9858
HIGHLVEX | 2 | -0.00588 | 0.1733 | 0.0012 | 0.9729
HIGHLVEX | 4 | -0.2448 | 0.2042 | 1.4368 | 0.2307
HIGHLVEX | 6 | 0.1184 | 0.2551 | 0.2153 | 0.6427
HIGHLVEX | 7 | -0.1703 | 0.4625 | 0.1357 | 0.7126
HIGHLVEX | 8 | 0.3943 | 0.3553 | 1.2314 | 0.2671
DEPNUMCH3 | 1 | 0.1123 | 0.1747 | 0.4134 | 0.5202
DEPNUMCH3 | 2 | -0.1385 | 0.2360 | 0.3441 | 0.5574
DEPNUMCH3 | 3 | -0.3045 | 0.2723 | 1.2512 | 0.2633
pellamt3 | 0 | -0.1567 | 0.4943 | 0.1005 | 0.7513
pellamt3 | 2 | 0.0579 | 0.1622 | 0.1275 | 0.7210
totaid4 | 0 | -0.6851 | 0.3949 | 3.0090 | 0.0828
totaid4 | 2 | -0.00800 | 0.3898 | 0.0004 | 0.9836
totaid4 | 3 | -0.1095 | 0.4006 | 0.0747 | 0.7847
totaid4 | 4 | -0.2901 | 0.4143 | 0.4903 | 0.4838
tfedaid4 | 0 | 1.0953 | 0.4726 | 5.3711 | 0.0205
tfedaid4 | 2 | -0.2574 | 0.3902 | 0.4351 | 0.5095
tfedaid4 | 3 | -0.0921 | 0.4194 | 0.0482 | 0.8262
tfedaid4 | 4 | 0.0689 | 0.4344 | 0.0252 | 0.8740
instamt3 | 0 | 0 | . | . | .
instamt3 | 2 | 0.1528 | 0.5050 | 0.0916 | 0.7622
instamt3 | 3 | -12.7931 | 487.7 | 0.0007 | 0.9791
UGDEG | 1 | 0.5914 | 0.9686 | 0.3728 | 0.5415
UGDEG | 4 | 0.5803 | 1.6066 | 0.1305 | 0.7180
JOBHOUR4 | 1 | 0.4014 | 0.2207 | 3.3068 | 0.0690
JOBHOUR4 | 3 | -0.00260 | 0.1952 | 0.0002 | 0.9894
JOBHOUR4 | 4 | 0.2984 | 0.2225 | 1.7995 | 0.1798
Bias-likelihood maximum likelihood model variables: institution group D (as of 8/14/17)
Parameter | Level | Estimate | Standard Error | Wald Chi-Square | Pr > ChiSq
Intercept |  | -0.7176 | 0.2746 | 6.8285 | 0.0090
RACE | 2 | 0.1153 | 0.1257 | 0.8418 | 0.3589
RACE | 3 | 0.0498 | 0.1003 | 0.2458 | 0.6200
RACE | 4 | 0.6327 | 0.3466 | 3.3319 | 0.0679
RACE | 5 | -0.3123 | 0.2971 | 1.1048 | 0.2932
RACE | 6 | -0.00303 | 0.4160 | 0.0001 | 0.9942
RACE | 7 | 0.2292 | 0.2498 | 0.8418 | 0.3589
GENDER | 2 | 0.4138 | 0.0907 | 20.8269 | <.0001
AGE3 | 2 | -0.00269 | 0.0980 | 0.0008 | 0.9781
AGE3 | 3 | -0.1036 | 0.1413 | 0.5375 | 0.4635
INCPS_n | 0 | 0.0690 | 0.2856 | 0.0583 | 0.8092
PELLMATCH_n | 0 | 0.3584 | 0.2809 | 1.6279 | 0.2020
CINCOME4 | 2 | -0.0661 | 0.0995 | 0.4412 | 0.5066
CINCOME4 | 3 | -0.0383 | 0.1209 | 0.1005 | 0.7513
CINCOME4 | 4 | -0.1606 | 0.2137 | 0.5645 | 0.4525
PAREDUC | 0 | 0.0203 | 0.1636 | 0.0155 | 0.9010
PAREDUC | 1 | 0.0110 | 0.1249 | 0.0077 | 0.9301
PAREDUC | 3 | -0.0206 | 0.1787 | 0.0133 | 0.9082
PAREDUC | 4 | 0.2302 | 0.1692 | 1.8495 | 0.1738
PAREDUC | 5 | 0.1014 | 0.1294 | 0.6135 | 0.4335
PAREDUC | 6 | 0.1468 | 0.1551 | 0.8968 | 0.3436
PAREDUC | 7 | -0.0743 | 0.2300 | 0.1045 | 0.7465
PAREDUC | 8 | -0.3262 | 0.3957 | 0.6797 | 0.4097
PAREDUC | 9 | -0.1468 | 0.5303 | 0.0766 | 0.7820
ATTNPTRN | 2 | -0.0628 | 0.1697 | 0.1370 | 0.7113
ATTNPTRN | 3 | -0.0246 | 0.1250 | 0.0387 | 0.8440
HIGHLVEX | 1 | 26.4993 | 853.0 | 0.0010 | 0.9752
HIGHLVEX | 2 | -0.0676 | 0.1171 | 0.3334 | 0.5637
HIGHLVEX | 4 | -0.0519 | 0.1121 | 0.2144 | 0.6433
HIGHLVEX | 6 | 0.2259 | 0.1441 | 2.4564 | 0.1170
HIGHLVEX | 7 | 0.2277 | 0.2554 | 0.7944 | 0.3728
HIGHLVEX | 8 | 0.5223 | 0.2745 | 3.6216 | 0.0570
DEPNUMCH3 | 1 | 0.1631 | 0.1224 | 1.7766 | 0.1826
DEPNUMCH3 | 2 | 0.0499 | 0.1549 | 0.1038 | 0.7473
DEPNUMCH3 | 3 | -0.1595 | 0.2019 | 0.6238 | 0.4296
pellamt3 | 0 | -0.1771 | 0.2764 | 0.4103 | 0.5218
pellamt3 | 2 | 0.0173 | 0.1019 | 0.0290 | 0.8649
totaid4 | 0 | -0.3177 | 0.2760 | 1.3252 | 0.2497
totaid4 | 2 | -0.0447 | 0.2754 | 0.0264 | 0.8710
totaid4 | 3 | -0.0597 | 0.2752 | 0.0471 | 0.8282
totaid4 | 4 | 0.000097 | 0.2840 | 0.0000 | 0.9997
tfedaid4 | 0 | 0.4897 | 0.3005 | 2.6568 | 0.1031
tfedaid4 | 2 | 0.2897 | 0.2735 | 1.1223 | 0.2894
tfedaid4 | 3 | 0.1887 | 0.2819 | 0.4481 | 0.5033
tfedaid4 | 4 | 0.2830 | 0.2857 | 0.9813 | 0.3219
instamt3 | 0 | 0 | . | . | .
instamt3 | 2 | 0.3487 | 0.2367 | 2.1710 | 0.1406
instamt3 | 3 | 0.1167 | 0.4916 | 0.0564 | 0.8124
UGDEG | 1 | 0.0602 | 0.1007 | 0.3573 | 0.5500
UGDEG | 3 | -0.4435 | 0.4861 | 0.8322 | 0.3616
UGDEG | 4 | -13.0098 | 422.3 | 0.0009 | 0.9754
JOBHOUR4 | 1 | -0.00078 | 0.1515 | 0.0000 | 0.9959
JOBHOUR4 | 3 | -0.0760 | 0.1304 | 0.3396 | 0.5601
JOBHOUR4 | 4 | 0.0672 | 0.1264 | 0.2825 | 0.5951
Bias-likelihood maximum likelihood model variables: institution group E (as of 8/14/17)
Parameter | Level | Estimate | Standard Error | Wald Chi-Square | Pr > ChiSq
Intercept |  | -0.4846 | 0.1547 | 9.8087 | 0.0017
RACE | 2 | 0.0918 | 0.0634 | 2.0950 | 0.1478
RACE | 3 | 0.0511 | 0.0645 | 0.6284 | 0.4279
RACE | 4 | -0.0693 | 0.1647 | 0.1772 | 0.6738
RACE | 5 | 0.2422 | 0.2233 | 1.1768 | 0.2780
RACE | 6 | -0.1318 | 0.2705 | 0.2373 | 0.6261
RACE | 7 | 0.2317 | 0.1233 | 3.5297 | 0.0603
GENDER | 2 | 0.3211 | 0.0522 | 37.8740 | <.0001
AGE3 | 2 | -0.0242 | 0.0637 | 0.1442 | 0.7042
AGE3 | 3 | -0.0532 | 0.0820 | 0.4216 | 0.5161
INCPS_n | 0 | -0.2038 | 0.1455 | 1.9616 | 0.1613
PELLMATCH_n | 0 | -0.0312 | 0.1426 | 0.0479 | 0.8268
CINCOME4 | 2 | 0.1154 | 0.0604 | 3.6550 | 0.0559
CINCOME4 | 3 | 0.1056 | 0.0709 | 2.2154 | 0.1366
CINCOME4 | 4 | 0.1362 | 0.1166 | 1.3645 | 0.2428
PAREDUC | 0 | -0.0322 | 0.1053 | 0.0936 | 0.7596
PAREDUC | 1 | 0.0542 | 0.0809 | 0.4492 | 0.5027
PAREDUC | 3 | 0.1911 | 0.1133 | 2.8452 | 0.0916
PAREDUC | 4 | 0.1494 | 0.0966 | 2.3952 | 0.1217
PAREDUC | 5 | 0.2222 | 0.0747 | 8.8382 | 0.0029
PAREDUC | 6 | -0.0264 | 0.0821 | 0.1032 | 0.7480
PAREDUC | 7 | 0.0131 | 0.1196 | 0.0120 | 0.9126
PAREDUC | 8 | 0.2842 | 0.2211 | 1.6515 | 0.1988
PAREDUC | 9 | -0.0310 | 0.2679 | 0.0134 | 0.9079
ATTNPTRN | 2 | -0.0947 | 0.0753 | 1.5829 | 0.2083
ATTNPTRN | 3 | 0.0109 | 0.0634 | 0.0297 | 0.8631
HIGHLVEX | 1 | -0.2245 | 1.5699 | 0.0205 | 0.8863
HIGHLVEX | 2 | 0.0741 | 0.1528 | 0.2349 | 0.6279
HIGHLVEX | 4 | -0.0346 | 0.0749 | 0.2135 | 0.6440
HIGHLVEX | 6 | -0.00027 | 0.0844 | 0.0000 | 0.9975
HIGHLVEX | 7 | -0.0717 | 0.1354 | 0.2806 | 0.5963
HIGHLVEX | 8 | -0.0951 | 0.1687 | 0.3173 | 0.5732
DEPNUMCH3 | 1 | -0.1560 | 0.0797 | 3.8335 | 0.0502
DEPNUMCH3 | 2 | -0.0769 | 0.0923 | 0.6934 | 0.4050
DEPNUMCH3 | 3 | -0.1094 | 0.1036 | 1.1158 | 0.2908
pellamt3 | 0 | 0.1412 | 0.1417 | 0.9931 | 0.3190
pellamt3 | 2 | 0.1643 | 0.0630 | 6.7940 | 0.0091
totaid4 | 0 | -0.5287 | 0.1517 | 12.1442 | 0.0005
totaid4 | 2 | -0.1542 | 0.1426 | 1.1688 | 0.2796
totaid4 | 3 | -0.0285 | 0.1480 | 0.0371 | 0.8472
totaid4 | 4 | -0.0404 | 0.1539 | 0.0690 | 0.7929
tfedaid4 | 0 | 1.4311 | 1.1730 | 1.4886 | 0.2224
tfedaid4 | 2 | 0.1218 | 0.1451 | 0.7047 | 0.4012
tfedaid4 | 3 | 0.1354 | 0.1463 | 0.8568 | 0.3546
tfedaid4 | 4 | 0.1021 | 0.1526 | 0.4482 | 0.5032
instamt3 | 0 | -0.9268 | 1.1648 | 0.6332 | 0.4262
instamt3 | 2 | 0.1081 | 0.1668 | 0.4201 | 0.5169
instamt3 | 3 | -0.0764 | 0.2445 | 0.0976 | 0.7547
UGDEG | 1 | -0.0354 | 0.1016 | 0.1212 | 0.7277
UGDEG | 3 | 0.0635 | 0.0613 | 1.0745 | 0.2999
UGDEG | 4 | 0.7338 | 0.6588 | 1.2407 | 0.2653
JOBHOUR4 | 1 | 0.1214 | 0.0894 | 1.8426 | 0.1746
JOBHOUR4 | 3 | 0.0449 | 0.0754 | 0.3540 | 0.5519
JOBHOUR4 | 4 | 0.0368 | 0.0648 | 0.3232 | 0.5697
Attachment C. Targeting of current nonrespondents by institution group
The following graphs plot all current nonrespondents within each institution group who are eligible for targeting (excluding control groups, double nonrespondents, and deceased sample members). In each scatterplot, the red dots show cases with top and bottom response propensity scores that are removed from consideration for targeting, because these cases are highly likely to respond without additional effort or highly unlikely to respond even with additional effort. The upper cut-off is based on visual examination of each scatterplot by institution group.
The green dots display the targeted cases. These cases have the highest importance scores (high values of a priori propensity times bias-likelihood, shown in the upper right corner) among the remaining eligible cases. The blue dots show the remaining eligible cases that will not be targeted. The horizontal line on each scatterplot represents the median bias-likelihood for that institution group.
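The selection rule described above can be sketched as follows; this is a minimal illustration of the logic, and the cut-off values, case identifiers, and scores are hypothetical, not taken from the study:

```python
# Sketch of the targeting rule described above: exclude cases with extreme
# response propensity, then target the k eligible cases with the highest
# importance score (a priori propensity x bias-likelihood). The cut-offs
# and case data here are illustrative, not the study's.

def select_targets(cases, low_cut, high_cut, k):
    """cases: list of (case_id, propensity, bias_likelihood) tuples."""
    # red dots: propensity outside [low_cut, high_cut] is removed
    eligible = [c for c in cases if low_cut <= c[1] <= high_cut]
    # green dots: highest importance scores among remaining eligible cases
    ranked = sorted(eligible, key=lambda c: c[1] * c[2], reverse=True)
    return [c[0] for c in ranked[:k]]

cases = [("a", 0.95, 0.4),   # excluded: propensity too high
         ("b", 0.05, 0.9),   # excluded: propensity too low
         ("c", 0.50, 0.8),   # importance 0.40
         ("d", 0.60, 0.5),   # importance 0.30
         ("e", 0.40, 0.6)]   # importance 0.24
print(select_targets(cases, 0.10, 0.90, 2))  # ['c', 'd']
```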
Targeting: Institution Group A
Targeting: Institution Group B
Targeting: Institution Group C
Targeting: Institution Group D
Targeting: Institution Group E
Targeting: Overall - All Institution Groups
1 This section addresses the following BPS terms of clearance: (1) From OMB# 1850-0631 v.8: “OMB approves this collection under the following terms: At the conclusion of each of the two monetary incentive calibration activities, NCES will meet with OMB to discuss the results and to determine the incentive amounts for the remaining portion of the study population. Further, NCES will provide an analytical report back to OMB of the success, challenges, lessons learned and promise of its approach to addressing non-response and bias via the approach proposed here. The incentive levels approved in this collection do not provide precedent for NCES or any other Federal agency. They are approved in this specific case only, primarily to permit the proposed methodological experiments.”; and (2) From OMB# 1850-0631 v.9: “Terms of the previous clearance remain in effect. NCES will provide an analytical report back to OMB of the success, challenges, lessons learned and promise of its approach to addressing non-response and bias via the approach proposed here. The incentive levels approved in this collection do not provide precedent for NCES or any other Federal agency. They are approved in this specific case only, primarily to permit the proposed methodological experiments.”
2 For the continuous variables, except for age, categories were formed based on quartiles.
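The quartile-based categorization in this footnote can be sketched with the standard library; the data values below are illustrative, and the function name is not from the study:

```python
# Sketch: form four categories from a continuous variable using its quartile
# cut points, as described for the non-age continuous variables. The data
# values here are illustrative.
from statistics import quantiles
from bisect import bisect_right

values = [1200, 3400, 560, 7800, 2100, 4500, 980, 6600]
q1, q2, q3 = quantiles(values, n=4)  # the three quartile cut points

def quartile_category(x):
    """Return 1-4 depending on which quartile x falls into."""
    return bisect_right([q1, q2, q3], x) + 1

categories = [quartile_category(v) for v in values]
```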
3 Key variables will use imputed data to account for nonresponse in the base year data.
4 These adjustments will help ensure that currently over-represented groups, high propensity/low importance cases, and very-difficult-to-convert nonrespondents are not included in the target set of nonrespondents. The number of targeted cases will be determined by BPS staff during the phased data collection and will be based on the overall and within sector distributions of importance scores.
5 A prepaid check will be mailed to sample members who request it. Sample members can also open a PayPal account when notified of the incentive. Any sample member in the prepaid group who accepts neither the prepaid PayPal incentive nor a check will receive the full incentive amount upon completion, through the disbursement method of their choice (i.e., check or PayPal).
6 All double nonrespondents will be offered the $10 pre-paid PayPal amount in an attempt to convert them to respondents. For all monetary incentives, including prepayments, sample members have the option of receiving disbursements through PayPal or in the form of a check.
7 After 30 cases were excluded from the BPS:12/17 sample to comply with OFAC sanctions, two cases that were originally assigned to the previous respondent control group were removed.
8 Calculated using SAS Proc Power. https://support.sas.com/documentation/cdl/en/statugpower/61819/PDF/default/statugpower.pdf
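For readers without SAS, an analogous power calculation for detecting a difference between two response rates can be sketched with the standard normal-approximation formula; the response rates and group size below are illustrative, not the study's values:

```python
# Sketch: normal-approximation power for a two-sided test of a difference
# between two proportions (analogous to a SAS PROC POWER two-sample
# frequency analysis). The rates and group size are illustrative.
from math import sqrt
from statistics import NormalDist

def two_prop_power(p1, p2, n_per_group, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)      # two-sided critical value
    se = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_group)
    shift = abs(p1 - p2) / se                    # standardized difference
    # probability of rejecting H0 in either tail
    return (1 - NormalDist().cdf(z - shift)) + NormalDist().cdf(-z - shift)

power = two_prop_power(0.50, 0.57, 800)  # e.g., 50% vs. 57% response rates
```

Power increases with the group size and with the size of the difference, and reduces to alpha when the two rates are equal.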
9 After 30 cases were excluded from the BPS:12/17 sample to comply with OFAC sanctions, one case that was originally assigned to the double nonrespondent treatment group that received the $75 offer for a full interview was removed.