Supporting Statement for Request for approval of two tests for the Survey of Doctorate Recipients (SDR)


SRS-Generic Clearance of Survey Improvement Projects for the Division of Science Resources Statistics


OMB: 3145-0174




Memorandum


Date: September 15, 2020


To: Margo Schwab, Desk Officer

Office of Management and Budget


From: Emilda B. Rivers, Director

National Center for Science and Engineering Statistics

National Science Foundation


Via: Suzanne Plimpton, Reports Clearance Officer

National Science Foundation


Subject: Request for approval of two tests for the Survey of Doctorate Recipients (SDR)


The National Center for Science and Engineering Statistics (NCSES) within the National Science Foundation (NSF) plans to conduct two tests for the Survey of Doctorate Recipients (SDR) under the generic clearance for survey improvement projects (OMB number 3145-0174). This memorandum describes the justification and plans for these tests.

The first test, the Dependent Interviewing Survey (DIS) Pilot Study, is an experiment with both quantitative and qualitative components that will assess the use of dependent interviewing.1 The goal of the DIS Pilot Study is to determine an appropriate questionnaire design approach for the 2021 SDR. The second test, the Locating Feasibility Study, is designed to assess changes in the SDR locating protocol to improve rates of contact with the SDR sample members. The Locating Feasibility Study will inform contacting procedures and help define a realistic timeline for the 2021 SDR.

The samples for the DIS Pilot Study and the Locating Feasibility Study are independent of one another, with no overlap in selected cases between the two studies.

These tests are being conducted in response to the OMB terms of clearance issued for the 2019 SDR, which state that “In the period between the 2019 data collection and the planned 2021 data collection, a follow-up instrument will be designed for the longitudinal component.” The DIS Pilot Study is designed to identify an appropriate follow-up instrument for the longitudinal component, and the Locating Feasibility Study is designed to increase the survey’s ability to locate the selected sample members so they can complete the survey.

This submission is organized with a separate section for each test. However, Table A is a summary-level burden table that shows the full burden estimate across the two tests; the burden estimate for each test is also included in its respective section of the submission memo.

Table A: Total burden estimate for two SDR tests

| Test | Total sample | Max. possible contacts | Avg. minutes per contact | Maximum “contact” burden (hours) | Estimated survey response rate | Avg. minutes per survey | Total “survey” burden (hours) | Total burden estimate (hours) |
| DIS Pilot Study | 3,900 | 6 | 0.25 | 97.5 | 50% | 15 | 487.5 | 585 |
| Locating Feasibility Study | 1,000 | 3 | 0.25 | 12.5 | 75% | 9 | 112.5 | 125 |
| TOTAL | | | | | | | | 710 |


Table B: Table of Contents

| Section | Starting Page |
| SDR Test 1: DIS Pilot Study | 3 |
| SDR Test 2: Locating Feasibility Study | 11 |
| Attachment A: DIS Pilot Study Communication Materials | 17 |
| Attachment B: DIS Pilot Study Independent Interview (INDI) Questionnaire | 25 |
| Attachment C: DIS Pilot Study One-Stage Dependent Interview (DI-1) Questionnaire | 42 |
| Attachment D: DIS Pilot Study Two-Stage Dependent Interview (DI-2) Questionnaire | 67 |
| Attachment E: DIS Pilot Study Response Analysis Survey (RAS) Questionnaire | 91 |
| Attachment F: DIS Pilot Study Analysis Plan | 96 |
| Attachment G: DIS Pilot Study Works Cited | 106 |
| Attachment H: Locating Feasibility Study - Process Flow Overview | 107 |
| Attachment I: Locating Feasibility Study Communication Materials | 108 |
| Attachment J: Locating Feasibility Study - Locating SPV Instrument | 111 |



SDR Test 1: DIS Pilot Study

Background

In response to a) the 2019 SDR terms of clearance issued by OMB and b) feedback received from SDR respondents asking for a more efficient questionnaire, NCSES proposes to conduct a methodological experiment with three instrument design treatments. This experiment will serve three purposes:

  • inform the 2021 SDR data collection instrument design,

  • identify which of the three treatments yields high data quality and is also perceived positively by the SDR population, and

  • fill a gap in the literature regarding the use of dependent interviewing for self-administered questionnaires.

The experiment will include a sample of respondents to the 2019 SDR, as well as a small sample of respondents who participated in the 2015 or 2017 SDR cycles but did not participate in 2019. Including cases that did not respond in the most recent SDR cycle will allow NCSES to gather information from respondents who may have a different reaction to seeing their response data from three or more years ago displayed in the instrument they are asked to complete.

Cases will be randomly assigned to one of the following three experimental treatment groups:

  • an abbreviated version of the usual SDR questionnaire that uses independent interviewing (control group);

  • a dependent interview questionnaire that uses a single-stage task to collect updated information (DI-1), using prefilled information from a prior cycle and covering the same content as the control group; or

  • a dependent interview questionnaire that uses a two-part question to collect updated information (DI-2), also using prefilled information from a prior cycle and covering the same content as the control group.
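
To make the assignment step concrete, the sketch below shows one way cases could be randomly allocated across the three groups. The equal three-way split, the fixed seed, and the case identifiers are illustrative assumptions for this sketch rather than the production specification.

```python
import random

# Illustrative only: assign each sampled case to one of the three
# experimental treatments (INDI control, DI-1, DI-2). The equal
# allocation and the fixed seed are assumptions for this sketch.
TREATMENTS = ["INDI", "DI-1", "DI-2"]

def assign_treatments(case_ids, seed=2020):
    rng = random.Random(seed)
    shuffled = list(case_ids)
    rng.shuffle(shuffled)
    # Deal shuffled cases round-robin so the three groups end up the same
    # size (within one case when the sample is not divisible by three).
    return {case: TREATMENTS[i % 3] for i, case in enumerate(shuffled)}

if __name__ == "__main__":
    assignments = assign_treatments(range(3900))
    counts = {t: sum(1 for v in assignments.values() if v == t) for t in TREATMENTS}
    print(counts)  # 1,300 cases per treatment
```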

Dependent interviewing is a technique increasingly used in panel surveys, in which substantive answers from prior cycles are “fed forward and used to tailor the wording and routing of questions or to include in interview edit checks” (Jackle, 2006). Dependent interviewing can reduce measurement error by reducing repetitiveness and burden, aiding respondent recall, reducing spurious change, and generally providing respondents with a sense of continuity over the course of their participation in the panel (Pascale and Mayer, 2004). Several major household surveys in the U.S. use a dependent interviewing approach, including for the household rostering process (Panel Study of Income Dynamics, National Longitudinal Survey of Youth), for tracking health conditions and health care providers (Medical Expenditure Panel Survey, Medicare Current Beneficiary Survey), and to measure changes in labor force participation (Survey of Income and Program Participation, Panel Study of Income Dynamics).

Much of the research on dependent interviewing has focused on testing “proactive” dependent interviewing (PDI) versus “reactive” dependent interviewing (RDI) (Sala and Lynn, 2004; Lynn et al., 2005; Hoogendoorn, 2004; Lugtig and Lensvelt-Mulders, 2014; Al Baghal, 2017). In RDI, respondents are asked the question, and data from previous surveys are used as edit checks. In PDI, respondents are shown their answers from the prior wave within the survey question and are asked whether this information is still correct. Several studies have used PDI for recall items, with results suggesting that PDI increases data quality by reducing the spurious change frequently found in panel surveys (Hoogendoorn, 2004; Jackle, 2009; Lynn and Sala, 2006).

Nearly all of these studies are based on computer-administered interviews with an interviewer asking the questions. The PDI approach typically uses two steps to ask the question, with the interviewer presenting the prior wave’s data and asking whether it is “still correct” (or some variation of this language). If the information is not still correct, respondents are asked to provide updated information. Only a small handful of studies, all conducted in Europe, have explored the use of DI in a web-based self-administered survey (Hoogendoorn, 2004; Lugtig and Lensvelt-Mulders, 2014; Al Baghal, 2017). Hoogendoorn (2004) appears to be the only study that tested a one-stage approach to DI, in which web respondents are shown their prior wave responses and can edit the data on that screen without having to answer yes/no as to whether the information is still correct. NCSES believes this approach merits additional research, especially as household respondents have become accustomed to filling out electronic forms in which they can simply update prior information rather than being asked whether each data element is still correct. This is the impetus for the experiment embedded in the DIS Pilot Study: to test a one-stage DI approach versus a two-stage approach versus no DI (independent interviewing). The results of this experiment will provide guidance about whether to implement a DI instrument in the 2021 SDR longitudinal subsample and how to structure DI in a self-administered survey, and will add to the small body of literature on the use of DI methods in self-administered forms.

Purpose of research

Because the available literature focuses primarily on interviewer-administered surveys, there is no established best practice for applying dependent interviewing to a self-administered survey. In the 2017 SDR data collection, 84% of the completed surveys came from the web self-administered questionnaire, and in the 2019 data collection, 93% of the surveys were completed via the web questionnaire. With an increasing proportion of the sample completing a self-administered web instrument, NCSES proposes an experiment that tests two possible dependent interviewing approaches for the web instrument, which can be compared to the usual independent interviewing version of the questionnaire as well as to each other.

In addition to the INDI, DI-1, or DI-2 questionnaire items, respondents will also be asked to complete a response analysis survey (RAS) as part of the Pilot Study. The RAS collects data on respondents’ experience with, and reactions to, the questionnaire version they completed. These subjective measures of respondent perceptions are important to consider in deciding the approach for the 2021 SDR, given the need to maintain the cooperation of sample members across many cycles of data collection.

The DIS Pilot Study will address the following three research questions:

  1. Does dependent interviewing reduce the time to complete the SDR as compared to the standard independent interviewing approach currently used in the survey?

  2. Does dependent interviewing affect response quality (e.g., item nonresponse) and the measurement of employment changes relative to independent interviewing?

  3. What do respondents think of the experience of responding to a pre-filled, web-based questionnaire?

The DIS Pilot Study will also evaluate whether the two dependent interviewing approaches differ in their results on these same research questions:

  1. Is there a difference in completion time between DI-1 and DI-2 instrument designs? How does the time to complete for each compare to the time to complete the standard independent interviewing approach used in the control group?

  2. Is there a difference in response quality (e.g., item nonresponse) and measurement of employment changes between DI-1 and DI-2 instrument designs? How does the response quality (e.g., item nonresponse) and the measurement of employment changes for each DI approach compare to the control group?

  3. Are there differences in how DI-1 and DI-2 respondents think of their experience in responding to the pre-filled web-based questionnaire? How do the perceptions of each DI approach compare to the control group?

Participants

A total of 3,900 cases will be selected from the 2019 SDR production sample living within the U.S., and all will be invited to participate in the DIS Pilot Study. Since the DIS Pilot Study content focuses on employment status and occupation, the study will select only sample members who reported working in their last cycle of participation. Of the 3,900 sampled cases, 3,600 will be selected from sample members who completed the 2019 survey, and an additional 300 will be selected from sample members who did not participate in 2019 but last participated in 2015 or 2017. Respondents who provided critical items only (CIO) will not be eligible for selection because the two dependent interview approaches require items beyond the CIO items to prefill the questionnaire. Including only non-CIO respondents will help minimize differences between sample members assigned to the three treatment groups.

Table 1 below shows the selection from the past three SDR cycles.

Table 1: DIS Pilot Study sample allocation by cohort and year of last response

| Cohort | Last responded: 2015 | Last responded: 2017 | Last responded: 2019 |
| 2015 cohort | 100 | 100 | 3,600 (all cohorts combined) |
| 2017 new cohort | | 100 | |
| 2019 new cohort & supplemental | | | |
The 3,600 cases will be selected from the 2019 eligible respondents after stratifying the cases into eight cells defined by 2 gender categories, 2 race/ethnicity categories (minority, other), and 2 cohorts (new 2019 cohort, other). The sample allocation will be finalized after considering the numbers available in each cell, but it is expected to be a modified version of proportional allocation, adjusted for cells in which proportional allocation does not provide a sufficient sample size. Within each cell, cases will be sorted by field of degree prior to drawing the sample by systematic sampling.
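
As a rough illustration of the selection step just described, the sketch below sorts each of the eight cells by field of degree and draws a systematic sample within it. The data-frame layout, column names, and per-cell allocations are assumptions for illustration; the actual selection will follow the finalized allocation.

```python
import random
import pandas as pd

def systematic_sample(frame: pd.DataFrame, allocations: dict, seed: int = 2021) -> pd.DataFrame:
    """Select the allocated number of cases per stratum by sorting on field of
    degree and taking every k-th record from a random start.

    `allocations` maps (gender, race_eth, cohort) tuples to target counts;
    the column names are hypothetical, not the production data layout."""
    rng = random.Random(seed)
    selected = []
    for (gender, race_eth, cohort), n_target in allocations.items():
        cell = frame[
            (frame["gender"] == gender)
            & (frame["race_eth"] == race_eth)
            & (frame["cohort"] == cohort)
        ].sort_values("field_of_degree")      # implicit stratification by field
        k = len(cell) / n_target              # sampling interval
        start = rng.uniform(0, k)             # random start within the first interval
        picks = [int(start + i * k) for i in range(n_target)]
        selected.append(cell.iloc[picks])
    return pd.concat(selected)
```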

Data Collection Methodology

All 3,900 sampled cases will receive up to six communications requesting their participation in the DIS Pilot Study, sent via mail, email or both depending on the contact information available for each case. All sample cases will follow the same contact strategy, regardless of the assigned instrument version. The content of these contacts can be found in Attachment A. Table 2 shows the DIS Pilot Study contact strategy.

Table 2: Sequence of DIS Pilot Study contacts by mode and timeline

| Contact | Mode | Schedule | Who receives? |
| Invitation letter | Mail | Day 1 | All sample members with a valid mailing address |
| Invitation email | Email | Day 7 | All sample members with a valid email address |
| Reminder postcard (folded) | Mail | Day 11 | Sample members who have not yet responded, and for whom there is a valid mailing address (e.g., will exclude Postal Non-Deliverable, PND) |
| Reminder email | Email | Day 16 | Sample members who have not yet responded, and for whom there is a valid email address |
| Non-response follow-up letter | Mail | Day 30 | Sample members who did not respond to any prior contact, and for whom there is a valid mailing address |
| Non-response follow-up email | Email | Day 35 | Sample members who did not respond to any prior contact, and for whom there is a valid email address |


All sampled cases will receive a debit card loaded with $30 in the survey invitation mailing as a token of appreciation. Subsequent communications will note the prior mailing of the debit card. The language for all communications is included in Attachment A.

As previously noted, participants will be randomly assigned to one of the following experimental conditions:

  • Independent-interviewing questionnaire (INDI)

  • One-stage dependent interviewing questionnaire (DI-1)

  • Two-stage dependent interviewing questionnaire (DI-2)

Each of the three instruments follows the introduction screens used in prior SDR cycles, starting with informed consent information and general navigation instructions. Continuing the flow from prior SDR cycles, each instrument will then ask the Sample Person Verification (SPV) items about field of degree, year of degree, and the doctorate-awarding institution. Responses to these items allow the SDR to verify that the respondent is the intended sampled person. The SPV portion of the instrument has historically used a two-stage dependent interview approach, prefilling data from the Doctorate Record File (DRF). The DIS Pilot Study will use the same SPV questions as the production SDR survey for the INDI and DI-2 versions of the instrument. The DI-1 version maintains the same content but presents the questions in the one-stage format to keep internal consistency with the presentation of prefilled information within the DI-1 instrument.

The look and feel of the instrument and methods of navigation will match those used in the 2019 data collection and will be the same across the instruments. All three instruments are designed to render correctly for mobile data collection.

Upon completion of the INDI, DI-1, or DI-2 portion of the survey, the application will automatically route respondents to the Response Analysis Survey (RAS). Respondents will see a transition statement at the start of the RAS so it is clear that this portion of the instrument collects their feedback on the test version of the SDR they just completed.

The instrument specifications for the INDI, DI-1, and DI-2 instruments, including the introductory screens and Sample Person Verification (SPV) series, are included in Attachment B, C and D, respectively. The specifications for the RAS that follows each survey are shown in Attachment E.

NCSES’s contractor for this study will have a toll-free help line, and an email box for respondents to contact with questions or concerns throughout the data collection cycle.

Analysis Plan

The analysis will include both quantitative and qualitative components. For the quantitative analysis, tests for statistical differences between the DI-1, DI-2, and INDI versions will be conducted for several measures addressing the research questions about 1) burden, 2) quality, and 3) respondent perceptions of the version completed.

  1. Does the use of dependent interviewing reduce the time to administer the SDR?

    1. Compare overall timing, section timing, and timing on individual questions on DI-1, DI-2, and INDI conditions (from paradata)

    2. Compare DI-1, DI-2, and INDI conditions on perceived speed at which they moved through the survey (from item RAS1)


  2. Does the use of dependent interviewing affect response quality?

    1. Compare DI-1, DI-2, and INDI conditions on rates of reported change in employment status, employer, or occupation (Literature suggests DI approaches reduce reports of spurious change), overall and by specific characteristics (as sample size permits):

      1. By cohort (defined as categories based on year since degree)

      2. By other demographic characteristics (e.g., gender, citizenship, etc.)

    2. Compare internal consistency of reported changes in employer and occupation between DI-1, DI-2, and INDI conditions (e.g., responses to A9 vs B1/B2 in DI instruments)

    3. Compare number of characters in verbatim responses for principal job title and job duties between DI-1, DI-2, and INDI.

    4. Compare DI-1, DI-2, and INDI conditions on item nonresponse rates

    5. Evaluate negative response patterns suggestive of poor data quality, specific to each DI version:

      1. For DI-2, changing the answer to the first part after being presented with the screen to enter updated information

      2. For DI-1, editing the prefilled data to indicate a change, but also checking the “no change” box


  3. What do respondents think of the experience of dependent interviewing (in comparison to independent interviews)? (These questions are answered by analyzing RAS response data.)

    1. (For all conditions) Overall, was this survey experience similar to most other web survey experiences? (RAS2)

    2. (For all conditions) How enjoyable was the survey experience? (RAS3)

    3. (For all conditions) Perceived sensitivity of the survey (RAS4)

    4. (For all conditions) Confidence in the protection of the data on the survey (RAS5)

    5. (For DI conditions) Have respondents participated in a survey (web or other) where the vendor had historical information from them or another source? (RAS8)

    6. (For DI conditions) Reactions to seeing their historical information displayed on the survey/(For INDI condition) How they might react if they saw their historical information displayed on the survey (RAS9)

    7. (For DI conditions) Do they remember doing the SDR in the past? (RAS6)

    8. (For DI conditions) Did they remember their responses from last time they completed the SDR? (RAS7)

    9. (For DI conditions) Did displaying their historical information help them to provide more accurate data?/(For INDI condition) Would displaying their historical information help them to provide more accurate data? (RAS11)

    10. (For DI conditions) How did respondents decide when to update information versus leave it as-is? (RAS12, RAS13)

    11. (For DI conditions) Did displaying their historical information change the perceived burden of the survey experience/(For INDI condition) Would displaying their historical information change the perceived burden of the survey experience (RAS14)


    12. (For all conditions) Do they think dependent interviewing is a good or bad idea? (RAS16)

Attachment F contains the planned table shells for this analysis.
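
As an informal illustration of the kinds of between-condition comparisons listed above, the sketch below applies a one-way ANOVA to completion times and a chi-square test to item nonresponse counts across the three conditions. The choice of these particular tests and the simulated inputs are assumptions for this sketch; the planned analyses are specified in the Attachment F table shells.

```python
import numpy as np
from scipy import stats

def compare_completion_times(times_indi, times_di1, times_di2):
    """Test whether mean completion time differs across the three conditions."""
    return stats.f_oneway(times_indi, times_di1, times_di2)

def compare_item_nonresponse(counts):
    """counts: 3x2 table of [missing, answered] counts for INDI, DI-1, DI-2."""
    return stats.chi2_contingency(np.asarray(counts))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated per-respondent completion times (minutes), only to show the call pattern.
    t_indi = rng.normal(15, 4, 1300)
    t_di1 = rng.normal(13, 4, 1300)
    t_di2 = rng.normal(14, 4, 1300)
    print(compare_completion_times(t_indi, t_di1, t_di2))
    print(compare_item_nonresponse([[60, 1240], [45, 1255], [50, 1250]]))
```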

The qualitative component of the analysis involves coding the open-ended items designed to capture reasons for the respondents’ perceptions, for each of the following constructs:

  • Explanation of their reactions to the DI approach

  • Reasoning for leaving answers as-is rather than providing more accurate information

  • Additional suggestions for how to improve the way employment is asked about on SDR

Burden Hours

NCSES estimates a total burden of 585 hours for this study. Table 3 shows the components of the estimated total burden.

Table 3: Components of DIS Pilot Study estimated burden

| Test | Total sample | Max. possible contacts | Avg. minutes per contact | Maximum “contact” burden (hours) | Estimated survey response rate | Avg. minutes per survey | Total “survey” burden (hours) | Total burden estimate (hours) |
| DIS Pilot Study | 3,900 | 6 | 0.25 | 97.5 | 50% | 15 | 487.5 | 585 |





Respondents with both a valid mailing and email address may receive up to 6 contacts requesting their participation in the survey. The content of each communication is brief and should take no more than 15 seconds to read. If all 3,900 sample members receive all 6 communications, NCSES estimates 97.5 “contact” burden hours ((3,900 x 6 x 0.25)/60).

On average, the survey will take 15 minutes to complete for each version. Although the 2019 SDR weighted response rate was 69%, the estimated DIS Pilot Study response rate is 50% because this is a test rather than a production cycle and the data collection period is significantly shorter than a production cycle. The total survey burden is estimated at 487.5 hours ((3,900 x 0.5 x 15)/60).

The estimated maximum contact burden and total survey burden results in a total burden estimate for the DIS Pilot Study of 585 hours (97.5 + 487.5).
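
The burden arithmetic above can be restated compactly as follows; the figures are taken directly from this memo, and the function simply reproduces the calculations shown in the text (the second call previews the Locating Feasibility Study figures from Table A).

```python
def burden_hours(sample, contacts, minutes_per_contact, response_rate, minutes_per_survey):
    """Return (contact hours, survey hours, total hours) for a test."""
    contact_hours = sample * contacts * minutes_per_contact / 60
    survey_hours = sample * response_rate * minutes_per_survey / 60
    return contact_hours, survey_hours, contact_hours + survey_hours

print(burden_hours(3900, 6, 0.25, 0.50, 15))  # DIS Pilot Study: (97.5, 487.5, 585.0)
print(burden_hours(1000, 3, 0.25, 0.75, 9))   # Locating Feasibility Study: (12.5, 112.5, 125.0)
```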

Payment to Participants

NCSES will offer a $30 incentive to all respondents. Each sampled person will receive a debit card loaded with $30 in the survey invitation letter. Funds unused by the card expiration date will be returned to NCSES.2 The incentive amount for the DIS Pilot Study will be the same as for the SDR production survey for the following reasons:

  • Full participation in the DIS Pilot Study includes (1) completing a subset of questions from the SDR production questionnaire and (2) completing a debriefing, cognitive-interview-style questionnaire, the Response Analysis Survey (RAS), which is an extra task relative to the SDR production questionnaire.

  • The DIS Pilot Study is an “extra” cycle of participation in that all sampled members will have recently participated in the 2019 SDR and will be asked to participate in the 2021 SDR less than a year after participating in the DIS Pilot Study.

  • The quality of the SDR production questionnaire depends on participant cooperation.

  • NCSES needs to make final decisions regarding the content and design approach for the 2021 data collection at the close of 2020 in order to start the 2021 data collection prior to the fall of 2021. Thus, the DIS Pilot Study will have only a seven-week data collection period. Providing a $30 incentive may increase the rate of return, helping NCSES attain a 50% response rate even with a seven-week data collection cycle.

Informed Consent

The informed consent procedures will follow those used in a regular SDR production cycle. The first screen of the survey instrument is an introduction identifying the survey sponsor; for the DIS Pilot Study, it will also note that this is a special study designed to evaluate methods to improve the 2021 instrument. As with the SDR production survey, the second screen covers informed consent, providing the authorization for this collection, the assurance of confidentiality, the OMB control number, the estimated time to complete the survey, and the voluntary nature of the collection. The details of the informed consent screens are shown in the Assurance of Confidentiality sub-section at the beginning of the SPV section of Attachments B, C, and D.

Survey Schedule

The tentative schedule for the survey is as follows:

| Proposed Date | Activity/Deliverable |
| October 12, 2020 | Start data collection |
| November 30, 2020 | Close data collection |
| December 22, 2020 | Final report and recommendations |
| January 8, 2021 | NCSES decisions for 2021 instrument |

Attachments

Attachment A: DIS Pilot Study Communication Materials

Attachment B: DIS Pilot Study Independent Interviewing (INDI) Questionnaire

Attachment C: DIS Pilot Study One-Stage Dependent Interview (DI-1) Questionnaire

Attachment D: DIS Pilot Study Two-Stage Dependent Interview (DI-2) Questionnaire

Attachment E: DIS Pilot Study Response Analysis Survey (RAS) Questionnaire

Attachment F: DIS Pilot Study Analysis plan

Attachment G: DIS Pilot Study Works Cited





<End SDR Test 1>

SDR Test 2: Locating Feasibility Study

Background

Locating, the process of finding the most current contact information (e.g., postal address, email address, phone number) for sample members, is a critical and challenging task for the SDR. Sample members in the SDR span a wide range of age groups and may have received their PhD as much as twenty years ago or as recently as a couple of years ago. Some sample members are more mobile than others, and for some the only contact information available is several years old.


The 2019 SDR locating approach suffered from various inefficiencies, including the use of only passive locating methods (rather than direct contact methods) prior to the production data collection and variation in the quality of locating databases among vendors for use in batch tracing3. As a result, in the 2019 SDR survey cycle, over 600 sample members were not locatable; approximately 140 failed the Sample Person Verification (SPV)4; and close to 25,000 provided no feedback that could indicate whether the intended sampled person had been reached.


NCSES proposes a Locating Feasibility Study to evaluate procedural changes and locating IT system enhancements in an attempt to address these issues. In addition, the Locating Feasibility Study will evaluate vendors for batch tracing. This study is designed to inform a more refined locating approach for the 2021 SDR, and assist in the determination of appropriate staffing levels to implement the approach.

Purpose of research

There are three goals for the Locating Feasibility Study:

  1. Improve the efficiency and effectiveness of the SDR locating approach and introduce corresponding IT system enhancements.

  2. Evaluate new batch tracing vendors. Batch tracing is a critical part of an efficient locating approach, and the SDR will need to have a vendor in place for the 2021 cycle.

  3. Use a multi-mode approach for direct contact with sample members to update or confirm their contact information. This effort will assess the effectiveness and costs of using an advance letter mailing and email contact that provide a link to a secure web page for participants to update their contact information.


Methodology

There are three stages within the proposed Locating Feasibility Study.

  1. Batch tracing,

  2. Direct contacts with sample members via mail and email,

  3. If mail and email direct contacts are not successful (nonresponse or failed SPV), cases will proceed to blended tracing. Blended tracing starts with manual tracing and will include direct contact by phone or email to confirm newly found information. Additionally, a regression model will be used to determine a locatability score to inform the priority for working cases in blended tracing.

At the end of the first and third stages, a quality score will be calculated for each piece of contact information identified for a case (e.g., a quality score for mailing address, email address, and phone number). The quality score serves as an indicator of confidence that the identified piece of contact information reaches the intended sample person. The initial quality score will be calculated based on batch tracing results and known information from the 2019 cycle. The final quality score will build upon the initial quality score and will additionally account for information acquired through the direct contact methods and blended tracing. The quality score provides an indication of the reliability of each piece of contact information. In addition, the quality score can serve as a ranking when more than one piece of contact information is available in a given mode (mail, email, phone) for a sample member.
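
The memo does not define the formula behind the quality score, so the sketch below assumes scores have already been assigned and only shows how they could rank multiple pieces of contact information within a mode for a case. The record layout and example values are hypothetical.

```python
def best_contact_per_mode(contact_records):
    """Keep the highest-quality piece of contact information per case and mode.

    contact_records: list of dicts like
    {"case_id": 1, "mode": "email", "value": "x@y.edu", "quality": 0.82}"""
    best = {}
    for rec in contact_records:
        key = (rec["case_id"], rec["mode"])
        if key not in best or rec["quality"] > best[key]["quality"]:
            best[key] = rec
    return best

example = [
    {"case_id": 1, "mode": "email", "value": "old@univ.edu", "quality": 0.40},
    {"case_id": 1, "mode": "email", "value": "new@employer.com", "quality": 0.85},
    {"case_id": 1, "mode": "mail", "value": "12 Elm St, Anytown", "quality": 0.70},
]
print(best_contact_per_mode(example))  # keeps the 0.85 email and the 0.70 address
```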



Attachment H shows the flow of cases and decision points across the three stages of the Locating Feasibility Study.



Stage 1 – Batch tracing

All 1,000 cases in the Locating Feasibility Study will be sent to three batch locating vendors5. Two of the three vendors are able to append mailing address, phone number, and potentially email address from the U.S. and Canada, as well as verify the format of contact information in other parts of the world. (However, due to the General Data Protection Regulation (GDPR), none of the three vendors can provide updated contact information for sample members residing outside of the U.S. or Canada.) The third vendor was used in the 2019 SDR survey cycle to identify email addresses for sample members. Other than email addresses provided directly by participants, this vendor was the best source for email addresses. For the batch tracing evaluation, this third vendor will serve as a comparison point for the other two new vendors with regard to email addresses.


As noted above, results from batch tracing will assign an initial quality score per piece of contact information (e.g., a separate score for mailing address, email, and phone number). This information, along with results of an analysis of the locating sources used in the 2019 SDR, will be used as input to the next stage in the study.


Stage 2 – Direct contact with sample members

In the second stage of the Locating Feasibility Study, all cases with a mailing or email address will be sent a letter and an email (see Attachment I) inviting the sample member to update their contact information via a link to a secure web page. A reminder email will be sent a week later. Once at the secure web page, the sample member will complete an instrument containing the locating version of the SPV and update or verify their contact information. The purpose of the locating SPV is to ensure that only the intended person sees the contact information on record from prior cycles. See Attachment J for a draft version of this instrument. The locating SPV instrument verifies the person’s name, institution, field of study, and year of degree. The locating instrument presents each of these items in a multiple-choice format in order to evaluate results instantaneously without divulging Personally Identifiable Information (PII) if the respondent is not the sample member. If the respondent passes the SPV, he or she will be presented with the contact information on file and asked to update or confirm it.
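
As a hypothetical illustration of the instantaneous SPV evaluation described above, the sketch below compares a respondent’s multiple-choice selections with the record on file and reveals contact information only on a full match. The item names and the all-items-must-match rule are assumptions; the actual instrument and its evaluation are shown in Attachment J.

```python
# Hypothetical SPV check: item names and the pass rule are assumptions.
SPV_ITEMS = ("name", "institution", "field_of_study", "degree_year")

def passes_spv(selected_answers: dict, record_on_file: dict) -> bool:
    """Return True only when every multiple-choice selection matches the record,
    so contact information is never shown to anyone other than the sample member."""
    return all(selected_answers.get(item) == record_on_file.get(item)
               for item in SPV_ITEMS)

record = {"name": "A. Doe", "institution": "State University",
          "field_of_study": "Chemistry", "degree_year": 2008}
print(passes_spv(dict(record), record))                     # True -> show contact info
print(passes_spv({**record, "degree_year": 2010}, record))  # False -> do not show
```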


Stage 3 – Blended tracing

If a case remains a nonrespondent two weeks after the last contact, or the available information is not valid, the case will proceed to the next stage, blended tracing. Blended tracing includes two steps. First, a case is researched using manual tracing, in which sample members are researched individually in public and subscription databases. Second, data collectors use the newly found phone or email information to attempt direct contact with the sample member, conduct the locating SPV, and confirm that the new contact information actually reaches the intended sample person.


With a sample size of 1,000, staff will have the ability to attempt contact with every case in this third stage that did not respond to the stage two direct contact methods, or for which the direct contact methods indicated (per the SPV responses) that the correct person had not been found. However, statistical staff plan to calculate a locatability score for each case at this stage to set the priority order for the tracing staff working the cases. The locatability score, calculated at the sample member (case) level, will be based on the level of effort required in locating and contacting as well as response status from the 2019 SDR survey cycle. This differs from the quality score calculated in the first stage, which applies to each piece of contact information for a person rather than to the sample member. To calculate the locatability score, the 2019 SDR case locating outcomes will be analyzed to identify the main covariates associated with successful locating. Because the true outcome (located vs. not located, by type of locating information) cannot be determined for a large fraction of cases, different measures of success will be compared, ranging from multiple sources confirming an address to successful completion of the survey. Once suitable measures have been obtained, logistic regression modeling will be used to determine the best set of predictive case characteristics among those available. These scores will be further evaluated as part of the Locating Feasibility Study, incorporating the batch tracing vendor outcomes, with the goal of developing predictive locatability scores for use in the 2021 SDR.
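
The sketch below illustrates the locatability-score step in broad strokes: fit a logistic regression to historical locating outcomes and use the predicted probabilities to order cases. The predictor variables, the simulated 2019 outcomes, and the use of scikit-learn are assumptions for illustration; the memo specifies only that logistic regression on 2019 locating outcomes will be used to identify predictive case characteristics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2021)
n = 5000

# Hypothetical predictors for 2019 cases: years since degree, number of
# locating attempts, and whether the case responded in 2019.
X = np.column_stack([
    rng.integers(1, 30, n),
    rng.integers(0, 10, n),
    rng.integers(0, 2, n),
])
# Simulated "successfully located" outcome, loosely tied to the predictors.
logit = -0.05 * X[:, 0] - 0.2 * X[:, 1] + 1.5 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
locatability = model.predict_proba(X)[:, 1]  # higher = expected to be easier to locate
priority_order = np.argsort(locatability)    # cases could be ordered by score to set working priority
print(locatability[:5], priority_order[:5])
```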

Analysis Plan

Quality and accuracy of the batch vendor outputs will be evaluated in two steps:


Step 1 – At the end of stage 1, reflecting the initial quality score used as input to stages 2 and 3:

  • Comparing the vendor output to the information on the sample file (case management system)

  • Making direct comparison between vendor outputs

Step 2 – At the close of stage 3 of the Locating Feasibility Study:

  • Comparing vendor outputs to the results of the blended tracing efforts

  • Comparing vendor outputs to contact information provided directly by the Locating Feasibility Study respondents in response to the direct contact methods in stage two.


The overall locating process will be evaluated by assessing the following:

  • the number of cases with updated or confirmed contact information

  • the method or stage which yielded the updated or confirmed information

  • a quality assessment, using the quality score for each piece of information, which reflects whether the information was updated or confirmed directly by the sampled person and the confidence that it reaches the intended sample person.


Each of the above metrics will be assessed across the 1,000-case sample, as well as by year of degree, last cycle of participation, and location.


To accommodate the capture of SPV information for each piece of contact information (e.g., mailing address, email, and phone), the locating IT systems will need modifications from what was used in 2019. As part of the Locating Feasibility Study, the contractor will conduct debriefings with the staff who perform locating to gather usability feedback regarding the IT system updates.

Participants

A sample of 1,000 cases will be selected from the eligible 2021 SDR sample, covering a variety of sample members: those who responded in 2019 but did not provide contact information in the survey, those who responded and did provide contact information, sample members who did not complete the 2019 survey, and known international cases. For cases that did not participate in 2019, the sample will include members who received their PhD across a range of years. The sample for the Locating Feasibility Study is independent of the sample for the DIS Pilot Study, without any overlap in cases between the two samples.

Burden Hours

NCSES estimates a total burden of 125 hours for the Locating Feasibility Study. Table 1 below shows the components of the estimated total burden.

Table 1: Components of estimated burden for the Locating Feasibility Study

| Test | Total sample | Max. possible contacts | Avg. minutes per contact | Maximum “contact” burden (hours) | Estimated survey response rate | Avg. minutes per survey | Total “survey” burden (hours) | Total burden estimate (hours) |
| Locating Feasibility Study | 1,000 | 3 | 0.25 | 12.5 | 75% | 9 | 112.5 | 125 |

NCSES estimates that all 1,000 sample members will read each of the 3 contacts sent in stage two inviting them to confirm or update their contact information. The content of each communication is brief and should take no more than 15 seconds to read. If all 1,000 sample members receive all 3 communications, NCSES estimates 12.5 “contact” burden hours ((1,000 x 3 x 0.25)/60).

On average, the locating SPV survey will take 9 minutes to complete. Online participants will likely take less than 9 minutes, but phone participants will take more because locators need to confirm spelling. NCSES expects 75% of the Locating Feasibility Study cases to complete the locating SPV survey in either stage two or stage three. Therefore, NCSES estimates a survey burden of 112.5 hours ((1,000 x 0.75 x 9)/60).

The estimated maximum contact burden and total survey burden results in a total burden estimate for the Locating Feasibility Study of 125 hours (12.5 + 112.5).

Payment to Participants

No incentive will be offered to sampled cases for the Locating Feasibility Study. While this is a feasibility study, the contact information gained or updated as a result will be used in the 2021 SDR, much like the output of the historical SDR prefield locating work, which has not included incentives.

Informed Consent

The invitation to participate includes the OMB control number and the assurances of confidentiality. Since only publicly available information will be collected or updated in the Locating Feasibility Study, and the risk to participants is quite low, NCSES requests that a person’s participation serve as implied consent.

Survey Schedule

The tentative schedule for the survey is as follows:

| Proposed Date | Activity/Deliverable |
| September 7, 2020 | Conduct batch locating |
| October 26, 2020 | Start mail and email direct contacts with sample members and start blended locating |
| January 10, 2021 | Final report with recommendations for 2021 SDR |

Attachments

Attachment H: Locating Feasibility Study Process Flow Overview

Attachment I: Locating Feasibility Study Communication Materials

Attachment J: Locating Feasibility Study Locating SPV Instrument


Contact Person

John Finamore

Program Director

Human Resources Statistics Program

National Center for Science and Engineering Statistics

National Science Foundation

jfinamor@nsf.gov

703-292-2258


1 Dependent interviewing is a questionnaire design approach in which information about each respondent known prior to the interview is used within the questionnaire wording. This method of personalizing questionnaires has the potential to reduce respondent burden and measurement error. In the DIS Pilot Study, prior information collected from past SDR cycles will be incorporated within the question wording to remind respondents of previous responses.

2 Historically, only 50%-60% of incentivized respondents in the SDR production data collection effort have used the incentive. All funds associated with unused incentives are returned to NCSES.

3 The SDR has historically contracted with batch tracing vendors to get updated mailing addresses and phone numbers, and in more limited instances, an email address for sample members. Batch tracing vendors process sample records that include address or phone as known from the prior cycle, and append updated information based on the vendor’s database records which often reflect information from credit bureaus and public records such as real estate, utilities, obituaries, etc.

4 The Sample Person Verification checks year, institution and field of degree to verify that the person is the intended sampled member. When a sample member fails SPV it means that the wrong person was found and interviewed.

5 In addition to the 1,000 cases selected for the Locating Feasibility Study, the 3,900 cases from the DIS Pilot Study as well as an additional 1,100 cases will be put through batch tracing. The costs are the same for using the batch tracing services for up to 6,000 cases. Sending the full 6,000 cases will provide more variation in sample member characteristics allowing for a more robust evaluation of the different vendors. The samples for the Locating Feasibility Study, the DIS Pilot study, and the additional 1,100 cases are independent and do not overlap.


