
Clearance for A-11 Section 280 Improving Customer Experience Information Collection

Survey Sample Plan / Methodology

OMB: 2900-0876




Service Level Measurements Education Service Survey

Sampling Methodology Report



Prepared by

Veteran Experience Office

Version 1

June 2020




Executive Summary

The Education Service Survey is designed to measure the customer experience of beneficiaries at three key touch points, or moments that matter: 1) after applying for benefits, 2) after enrollment, and 3) after processing of payments. These three moments that matter constitute the three Survey Types.

Veteran experience data will be collected using an online transactional survey disseminated via an invitation email sent to selected beneficiaries. Data collection will occur every two weeks for most Benefit Types and Survey Types. For the Montgomery GI Bill (Chapter 30 and Chapter 1606), enrollment data will only be available for extract once a month; enrollment beneficiaries for these Benefit Types will therefore only be surveyed once a month.

The survey questionnaire is brief and contains general Likert-scale (a scale of 1-5 from Strongly Disagree to Strongly Agree) questions to assess customer satisfaction as well as questions assessing the knowledge, speed, and manner of the interaction. These questions have been mapped to the OMB A-11 Customer Experience drivers. After the survey has been distributed, recipients have two weeks to complete the survey. Invitees will receive a reminder email after one week.

The sample will be distributed across three Education Benefit Programs: the Post-9/11 GI Bill (Chapter 33), the Montgomery GI Bill – Active Duty (Chapter 30), and the Montgomery GI Bill – Selected Reserve (Chapter 1606). Since the Post-9/11 GI Bill covers most of the beneficiaries, the sample size for this benefit type is determined so that monthly survey estimates achieve a 3% margin of error at a 95% confidence level for each of the three Survey Types. The remaining benefit type (the Montgomery GI Bill) will be treated as a census in which all available sample is used; this is required to secure sufficient sample for stable comparisons over time.

This report describes the methodology used to conduct the Education Service Survey. Information about quality assurance protocols, as well as limitations of the survey methodology, is also included in this report.

Part I – Introduction

A. Background

The Enterprise Measurement and Design team (EMD) is part of the Veterans Experience Office (VEO). The EMD team is tasked with conducting transactional surveys of the Veteran and Beneficiary population to measure their satisfaction with the numerous benefit services of the Department of Veterans Affairs (VA). The team's mission is to empower Veterans by rapidly and discreetly collecting feedback on their interactions with such VA entities as NCA, VHA, and VBA. VEO surveys generally entail probability samples that contact only the minimal number of beneficiaries necessary to obtain reliable estimates. This information is subsequently used by internal stakeholders to monitor, evaluate, and improve beneficiary processes. Beneficiaries are always able to decline participation and may opt out of future invitations. A quarantine protocol is maintained across all VEO surveys to limit the number of times a beneficiary may be contacted and to prevent survey fatigue.

The VA provides educational benefits to Veterans, servicemembers, and qualified family members, to ensure they have a full and fair opportunity to advance their skills. VBA oversees several programs that support active and former Armed Forces personnel with furthering their education and/or receiving approved training. Statistics show that 909,320 beneficiaries took advantage of these programs in fiscal year 2019.

In order to continue to provide quality service to beneficiaries, VEO has been commissioned to measure the satisfaction of VA education beneficiaries. To complete this goal, VEO proposed to conduct a brief transactional survey of selected beneficiaries who used the education benefits. Any beneficiary who interacted with the three covered programs (Chapter 1606, Chapter 33, and Chapter 30) within the past two to four weeksⁱ is eligible for participation in the Education Service Survey. The surveys consist of five to seven questions that were developed from a human-centered design, focusing on beneficiaries' experience with their recent encounter in terms of the seven OMB A-11 Customer Experience Drivers (Trust/Confidence, Ease/Simplicity, Efficiency/Speed, Quality, Employee Helpfulness, Equity/Transparency, and Satisfaction). These Likert-scale (a scale of 1-5) questions were designed through extensive beneficiary input and recommendations from subject matter experts in the VA.

Beneficiaries are selected to participate in the survey via an invitation email. A link is enclosed so the survey may be completed using an online interface, with customized participant information. The data is collected on a biweekly or monthly basis. The purpose of this document is to outline the planned sample design and provide a description of the data collection and sample sizes necessary for proper reporting.


B. Basic Definitions

Coverage

The percentage of the population of interest that is included in the sampling frame.

Measurement Error

The difference between the response coded and the true value of the characteristic being studied for a respondent.

Non-Response

Failure of some respondents in the sample to provide responses in the survey.

Transaction

A transaction refers to the specific time a beneficiary interacts with the VA that impacts the beneficiary’s journey and their perception of VA’s effectiveness in caring for beneficiaries.

Response Rate

The ratio of participating persons to the number of contacted persons. This is one of the basic indicators of survey quality.

Sample

In statistics, a data sample is a set of data collected and/or selected from a statistical population by a defined procedure.

Sampling Error

Error due to taking a particular sample instead of measuring every unit in the population.

Sampling Frame

A list of units in the population from which a sample may be selected.

Reliability

The consistency or dependability of a measure, commonly assessed through the standard error of an estimate.



C. Application to Veterans Affairs

Customer experience and satisfaction are usually measured at three levels: 1) to provide enterprises the ability to track, monitor, and incentivize service quality; 2) to provide service level monitoring and insights; and 3) to give direct point-of-service feedback. This measurement may bring insights and value to all stakeholders at VA. Front-line VA leaders can resolve individual feedback from beneficiaries and take steps to improve the customer experience; meanwhile, VA executives can receive real-time updates on systematic trends that allow them to make changes.

Part II – Methodology

A. Target Population, Frame, and Stratification

The target population of the Education Service Survey is defined as any beneficiary with one of the following interactions with VBA education benefits in the past two to four weeks (see Table 1): applying for an education benefits program, having their school enrollment certified, or receiving a tuition/stipend payment.

The sample frame is prepared by extracting population information directly from the VBA Enterprise Data Warehouse (EDW). These extracts are also used to obtain universe figures for the sample weighting process. All qualifying Montgomery GI Bill beneficiaries will be contacted (a census). For the Post-9/11 GI Bill, a random sample of beneficiaries will be selected. The education beneficiary is the primary sampling unit and is randomly selected from the population according to a stratified design with a fixed allocation. The strata consist of Education Benefit and Interaction Type, as listed in Table 1. These strata are defined explicitly and contain allocation targets, which may fluctuate with monthly changes in the population due to the seasonality of university-related coursework. For instance, more education interactions may be observed prior to the beginning of each semester than at other times of the year. To ensure demographic representation, the sampling within each stratum is also proportional with regard to Age Group and Gender, and additional balancing variables (e.g., geographic region) can be added if deemed appropriate.

Table 1. Target Population

Survey Type (Measured Across All Benefits):

  • Application

  • Enrollment

  • Payment Receipt

Education Benefit Type (Chapter) (Measured Across All Interactions):

  • Post-9/11 GI Bill (Ch. 33)

  • Montgomery GI Bill (Ch. 30 and Ch. 1606)



  1. Sample Size Determination

To achieve a given level of reliability, the required sample size is calculated as follows (Lohr, 1999):

For a population that is large, the equation below is used to yield a representative sample size for proportions:

n₀ = z² p q / e²

where

  • z = the critical Z score, which is 1.96 under the normal distribution when using a 95% confidence level (α = 0.05).

  • p = the estimated proportion of an attribute that is present in the population, with q=1-p.

  • Note that pq attains its maximum when p = 0.5, or 50%; this value is typically used in surveys where multiple measures are of interest. When examining measures closer to 100% or 0%, less sample is needed to achieve the same margin of error.

  • e = the desired level of precision or margin of error. For example, for the Post-9/11 GI Bill survey the targeted margin of error is e = 0.03, or +/-3%.

For a population that is relatively small, the finite population correction is used to yield a representative sample size for proportions:

n = n₀ / (1 + n₀ / N)

where

  • n₀ = the representative sample size for proportions when the population is large.

  • N = Population size.



The margin of error surrounding the baseline proportion is calculated as:

e = z √[ (p q / n) × (N − n) / (N − 1) ]

where

  • z = 1.96, the critical Z score value under the normal distribution when using a 95% confidence level (α = 0.05).

  • N = Population size.

  • n = Representative sample.

  • p = the estimated proportion of an attribute that is present in the population, with q=1-p.
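For illustration, the sketch below applies these formulas to the Post-9/11 GI Bill monthly frame using figures from this plan (N = 344,476 transactions with email after subtractions, a 3% target margin of error, and p = 0.5); the function names are illustrative only and not part of the production process.

    import math

    def sample_size(e, N, p=0.5, z=1.96):
        """Representative sample size for a proportion, with finite population correction."""
        q = 1 - p
        n0 = (z ** 2) * p * q / e ** 2        # large-population sample size
        return math.ceil(n0 / (1 + n0 / N))   # finite population correction

    def margin_of_error(n, N, p=0.5, z=1.96):
        """Margin of error around a proportion estimate, with finite population correction."""
        q = 1 - p
        return z * math.sqrt((p * q / n) * (N - n) / (N - 1))

    N = 344_476                                # Post-9/11 GI Bill monthly frame (Table 2A)
    n = sample_size(e=0.03, N=N)               # about 1,064 completes per Survey Type
    print(n, round(margin_of_error(n, N), 3))  # 1064 0.03

The result (roughly 1,064 completes per Survey Type) is consistent with the approximately 1,100 responses targeted per Survey Type in Table 2.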



Table 2 depicts the population and sample figures for the education benefit population. The sample sizes are calculated to ensure that monthly reports have a margin of error of no more than 3% at a 95% confidence level for the Post-9/11 GI Bill for each Survey Type. This represents a reliability standard widely used in the survey industry (Lohr, 1999). The education survey aims to collect data on approximately 4,046 respondents every month, on average. Due to the seasonality of university-related coursework, monthly sample sizes will fluctuate over the year.

For this study, we assume a fairly low response rate of 5%, based on past experience surveying this relatively young population for the Education Contact Center (ECC) Survey. Because of non-response, VEO will initiate contact with about 81,000 beneficiaries per month to attain these targets.
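As a check on these figures, the recruitment sizes in Table 2 follow directly from the assumed 5% response rate: the Post-9/11 GI Bill target of 3,300 monthly respondents implies 3,300 / 0.05 = 66,000 monthly invitations, and the 277 expected Montgomery GI Bill responses correspond to contacting all of the roughly 5,530 eligible beneficiaries each month (5,530 × 0.05 ≈ 277).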

Table 2A. Target Population Figures

Education Benefit | Chapter | CY 2019 Annual Transactions** | CY 2019 Monthly Transactions | Est. Monthly Transactions with Email on Record | Est. Monthly Transactions w/ Email after Subtractions | Proposed Monthly Respondent Size* | Monthly Recruitment Sizes
Post-9/11 GI Bill | 33 | 5,661,100 | 471,758 | 430,595 | 344,476 | 3,300 | 66,000
Montgomery GI Bill | 30/1606 | 95,706 | 7,975 | 6,912 | 5,530 | 277 | 5,530

*Average month, assuming a 5% response rate.

**Transaction data is extracted from the Enterprise Data Warehouse by a VBA data analyst. These numbers provide total transactions across applications, enrollments, and payments. Generally, the number of yearly transactions is about 4-5 times greater than the number of beneficiaries.


Table 2B. Monthly Sample Targets

Education Benefit | Chapter | Total Monthly Recruitment Sizes | Application | Enrollment | Payment
Post-9/11 GI Bill | 33 | 66,000 | 22,000 | 22,000 | 22,000
Montgomery GI Bill | 30/1606 | 5,530 | 3,054 | 1,595 | 881


Table 2C. Biweekly Sample Targets

Education Benefit | Chapter | Total Biweekly Recruitment Sizes | Application | Enrollment | Payment
Post-9/11 GI Bill | 33 | 30,450 | 10,150 | 10,150 | 10,150
Montgomery GI Bill | 30/1606 | 2,545 | 1,406 | 734 | 406

Note: The monthly email population sizes for the Montgomery GI Bill programs will vary from month-to-month because all beneficiaries who meet the eligibility criteria are contacted to participate in the survey.


Table 2D. Expected Monthly Responses

Education Benefit | Chapter | Total Monthly Responses | Application | Enrollment | Payment
Post-9/11 GI Bill | 33 | 3,300 | 1,100 | 1,100 | 1,100
Montgomery GI Bill | 30/1606 | 277 | 153 | 80 | 44


The sample will be drawn using a systematic sampling methodology. This statistically valid approach allows the team to balance the sample across several variables such as age, gender, and geographic location, if desired. These balancing variables are often referred to as implicit strata. In the coming wave, the VEO team will begin to leverage this capability because, though the effect on the margin of error is difficult to measure, this methodology has been shown to improve the accuracy of estimates, stabilize weights, and reduce the variability that makes trends difficult to interpret.
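A minimal sketch of this systematic selection with implicit stratification is shown below; the record fields (age_group, gender) and the function name are illustrative, and the actual extract variables are listed in Appendix 1.

    import random

    def systematic_sample(frame, n, sort_keys):
        """Draw a systematic sample of n records after sorting by the balancing variables (implicit strata)."""
        ordered = sorted(frame, key=lambda rec: tuple(rec[k] for k in sort_keys))
        k = len(ordered) / n                         # sampling interval
        start = random.uniform(0, k)                 # random start within the first interval
        return [ordered[int(start + i * k)] for i in range(n)]

    # Illustrative use: balance a stratum's biweekly sample on age group and gender
    # sample = systematic_sample(stratum_records, n=10_150, sort_keys=["age_group", "gender"])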

Each email address encountered is validated in several ways (a brief sketch follows this list):

  • Validation that the email address has a valid structure

  • Comparison with a database of bad domains

  • Correction of common domain spellings

  • Comparison with a database of bad emails including

    • Opt outs

    • Email held by multiple beneficiaries

  • Comparison to a database of valid TLDs (e.g., “.com”, “.edu”)
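For illustration, a minimal sketch of these checks in order is shown below; the reference lists (bad domains, opted-out or shared addresses, valid TLDs) and the domain-correction table are placeholders for the actual validation databases.

    import re

    EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")             # structural check only
    DOMAIN_FIXES = {"gmial.com": "gmail.com", "yaho.com": "yahoo.com"}    # common misspellings (illustrative)

    def validate_email(addr, bad_domains, bad_emails, valid_tlds):
        """Return a corrected address if it passes all checks, otherwise None."""
        addr = addr.strip().lower()
        if not EMAIL_PATTERN.match(addr):
            return None                                     # invalid structure
        local, domain = addr.rsplit("@", 1)
        domain = DOMAIN_FIXES.get(domain, domain)           # correct common domain misspellings
        addr = f"{local}@{domain}"
        if domain in bad_domains or addr in bad_emails:     # bad domains; opt-outs and shared emails
            return None
        if "." + domain.rsplit(".", 1)[-1] not in valid_tlds:   # TLD must appear on the valid list
            return None
        return addr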

Beneficiaries' email addresses come from one of three sources, prioritized in the following order. Validated email addresses provided in the sample file are used first. If no valid email address is available, the second source is VBA's Enterprise Data Warehouse (EDW), followed by VHA's Corporate Data Warehouse (CDW); non-Veteran emails must come from the original sample files. The resulting oversampling of Veterans will be adjusted through weights.
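A simple way to express this priority order is a coalescing lookup, as in the sketch below; the source objects are placeholders for the sample file, EDW, and CDW extracts.

    def best_email(beneficiary_id, is_veteran, sample_file, edw, cdw):
        """Return the first available email following the stated priority: sample file, then EDW, then CDW.
        Non-Veteran emails must come from the original sample file."""
        sources = [sample_file] + ([edw, cdw] if is_veteran else [])
        for source in sources:
            addr = source.get(beneficiary_id)
            if addr:
                return addr
        return None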


  2. Data Collection Methods

Recruitment occurs every two weeks for all Benefit Types (chapters) and Survey Types except for Chapter 30 and Chapter 1606 enrollment beneficiaries; because data for this Survey Type and these Benefit Types are only available on a monthly basis, recruitment for them will occur monthly. VEO uses data extracted directly from the VBA data warehouse (EDW). Beneficiaries will have two weeks to complete the survey. A reminder email is sent to non-respondents after one week, noting that the survey is available for another week. Once a beneficiary completes the survey, their response data is immediately available within Veterans Signals (VSignals).

Table 3. Survey Mode

Benefit and Survey Type | Mode of Data Collection | Recruitment Method | Time After Transaction | Recruitment Period
Montgomery GI Bill (Chapter 1606 & Chapter 30) Enrollment Survey | Online Survey | Email Recruitment | Up to one month after Education Service Encounter | 14 Days (Reminder after 7 Days)
All Other Benefits and Survey Types | Online Survey | Email Recruitment | Up to two weeks after Education Service Encounter | 14 Days (Reminder after 7 Days)

  3. Reporting

Researchers will be able to use the Veterans Signals (VSignals) system for interactive reporting and data visualization. VA employees with a PIV card may access the system at https://va.voice.medallia.com/sso/va/. Access to the Education Dashboards within VSignals will be approved by the Education Service stakeholders. The scores may be viewed by Age Group, Gender, and self-reported Race/Ethnicity in various charts for different perspectives. They are also depicted within time series plots to investigate trends. Finally, filter options are available to assess scores over varying time periods and within the context of other collected variables.

Recruitment occurs once every two weeks for most Survey Types and Benefit Types, but the sample is optimized for monthly analysis. Therefore, results should be analyzed where at least four weeks (two waves) of data are available. Short-interval estimates are less reliable for small domains (e.g., VAMC-level) and should only be considered for aggregated populations. Monthly estimates will have larger sample sizes and therefore higher reliability. Estimates over longer periods are the most precise but take the greatest amount of time to obtain and are less dynamic, in that trends and short-term fluctuations in service delivery may be missed. Users examining subpopulations should be particularly diligent in ensuring that insights stem from analyses with sufficient sample in the subpopulations being examined or compared.

  4. Quality Control

To prevent errors and inconsistencies in the data and the analysis, quality control procedures will be instituted at several steps of the survey process. Records will undergo cleaning during the population file creation. The quality control steps are as follows.

  1. Records will be reviewed for missing sampling and weighting variable data. When records with missing data are discovered, they will be either excluded from the population file or put into separate strata upon discussion with subject matter experts.

  2. Any duplicate records will be removed from the population file to both maintain the probabilities of selection and prevent the double sampling of the same beneficiary.

  3. Invalid emails will be removed.

The survey sample loading and administration processes will have quality control measures built into them.

  1. The survey load process will be rigorously tested prior to the induction of the survey to ensure that sampled beneficiaries are not inadvertently dropped or sent multiple emails.

  2. The email delivery process is monitored to ensure that bounce-back records will not hold up the email delivery process.

The weighting and data management quality control checks are as follows:

  1. The sum of the weighted respondents will be compared to the overall population count to confirm that the records are being properly weighted. When the sum does not match the population count, weighting classes will be collapsed to correct this issue.

  2. The unequal weighting effect will be used to identify potential issues in the weighting process. Large unequal weighting effects indicate a problem with the weighting classes, such as a record receiving a large weight to compensate for nonresponse or coverage bias.

  5. Sample Weighting, Coverage Bias, and Non-Response Bias

Weighting is commonly applied in surveys to adjust for nonresponse bias and/or coverage bias. Nonresponse is defined as the failure of selected persons in the sample to provide responses. This is observed in virtually all surveys, in that some groups are more or less prone to complete the survey. The nonresponse issue may cause some groups to be over- or under-represented. Coverage bias is another common survey problem in which certain groups of interest in the population are not included in the sampling frame; these beneficiaries cannot participate because they cannot be contacted (no email address is available). In both cases, the exclusion of these portions of beneficiaries from the survey contributes to the measurement error. The extent to which the final survey estimates are skewed depends on the nature of the data collection processes within an individual line of business and the potential alignment between beneficiary sentiment and the likelihood of responding.

Survey practitioners recommend the use of sample weighting to improve inference on the population so that the final respondent sample more closely resembles the true population. Differential response rates are likely to be observed across age and gender groups. Weighting can help adjust the demographic representation by assigning larger weights to under-represented groups and smaller weights to over-represented groups. Stratification can also be used to adjust for nonresponse by oversampling the subgroups with lower response rates. With both adjustments, weighting may result in substantial corrections to the final survey estimates, compared to unadjusted estimates, when non-negligible error is present.

The Education Service Survey will also rely on what are often referred to as design weights—weights that correct for disproportional sampling where respondents have different probabilities of selection. Therefore, the weights are applied to make the explicit strata (the Survey Type and Benefit Type) proportional to the number of beneficiaries.

Weights are updated live within the VSignals reporting platform.1 Proportions are set based on the distribution of the previous month.2

If we let w_ij denote the sample weight for the ith person in group j (j = 1, 2, and 3), then the cell weighting (CW) formula is:

w_ij = N_j / n_j

where N_j is the population count and n_j is the number of respondents in group j.

As part of the weighting validation process, the weights of persons in each age and gender group are summed and verified to match the universe estimates (i.e., the population proportions). Additionally, we calculate the unequal weighting effect, or UWE (see Kish, 1992; Liu et al., 2002). This statistic indicates the amount of additional variation that may be expected due to the inclusion of weighting. The unequal weighting effect estimates the proportional increase in the variance of the final estimate due to the presence of weights and is calculated as:

UWE = 1 + cv²

where

  • cv = coefficient of variation of all weights, computed as cv = s / w̄.

  • s = sample standard deviation of weights.

  • w̄ = sample mean of the weights w_ij.
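For illustration, a short sketch of the weight computation and the UWE check follows, assuming the CW formula above refers to standard cell weighting (w_ij = N_j / n_j); the group keys and data structures are illustrative.

    import statistics

    def cell_weights(population_counts, respondent_counts):
        """Cell weighting: w_ij = N_j / n_j for every respondent in group j."""
        return {g: population_counts[g] / respondent_counts[g] for g in respondent_counts}

    def unequal_weighting_effect(weights):
        """UWE = 1 + cv^2, the proportional increase in variance due to weighting (Kish, 1992)."""
        w_bar = statistics.mean(weights)
        cv = statistics.stdev(weights) / w_bar
        return 1 + cv ** 2

    # Validation check: within each group, the respondent weights should sum back to the group's population count
    # weights = [w_by_group[r["age_group"], r["gender"]] for r in respondents]
    # assert abs(sum(weights) - total_population) < 1e-6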




  6. Quarantine Rules

VEO seeks to limit contact with Veterans as much as possible, contacting them only as needed to achieve measurement goals. The rules below are enacted to prevent excessive recruitment attempts. VEO also monitors Veteran participation in other surveys to ensure Veterans do not experience survey fatigue. All VEO surveys offer respondents the option to opt out, ensuring they are no longer contacted for a specific survey.

Table 5. Proposed Quarantine Protocol

Quarantine Rule | Description | Elapsed Time
Repeated Sampling for the Education Survey | Number of days between receiving one invite and receiving another for the Education Service Survey. | 4 Weeks
Other Surveys | Veterans who have recently completed other VEO surveys will not be selected for 30 days. | 30 Days
Opt Outs | Persons indicating their wish to opt out of either the phone or online survey will no longer be contacted. | Indefinite
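The sketch below shows one way the quarantine rules in Table 5 could be applied when drawing a sample; the contact-history fields are illustrative and do not reflect the actual VEO data structures.

    from datetime import date, timedelta

    def eligible_for_invite(history, today=None):
        """Apply the quarantine rules: opt-outs excluded, 4 weeks between Education Survey invites,
        and a 30-day rest after completing any other VEO survey."""
        today = today or date.today()
        if history.get("opted_out"):
            return False                                               # opt-outs are never contacted
        last_edu = history.get("last_education_invite")
        if last_edu and today - last_edu < timedelta(weeks=4):         # repeated-sampling rule
            return False
        last_other = history.get("last_other_veo_survey")
        if last_other and today - last_other < timedelta(days=30):     # other VEO surveys rule
            return False
        return True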



Part III – Assumptions and Limitations

Coverage Bias due to Email-Only Data Collection

Since the Education Service Survey is email-only, there is a segment of the population of VBA recipients that cannot be reached by the survey. This segment corresponds to persons who lack access to the Internet, do not have an email address, or elect not to share their email address with VBA. Such beneficiaries may have different levels of general satisfaction with the service they received. Moreover, some email addresses are currently obtained from VHA health records, and this process may also contribute to coverage bias because only Veterans who have recently accessed VA Healthcare appear in that source.

Data Availability for Chapter 1606 and 30 Enrollment Beneficiaries

As described in this document, VEO will send survey invitations on a biweekly basis for most Chapters and Survey Types. However, data for the Montgomery GI Bill programs (Chapter 1606 and Chapter 30) is only available to VEO on a monthly basis. Therefore, VEO will only be able to send monthly survey invitations to these beneficiaries.


Appendix 1. List of Data Extraction Variables


Variables

 SURVEY_TYPE

 FIRST_NM

 LAST_NM

 BENEFIT

 BENE_TYPE

 SERVICE_DATE

 RPO

 DOB 

 GENDER

 POSTAL_CODE

 EMAIL


Appendix 2. Survey Questions


Applying for Education Benefits

  1. After submitting my benefits application, I understood the education benefits I was entitled to.

  2. I found the process of applying for my benefits to be easy.

  3. After submitting my application, I understood how and when I would receive my benefits.

  4. I found the GI Bill Comparison Tool useful when planning my budget for school.

  5. I understood how to get information about the status of my education benefits application.

  6. After I submitted my application for benefits, I received my Certificate of Eligibility within the expected time frame.

  7. I trust VA to effectively administer my education benefits.


Enrolling in School

  1. I understood how to submit my Certificate of Eligibility to my school.

  2. I was satisfied with the assistance I received from my school when submitting my Certificate of Eligibility.

  3. I found the VA websites helpful in informing me about my education benefits.

  4. After enrolling in school, I know how and when I will receive my benefits.

  5. I trust VA to effectively administer my education benefits.


Receiving Education Benefits

  1. I receive my education benefits timely (i.e. Tuition & Fees, Yellow Ribbon, BAH, and Books & Supplies).

  2. When I enrolled in school, I understood that my education benefits may vary depending on changes to my enrollment (i.e. adjusting my course level).

  3. I understand where to find information regarding the amount of my education benefits.

  4. If I had an issue with my education benefits, I was satisfied with the assistance that I received from VA.

  5. After receiving my education benefits, I understand how to receive education benefits in the future.

  6. I trust VA to effectively administer my education benefits.

Appendix 3. References

2018 Annual Benefits Report. US Department of Veterans Affairs, Veterans Benefits Administration, https://www.benefits.va.gov/REPORTS/abr/docs/2018-education.pdf

Choi, N.G. & Dinitto, D.M. (2013). Internet Use Among Older Adults: Association with Health Needs, Psychological Capital, and Social Capital. Journal of Medical Internet Research, 15(5), e97

Kish, L. (1992). Weighting for unequal P. Journal of Official Statistics, 8(2), 183-200.

Lohr, S. (1999). Sampling: Design and Analysis. Boston, MA: Cengage Learning.

Liu, J., Iannacchione, V., & Byron, M. (2002). Decomposing design effects for stratified sampling. Proceedings of the American Statistical Association’s Section on Survey Research Methods.



1 Realtime weighting may cause some distortions at the beginning of each cycle due to empty cells or random variance in small sample distributions.

2 Using the previous month's data is a design option for handling the problem of setting targets prior to fielding each month. An alternative design is to set targets from annualized estimates to create more stability month to month. If the population is known to fluctuate from month to month, the previous month's population estimates may not be the optimal solution.

i Some of the data, Ch 30 and Ch 1606 enrollment data in particular, is only available once a month; therefore, some invites will be sent to customers with transactions within the previous month.


