
A Study of Customer Satisfaction with Office of Disability Employment Policy (ODEP) Technical Assistance (TA) Centers

OMB No. 1230-0NEW

April 2018

A Study of Customer Satisfaction with ODEP Technical Assistance (TA) Centers

Supporting Material for OMB Clearance

Part B. Statistical Methods

(used for collection of information employing statistical methods)

  1. Respondent Universe and Sampling Methods

The Department of Labor’s (DOL) Office of Disability Employment Policy (ODEP) established five Technical Assistance (TA) Centers to serve a diverse set of purposes, functions, and customers. Operating with grants funded by ODEP, these Centers provide employers, federal agencies, state governments, non-profits, individuals with disabilities, and others with technical assistance and policy development support concerning the integration of people with disabilities into employment. The purpose of this evaluation is to study the level of customer satisfaction with the TA Centers. Specifically, the objectives are to determine the extent to which customers are satisfied with the TA provided by the Centers and to document the processes and methods the TA Centers use to encourage the adoption and implementation of ODEP’s policies and practices by targeted and untargeted customers. The study population includes the customers of the following five TA Centers:

  • The Viscardi Center, which supports the Employer Assistance Resource Network (EARN);

  • The Institute for Educational Leadership, which houses the National Collaborative on Workforce and Disability for Youth (NCWD/Y);

  • West Virginia University, which maintains the Job Accommodation Network (JAN);

  • Rehabilitation Engineering and Assistive Technology Society of North America, which supports the Partnership on Employment and Accessible Technology (PEAT); and

  • The National Disability Institute, which houses the National Center on Leadership for the Employment and Economic Advancement of People with Disabilities (LEAD).

Customers may include employers, federal agencies, state governments, non-profits, individuals with disabilities, and others. Table B-1 provides an overview of the four main data collection activities. In this document, the term ‘study team’ refers collectively to DOL’s principal investigator and evaluation coordinator, other DOL staff who are supporting the evaluation, and DOL’s contractor, Westat.

The Pulse Survey will collect data on the level of customer satisfaction with various TA events across all customers. The purpose of the Pulse Survey is to collect immediate reactions to the interaction with a Center for a specific question or concern, thereby getting a “pulse” or indicator of how the Centers are doing for different types of TA events. This brief, easy-to-complete online survey will be administered to all customers who have been recently added to the customer database. Customers will be added to the database within 48 hours after the TA Center has provided a response to the customer’s request for technical assistance.

The In-Depth Survey will collect more detailed information on various aspects of customer satisfaction and the utility of the TA and policy assistance received. This annual survey will target customers receiving intensive TA (e.g., ongoing or contractual clients), frequent customers (i.e., those who receive TA services three or more times per year), or customers involved in ongoing networking events. The study team will administer this survey via the web and mail a hard-copy paper-and-pencil survey to non-respondents.

The study team will conduct the annual qualitative Customer Interviews by telephone with a subset of customers to collect richer, more detailed information on customer satisfaction and the utility of the TA and policy assistance received within the specific settings of different organizations. Respondents for these interviews will include customers receiving policy dissemination, namely customers representing employers, government agencies, and community-based organizations. We will also conduct qualitative Staff Interviews by telephone each year with Center directors or a team of Center staff to get their perspective on issues associated with the perceived adoption and implementation of ODEP policies and practices.

Table B-1. Data Collection Activities

| Data Source | Respondents | Frequency | Mode |
| --- | --- | --- | --- |
| Pulse Survey | All customers | Ongoing, approximately 48 hours after TA event initiated | Web |
| In-Depth Survey | Customers receiving intensive TA; frequent customers; customers involved in networking events | Annually | Web with mail follow-up |
| Customer Interviews | Customers representing employers, government agencies, and community-based organizations receiving policy dissemination | Annually | Telephone |
| Staff Interviews | Center directors and staff | Annually | Telephone |



Pulse Survey

Given that the purpose of the Pulse Survey is to collect data on immediate reactions to customer-initiated TA events, the study team will administer the survey approximately 48 hours after the start of a TA event and ask questions specific to the TA event itself. TA Center staff will be trained to enter all “customer-initiated” TA events into the customer database, and all such events will be included in the Pulse Survey. Hence, the sampling unit for the Pulse Survey is the TA event, not the customer, and the survey will essentially be a census of all TA events included in the database. Center staff will not enter ineligible TA events into the database, that is, events that involve no interaction with customers (e.g., website visits) or “Center-initiated” events that involve minimal to no interaction with customers (e.g., dissemination of materials at conferences, routine dissemination of information through a listserv). Because such events are unlikely to be robust enough to support critical assessments, we will not include them in the Pulse Survey.

The study team will administer the Pulse Survey by email, automatically generated by the database’s survey management system based on the TA event date. Hence, the survey will not include customers who do not provide an email address; we expect this proportion to be small. A review of JAN’s customer database indicated that more than 90 percent of customers provide an email address. Conducting the survey approximately 48 hours after the TA event date will allow time for the Centers to respond to specific requests while still being close enough to the TA event to elicit immediate reactions.

Because the sampling unit for the Pulse Survey is a TA event, the study team will invite customers who initiate a TA event more than once to participate in the Pulse Survey multiple times, potentially once for every TA event. We queried the Centers to obtain rough estimates of the proportion of customers who are frequent users, meaning they participate in three or more TA events in a year. The four Centers that responded estimated that 0-25 percent of their customers were frequent users, suggesting that the majority of customers participate in only one or two events per year. To reduce respondent burden on frequent users, we will limit the number of Pulse Survey administrations to any one customer within a six-month period: we will always include the first TA event within a six-month period for all customers and will then sample all subsequent TA events for that same customer within that six-month period at a rate of 0.5.
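To make the subsampling rule concrete, the sketch below implements one plausible reading of it in Python: a customer's first TA event in a six-month window is always surveyed, and that customer's later events in the same window are sampled at a rate of 0.5. The function name and the rolling-window interpretation are our own illustrative assumptions; the supporting statement does not specify whether the six-month period is fixed or rolling.

```python
import random
from datetime import timedelta

SIX_MONTHS = timedelta(days=183)   # approximate six-month window
SUBSAMPLING_RATE = 0.5             # rate applied to subsequent events

def select_events_for_pulse_survey(events):
    """events: list of (customer_id, event_date) tuples, sorted by date."""
    window_start = {}  # customer_id -> date of first event in current window
    selected = []
    for customer_id, event_date in events:
        start = window_start.get(customer_id)
        if start is None or event_date - start >= SIX_MONTHS:
            # First TA event in a new six-month window: always included.
            window_start[customer_id] = event_date
            selected.append((customer_id, event_date))
        elif random.random() < SUBSAMPLING_RATE:
            # Subsequent event within the window: sampled at a rate of 0.5.
            selected.append((customer_id, event_date))
    return selected
```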

Based on rough estimates of counts of TA events provided by the Centers, we anticipate more than 86,000 TA events will be eligible for the Pulse Survey in a year. Based on the estimates provided, an expected 39 percent of the TA events will be one-on-one TA events over the phone, 26 percent will be one-on-one TA provided by email, 14 percent will be trainings provided via the web, and the remaining 21 percent will be from various other types of TA.

During an earlier phase of this project, we conducted interviews with directors of five of the TA centers to inform an evaluation feasibility report for ODEP. Part of our discussion with Center directors focused on their current efforts to obtain satisfaction data from their customer base. We inquired about ongoing or past customer surveys conducted by the Centers, and the range of response rates obtained. Center directors reported that response rates for their customer satisfaction surveys varied based on mode and type of TA event, and that rates ranged from 5.5 to 85 percent, depending on these factors.

While there are a number of sources from the academic and gray literature that might be used to generate an estimate of expected response rates to the Pulse Survey and the In-Depth Survey, there are no examples of published studies in which the respondents are actual users of ODEP’s TA Centers. Although the Centers have not published any studies to date based on their customer survey efforts, their experience regarding the range of response rates we can expect for the Pulse and In-Depth Surveys is more relevant than estimates we might derive from the current literature, because the Centers are surveying the exact population that we will survey for the evaluation. For this reason, we have relied on information provided by the Centers during our feasibility research, rather than the more general customer satisfaction survey literature, to develop our estimates of expected response rates. Based on our discussions with Centers, we expect a response rate of 5-15 percent for the Pulse Survey. Assuming 86,000 events per year, and accounting for about 10 percent missing email addresses and the subsampling of frequent users, a 5-15 percent response rate would yield an expected 3,700 to 11,610 responses to the Pulse Survey in a year. (Response rate estimates for the In-Depth Survey are discussed in the next section.)
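The arithmetic behind these yield estimates can be checked with a few lines of Python. This is a rough sketch under the assumptions stated above; the upper bound matches the 11,610 figure exactly, while the lower bound comes in slightly above 3,700 because the additional reduction from frequent-user subsampling is not modeled here.

```python
# Expected annual Pulse Survey yield under the stated assumptions.
annual_events = 86_000
email_coverage = 0.90                        # ~10 percent lack an email address
surveyable = annual_events * email_coverage  # 77,400 surveyable events

for response_rate in (0.05, 0.15):
    print(f"{response_rate:.0%} response rate: ~{surveyable * response_rate:,.0f} responses")
# 5%  -> ~3,870 (closer to 3,700 once frequent-user subsampling is applied)
# 15% -> ~11,610
```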

The customer database will retain variables such as the type of customer (employer, state agency, nonprofit, individual, etc.), the type of TA event (one-on-one TA over the phone, one-on-one TA by email, training via web, etc.), and the substantive topic of the TA event (ADA, technology, etc.). In addition to using these variables for subgroup analyses of customer satisfaction, because they will be available for every TA event we can compare the characteristics of respondents to the survey with those of non-respondents to determine whether there are any significant differences between them.

In-Depth Survey

The purpose of the In-Depth Survey is to assess the satisfaction of customers who have more intense and longer term relationships with the TA Centers and to assess satisfaction with participation in networking and collaboration TA events. Questions will address satisfaction with the Centers overall but will not address specific TA events. Additionally, the questionnaire for the In-Depth Survey will include limited questions regarding policy dissemination.

The sampling unit for the In-Depth Survey is the customer. Customers eligible for the In-Depth Survey must fall into one of three categories. The first category includes customers with which the Centers have ongoing or contractual relationships and that thereby receive intensive TA; DOL will survey only the persons considered to be the points of contact within an agency or organization that has an ongoing relationship with the Center. The study team will identify these customers through lists provided by the Centers. The second category includes customers who are considered “frequent users” of the TA Centers, that is, customers who initiate TA events three or more times within a year. The study team will identify these customers using the data stored in the customer database. The third category includes customers involved in networking and collaboration TA events, whom we will also identify using the customer database.

The In-Depth Survey will be a census, with all eligible customers included in the sample. It is possible that a customer will meet more than one condition for inclusion in this survey. The study team will de-duplicate these categories (ongoing relationships, frequent users, and those involved in networking and collaboration) within a Center; however, we will not de-duplicate the sampling frame across Centers. Thus, if a customer interacts with more than one Center, DOL will invite them to participate in multiple In-Depth Surveys.
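The sketch below illustrates this de-duplication logic with pandas, using a hypothetical sampling frame in which one row represents one (Center, customer, eligibility category) combination. The column names and example values are assumptions for illustration only.

```python
import pandas as pd

frame = pd.DataFrame({
    "center":   ["JAN", "JAN", "EARN", "JAN"],
    "customer": ["C001", "C001", "C001", "C002"],
    "category": ["frequent user", "networking", "frequent user", "ongoing relationship"],
})

# De-duplicate within a Center: one invitation per customer per Center, even
# if the customer qualifies under several eligibility categories. Customers
# served by multiple Centers (C001 here) stay on the frame once per Center,
# so they receive one In-Depth Survey invitation from each Center.
in_depth_frame = frame.drop_duplicates(subset=["center", "customer"])
print(in_depth_frame)
```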

Centers provided rough estimates of the number of customers who would be eligible for the In-Depth Survey. The Centers indicated that there are 84-90 ongoing relationships currently, each with up to three points of contact, for a total of 84-270 customers eligible for the In-Depth Survey. Additionally, we asked Centers to indicate what percentage of customers are frequent users. Four Centers responded that 0-25 percent of customers are frequent users. Assuming the remaining Center is similar, there would be approximately 5,200-5,800 frequent users per year. Three Centers indicated that they conduct networking events, for an estimated total of 500 customers per year.

Because our data collection strategy for the In-Depth Survey includes extensive effort to follow up with non-respondents, we anticipate a much higher response rate for the In-Depth Survey than for the Pulse Survey. A response rate of 40-60 percent is a reasonable estimate for this design; during interviews conducted for the Phase I Feasibility Study, JAN reported obtaining a 47 percent response rate for similar customer satisfaction surveys it has conducted. A 50 percent response rate would yield approximately 42-135 responses from the ongoing relationship group, 2,600-2,900 responses from frequent users, and about 250 responses from networking customers per year.

Customer Interviews

Another form of data collection will be qualitative Customer Interviews with a strategically selected, stratified sample of customers. We plan to conduct up to 24 interviews per year, stratified by the three types of customers receiving the bulk of TA (excluding individuals): employers, government agencies, and community-based organizations. Thus, approximately eight interviews will be conducted with each group. We deliberately exclude individuals so as to gauge the utility and implementation of TA and policy within the specific settings of different organizations.

The study team will purposively select interviewees within each customer type based on their responses to In-Depth Survey questions. We will select up to four respondents who rate customer satisfaction high, to obtain success stories, and up to four respondents who rate customer satisfaction low, to explore how TA and policy dissemination can be improved in the future. The range of organizations participating in the interviews will be further determined by size of organization, mission, and number of internal (organization-based) customers. For example, with respect to government agencies, we would strive to generate a sample of federal, state, and local agencies addressing employment, education, and disability that vary in the number of internal customers receiving TA, with half who report high satisfaction and half reporting low satisfaction.

Staff Interviews

The study team will also conduct annual qualitative interviews with the directors of all five TA Centers (and additional staff identified by each Center director as appropriate).

Table B-2 summarizes the estimated numbers described above for the universe of entities covered by this data collection request.



Table B-2. Estimate of the Universe of Entities for the Proposed Annual Data Collection Activities

| Data Source | Population Entity | Maximum Estimated Number of Entities in Universe | Estimated Response Rate | Maximum Estimated Number of Responses |
| --- | --- | --- | --- | --- |
| Pulse Survey | TA event | 86,000 TA events | 15 percent | 11,610 |
| In-Depth Survey | Customers receiving intensive TA | 270 customers receiving intensive TA | 50 percent | 135 |
| In-Depth Survey | Frequent customers | 5,800 frequent customers | 50 percent | 2,900 |
| In-Depth Survey | Customers involved in networking events | 500 customers involved in networking events | 50 percent | 250 |
| Customer Interviews | Customers representing employers, government agencies, and community-based organizations who responded to the In-Depth Survey | 2,628 customers who responded to the In-Depth Survey, from which up to 24 will be purposively selected | 100 percent | 24 |
| Staff Interviews | Center directors and staff | 5 Center directors and 5 additional staff | 100 percent | 10 |



  2. Procedures for the Collection of Information

The universe of customers engaging in “customer-initiated” TA events will be invited to participate in the Pulse Survey. Because the sampling unit for the Pulse Survey is a TA event, the study team will invite customers who initiate a TA event more than once to participate in the Pulse Survey multiple times, potentially once for every TA event. To reduce respondent burden on frequent users, the team will limit the number of Pulse Survey administrations to any one customer within a six-month period: we will always include the first TA event within a six-month period for all customers and will then sample all subsequent TA events for that same customer within that six-month period at a rate of 0.5.

No statistical methods are needed for stratification and sample selection for the In-Depth Survey. The universe of eligible customers for this survey (i.e., ongoing or contractual relationships; frequent users; those involved in networking and collaboration activities) will be invited to participate.

No statistical methods are needed for stratification and sample selection for the qualitative interviews. Purposive selection will be used for the Customer Interviews. Respondents to the Customer Interviews will be selected to provide variation by organization type (i.e., employers, government agencies, and community-based organizations) and satisfaction level (high versus low).

No statistical methods are needed for stratification and sample selection for the Staff Interviews because directors from all five TA Centers will be invited to complete those interviews.

Because both surveys are censuses of their eligible populations, no estimation procedures are needed. There are no unusual problems requiring specialized sampling procedures. We do not use less-than-annual data collection cycles because of the relatively short duration of the currently funded operational period of the five TA Centers included in this study.

  3. Methods to Maximize Response Rates and Deal with Nonresponse

The study team will administer the In-Depth Survey once per year over an eight-week period, first inviting eligible customers to complete the survey via the web. To maximize response rates, we will also send weekly email reminders to non-respondents with valid email addresses. In addition, three weeks after we send the initial email invitation, we will send a mail survey to all non-respondents, including those without email addresses. The team will also send a follow-up reminder postcard to non-respondents two weeks after sending the mail survey.

Weighting Procedures

We will weight all survey data in order to correct for differential rates of response across sampling strata. Because every customer eligible for the In-Depth Survey is included in the sample, all sample members will have a base weight of one. For the Pulse Survey, most events eligible for the survey are included and thus will have a base weight of one; for the few events from frequent users to which the 0.5 subsampling rate was applied, we will assign a base weight of two.

We will adjust base weights for both surveys for nonresponse. Nonresponse includes failure to respond as well as customers for whom contact information is not available. We will use variables available in the customer database to create nonresponse adjustments. These variables are useful for examining nonresponse because they are available for all sample units, both respondents and nonrespondents. Variables for the Pulse Survey are available at the event level (how the event was initiated, date of event, substantive issue), the customer level (customer type, geographic information, frequency of use of the Center), and the Center level (the Center that provided the TA). For the In-Depth Survey, customer-level and Center-level variables are available for nonresponse adjustment. We will employ a decision tree algorithm, such as CHAID, to determine the variables from the customer database that are associated with response, and will use those variables to create weighting cells of respondents and nonrespondents. We will then adjust the base weights of respondents to account for nonrespondents within the same cell, such that the sum of the adjusted weights of the respondents within a cell equals the sum of the base weights of the respondents and nonrespondents in that cell, as sketched below.
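The following sketch shows the weighting-cell adjustment in pandas under simplified assumptions: the cells are given directly rather than derived from a CHAID tree, and the data are hypothetical. Within each cell, respondents' base weights are inflated so that their adjusted weights sum to the cell's total base weight across respondents and nonrespondents.

```python
import pandas as pd

sample = pd.DataFrame({
    "cell":        ["A", "A", "A", "B", "B"],   # weighting cells (from CHAID in practice)
    "base_weight": [1.0, 1.0, 2.0, 1.0, 1.0],   # 2.0 = subsampled frequent-user event
    "responded":   [True, False, True, True, True],
})

# Inflate respondents' weights by (cell total base weight) / (respondents'
# base weight total); nonrespondents receive a final weight of zero.
cell_total = sample.groupby("cell")["base_weight"].transform("sum")
resp_total = (sample["base_weight"] * sample["responded"]).groupby(sample["cell"]).transform("sum")
sample["final_weight"] = sample["base_weight"] * (cell_total / resp_total) * sample["responded"]
print(sample)  # in cell A, respondent weights 1.0 and 2.0 become 1.33 and 2.67, summing to 4.0
```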

Nonresponse Bias Analysis

We will conduct a nonresponse bias analysis on the final dataset to investigate the potential for bias in survey estimates due to different types of individuals having different response rates. The type of nonresponse to be analyzed will be unit-level (entire-questionnaire) nonresponse. Using t-tests and chi-square tests, we will compare the weighted estimates of each of the variables available in the customer database between non-respondents and respondents for both the Pulse and In-Depth Surveys. We will use customer database variables in all tests rather than survey variables, because they are available for all sample units regardless of whether or not they responded. We will conduct the nonresponse bias analysis twice, once with base weights and then with the final weights, in order to identify biases not corrected for by the weighting adjustments. Base weights provide population estimates prior to any adjustments made to correct for nonresponse, and will be a good indicator of the composition of the respondents. The final weights included in the dataset used for analysis will include the adjustments for nonresponse.
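As an illustration, the sketch below runs one such comparison in Python with simulated data standing in for the customer database. The actual analysis would compare weighted estimates; this unweighted version only shows the shape of the tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000
responded = rng.random(n) < 0.15             # ~15 percent unit response
customer_type = rng.integers(0, 3, size=n)   # e.g., employer / agency / individual
events_per_year = rng.poisson(2, size=n)     # frequency of use of the Center

# Chi-square test: is customer type distributed differently among
# respondents and nonrespondents?
table = np.array([[np.sum((customer_type == t) & responded) for t in range(3)],
                  [np.sum((customer_type == t) & ~responded) for t in range(3)]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# t-test: do respondents and nonrespondents differ in frequency of use?
t_stat, p_t = stats.ttest_ind(events_per_year[responded], events_per_year[~responded])
print(f"customer type: p = {p_chi2:.3f}; frequency of use: p = {p_t:.3f}")
```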

Missing Data

Record-level and item-level missing data will not be replaced. No imputation will be conducted for the surveys. Records or items with missing data will be excluded from the analysis. Records with no data (i.e., non-respondents) will be removed from the analytical data set.

  4. Test of Procedures or Methods to be Undertaken

To address the research questions, we will use a combination of quantitative analysis (e.g., descriptive statistics of survey data, significance testing of differences in estimates between subgroups, regression analysis), content analysis of the qualitative data, and a review of administrative documents.

Quality, relevance, and utility are important TA outputs that all five Centers must deliver effectively. We operationalize and capture TA customer perceptions of these outputs on both the Pulse Survey and the In-Depth Survey, using multiple items. Quality indicators, for example, include measures of satisfaction and the timeliness of service delivery. Utility indicators include customer reports of the usefulness of information received and the extent to which they report putting that information into practice. Indicators of relevance include items assessing the extent to which the information received addressed customers’ specific needs and the staff’s ability to answer specific concerns. These indicators, and others, will constitute the dependent variables (DVs) of interest in the quantitative analysis.

For all proposed DVs, we will first examine their frequency distributions, which we will present using graphs, tables, and other visual displays. For example, to address sub-question 2 (“To what extent do customers perceive that TA Center assistance has helped them resolve their issue or challenge?”), we will display the percent of customers who report ending the TA interaction with their concern resolved or not resolved. We will present these results for the overall sample but also examine variation by subgroups, such as customer segment and mode of contact. We will test for significant differences between groups using techniques appropriate to the type of data. For example, issue-resolution responses are categorical, so we will test for significant differences in patterns of responses using chi-square tests. To test for significant differences for interval data, such as comparing mean satisfaction scores between customer groups, we will use t-tests.

Minimum Detectable Effects (MDEs) for differences in population estimates between groups and Minimum Detectable Odds Ratios (MDORs) for logistic regression with a single binary covariate were used to assess the reliability of the study results from both the Pulse Survey and the In-Depth Survey. The MDE is the smallest difference in an estimate between two groups that is statistically significant under certain assumptions. The MDOR is the smallest odds ratio between the treatment group and the control group for which one can reject the null hypothesis that the coefficient of the binary covariate is zero (β = 0). Tables B-3 and B-4 below present the MDEs for chi-square tests comparing proportions, assuming two variables, both coded dichotomously. In Tables B-3 and B-4, the dependent variable is “issue resolution” (0 = “issue not resolved”; 1 = “issue fully or partially resolved”). For Tables B-5 and B-6, the dependent variable is “customer satisfaction” (0 = “not satisfied”; 1 = “satisfied”).

Because we expect the two surveys to have different sample sizes, we present MDEs for the Pulse Survey and the In-Depth Survey separately. Each row includes MDEs and MDORs for independent variables with different levels of prevalence in the sample. MDEs and MDORs will be smaller for variables for which the distribution of the characteristic in the sample (e.g., gender) is relatively equal (i.e., 50-50). In the examples, both measures were calculated with 80 percent power using a two-sided test at the 5 percent level of significance. The impact on variances of the unequal weights due to nonresponse adjustment is reflected through an inflation factor, the design effect (DEFF). The lower the response rate, the higher the expected DEFF, and thus the higher the MDE and MDOR. Because of the low anticipated response rate for the Pulse Survey, the DEFF was conservatively assumed to be 2.0; for the In-Depth Survey, the DEFF was assumed to be 1.3. Although the Pulse Survey is expected to generate a low response rate, the Centers engage in so many transactions each year (roughly 86,000) that we still expect a robust sample size to be available for statistical tests (N = 11,610).
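As an illustration of where the table entries come from, the sketch below applies the standard two-proportion power formula to the effective sample size given in the table footnotes. The exact formula the study team used is not stated here, but this version reproduces the Table B-3 entries.

```python
from scipy.stats import norm

def mde(n_eff: float, prevalence: float, p: float) -> float:
    """Smallest detectable difference in proportion p between a group with the
    given prevalence and the balance of the population (two-sided alpha = 0.05,
    80 percent power, n_eff = n / DEFF)."""
    n1, n2 = n_eff * prevalence, n_eff * (1 - prevalence)
    z = norm.ppf(1 - 0.05 / 2) + norm.ppf(0.80)  # ~1.96 + 0.84
    return z * (p * (1 - p) * (1 / n1 + 1 / n2)) ** 0.5

# Reproduces Table B-3 using the effective sample size of 5,650 from its footnote.
print(f"{mde(5_650, prevalence=0.50, p=0.50):.2%}")  # 3.73%
print(f"{mde(5_650, prevalence=0.10, p=0.50):.2%}")  # 6.21%
```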

Table B-3. MDEs for comparing a proportion estimated for a non-specific characteristic to a proportion estimated for the balance of the population for eligible entities receiving the Pulse Survey, n=11,610 a

Column headers show the estimated proportion for issue resolution.

| Prevalence of characteristic in population | 50% | 25% or 75% | 10% or 90% | 5% or 95% |
| --- | --- | --- | --- | --- |
| 10% or 90% | 6.21% | 5.38% | 3.73% | 2.71% |
| 20% or 80% | 4.66% | 4.03% | 2.79% | 2.03% |
| 30% or 70% | 4.06% | 3.52% | 2.44% | 1.77% |
| 40% or 60% | 3.80% | 3.29% | 2.28% | 1.66% |
| 50% | 3.73% | 3.23% | 2.24% | 1.62% |

a Assumes the design effect due to nonresponse adjustment of weights is 2.0, which makes the effective sample size 5,650.



Table B-4. MDEs for comparing a proportion estimated for a non-specific characteristic to a proportion estimated for the balance of the population for eligible entities receiving the In-Depth Survey, n=3,285 a

Column headers show the estimated proportion for issue resolution.

| Prevalence of characteristic in population | 50% | 25% or 75% | 10% or 90% | 5% or 95% |
| --- | --- | --- | --- | --- |
| 10% or 90% | 9.28% | 8.04% | 5.57% | 4.05% |
| 20% or 80% | 6.96% | 6.03% | 4.18% | 3.03% |
| 30% or 70% | 6.08% | 5.26% | 3.65% | 2.65% |
| 40% or 60% | 5.68% | 4.92% | 3.41% | 2.48% |
| 50% | 5.57% | 4.82% | 3.34% | 2.43% |

a Assumes the design effect due to nonresponse adjustment of weights is 1.3, which makes the effective sample size 2,527.



Table B-5. MDORs for logistic regression estimating the effect (expressed as odds ratios) of a single binary covariate on customer satisfaction among Pulse Survey respondents, n=11,610 a

Column headers show the estimated proportion for customer satisfaction.

| Prevalence of characteristic in population | 20% | 50% | 70% | 80% |
| --- | --- | --- | --- | --- |
| 10% or 90% | 1.34 | 1.28 | 1.33 | 1.41 |
| 20% or 80% | 1.25 | 1.21 | 1.24 | 1.28 |
| 30% or 70% | 1.22 | 1.18 | 1.20 | 1.24 |
| 40% or 60% | 1.20 | 1.17 | 1.19 | 1.22 |
| 50% | 1.20 | 1.16 | 1.18 | 1.21 |

a Assumes the design effect due to nonresponse adjustment of weights is 2.0, which makes the effective sample size 5,650.



Table B-6. MDORs for logistic regression estimating the effect (expressed as odds ratios) of a single binary covariate on customer satisfaction among In-Depth Survey respondents, n=3,285 a

Column headers show the estimated proportion of customer satisfaction for the control group.

| Population distribution of treatment group | 20% | 50% | 70% | 80% |
| --- | --- | --- | --- | --- |
| 10% or 90% | 1.52 | 1.46 | 1.57 | 1.74 |
| 20% or 80% | 1.38 | 1.32 | 1.38 | 1.47 |
| 30% or 70% | 1.33 | 1.28 | 1.32 | 1.39 |
| 40% or 60% | 1.31 | 1.26 | 1.29 | 1.35 |
| 50% | 1.31 | 1.25 | 1.29 | 1.34 |

a Assumes the design effect due to nonresponse adjustment of weights is 1.3, which makes the effective sample size 2,527.

Based on Tables B-3 through B-6, it is clear that the projected sample sizes of the Pulse Survey and the In-Depth Survey are large enough to produce stable results for the evaluation.







By design, we have included multiple survey items to capture different facets of a single construct, such as satisfaction with TA staff. Using factor analysis, we may look for opportunities to combine multiple items into a scale or index, as scales tend to have greater reliability than single items. In such cases, we will calculate measures of internal consistency/reliability to be sure that the items in the scale reflect the underlying construct (i.e., Cronbach’s alpha of .7 or greater).
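A minimal sketch of the reliability check follows, assuming a hypothetical respondents-by-items matrix of satisfaction ratings. It implements the standard Cronbach's alpha formula; scale construction in the actual analysis would follow the factor analysis described above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents and columns = scale items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
construct = rng.normal(size=(200, 1))                       # shared latent construct
ratings = construct + rng.normal(scale=0.7, size=(200, 4))  # four correlated items
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # comfortably above the .7 threshold
```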

An important part of answering the research questions is to test hypotheses about which factors best explain variation in perceived quality, relevance, utility, and likelihood of implementation. Following a review of the descriptive data, including the examination of subgroup variation in key outputs, we will meet with ODEP to present key interim findings and to discuss potential hypotheses about factors that might explain the differences observed. Based on the feedback from this discussion, we will use regression analysis to test hypotheses about the key drivers of the dependent measures. Both survey instruments, as well as administrative records, contain data on factors that may represent sources of variation in TA outputs. Examples include mode of TA interaction (email, phone call, in-person), frequency of interaction (intermittent vs. frequent users), and customer segment (organization representative vs. an individual requestor). An example hypothesis would take this form:

H1: Organizational users of TA will have higher ratings of satisfaction than Individual users, other things equal.

“Organizational users” may have learned through experience how to get the information they require effectively, compared to a user unaffiliated with a particular organization who may not know, initially, which questions to ask. Using regression, we would test the hypothesis by estimating the effect associated with being an organizational user on the DV, customer satisfaction (which may be a single item or a scale score based on multiple items). Our model would adjust for the influence of other factors, such as mode of interaction, by including additional variables as controls. Categorical variables (e.g., user type, mode of interaction) will be dichotomously coded (i.e., 1 = member of the designated category, 0 = all others). The equation to test the hypothesis would be:

Ŷ = β0 + β1X1 + β2X2 + e

Where:

Ŷ is the predicted value of customer satisfaction,

β0 is a constant term,

β1 is the slope, or change in the value of Ŷ associated with an increase of one in variable X1 (in this case, the “effect” of being an organizational user, other factors held constant),

β2 is the slope associated with variable X2 (a control variable, such as mode of customer contact), and

e is an error term, representing unexplained variance.

The model will likely include several control variables, but for simplicity, only one is included in the example above. In order to reject the null hypothesis, which states that there is no relationship between user type and customer satisfaction, the coefficient β1 must be positive and statistically significant. Rejecting the null would allow us to say with confidence that organizational users have higher satisfaction with their TA experiences, on average, even when other factors are considered. The practical implications of such a finding might include a recommendation that TA providers would benefit from targeted refresher training focused on handling individual users not affiliated with the Centers’ common organizational stakeholders. We will apply this general approach for all hypothesis testing.
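A minimal sketch of this test follows, using simulated data and illustrative variable names (org_user, phone, satisfaction) that are not the study's own. In the actual analysis the model would incorporate the survey weights; ordinary least squares is used here only to show the shape of the test for β1.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "org_user": rng.integers(0, 2, n),   # 1 = organizational user, 0 = individual
    "phone":    rng.integers(0, 2, n),   # 1 = contact by phone (control variable)
})
# Simulate a modest positive effect of organizational-user status on satisfaction.
df["satisfaction"] = 3 + 0.4 * df["org_user"] + 0.1 * df["phone"] + rng.normal(0, 1, n)

model = smf.ols("satisfaction ~ org_user + phone", data=df).fit()
# H1 is supported if the org_user coefficient is positive and significant.
print(f"b1 = {model.params['org_user']:.2f}, p = {model.pvalues['org_user']:.4f}")
```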

We will also conduct quadrant analysis to help address research sub-questions focused on recommendations and prioritizing areas of TA improvement. Quadrant analysis is a tool often used in customer satisfaction research to help organizations consider which potential areas of improvement they will prioritize. For this analysis, we will plot the predictors of customer satisfaction along two dimensions: (1) the relative importance of each factor in affecting overall satisfaction, and (2) TA provider performance on each factor.

Relative importance is estimated using regression analysis to identify the “key drivers”: those variables that, when controlling for other factors, are associated with statistically significant and relatively large standardized coefficients in the model. Performance scores on each item come from the survey data. We would then plot the factors on a matrix, the midpoint of which is the intersection of the means of both variables. Factors in the top-left quadrant are those deserving near-term attention, as they contribute the most to satisfaction but are areas in which the Centers are not scoring well in relative terms. We can use the results of the quadrant analysis to shed light on potential targets for intervention to improve overall satisfaction and perceived implementation/adoption of ODEP policies.
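The classification step of the quadrant analysis can be sketched in a few lines, assuming hypothetical factors with key-driver coefficients as importance scores and mean survey ratings as performance scores. Factors above the importance midpoint but below the performance midpoint are flagged as near-term priorities.

```python
import pandas as pd

factors = pd.DataFrame({
    "factor":      ["timeliness", "staff knowledge", "relevance", "follow-up"],
    "importance":  [0.45, 0.30, 0.25, 0.40],   # standardized coefficients (illustrative)
    "performance": [3.2, 4.5, 4.1, 3.0],       # mean survey ratings (illustrative)
})

# The quadrant midpoint is the intersection of the two means; "top-left"
# factors have above-average importance and below-average performance.
imp_mid = factors["importance"].mean()
perf_mid = factors["performance"].mean()
factors["priority"] = (factors["importance"] > imp_mid) & (factors["performance"] < perf_mid)
print(factors)  # flags timeliness and follow-up as near-term priorities
```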

Qualitative Data Analysis

We will also integrate qualitative data gathered from telephone interviews to augment and add descriptive detail and nuance to our findings regarding the quality, relevance, and utility of the TA received. Qualitative analytical techniques, such as those described below, are well suited for organizing and interpreting the data from the customer and staff interviews. The study team will use content analysis to code data according to themes, using the research questions as a framework. Descriptive codes, which simply classify data into thematic groups, will later be replaced by pattern codes after subsequent re-readings. Pattern codes indicate emergent patterns in the data and are typically used in the last stages of analysis.

We will develop an a priori content coding system based on domains of interest from the interview protocols, as well as categories for emergent codes that develop from respondents’ answers. We will identify common themes among respondents and will determine if there are pattern differences by customers. We will support all our conclusions through the integration of illustrative examples and anonymous quotes from individual respondents. Quotations provide valuable evidence from the customers themselves that strengthen the credibility of the analysis, because they generate a direct link between the more abstract content of the results and the data.

We will also apply quantitative and qualitative techniques to answer questions about the role of the TA in the perceived adoption and implementation of ODEP policies. Because each Center has a different focus and customer base, the survey data we will collect are likely to uncover unique barriers to adoption and implementation among different customer groups. We expect the Pulse survey to generate several thousand responses annually, and the In-Depth survey to generate several hundred responses per year, so sample sizes should be sufficient to generate meaningful findings regarding which strategies work best for certain types of TA recipients.

Our In-Depth survey contains several items designed to measure perceptions of TA adoption, and following our review of those results, we will use information from the qualitative interviews to understand barriers to adoption, and learn from those who successfully adopted the information and how they did it. We will integrate results of our qualitative analysis of interview data, using the techniques described in the last section, to reinforce, clarify and provide nuance to the quantitative findings. Qualitative analysis will also be the primary source of findings for certain sub-parts of Research Question 3, including those that address promising practices in TA that may lead to perceived adoption, and best methods for DOL to track whether TA customers are implementing the practices advocated by their technical assistance providers. The complex nature of these sub-questions requires a high level of probing and follow-up best obtained through the qualitative interview process.

The study team will conduct a cognitive test of questionnaire items to assess respondent understanding of the questions. The cognitive test will be conducted with no more than nine TA Center customers. The results of the tests will be used to refine the questionnaire items and minimize burden.

  5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The individuals listed in Table B-7 consulted on statistical aspects of the design and will also be primarily responsible for collecting and analyzing the data for the agency.

Table B-7. Persons Involved in Proposed Data Collection and Analysis of Data

| Name | Title and Agency | Email Address | Telephone Number |
| --- | --- | --- | --- |
| Jarnee Riley | Senior Study Director, Westat | jarneeriley@westat.com | 240-453-2724 |
| William Frey | Vice President, Westat | williamfrey@westat.com | 301-610-5195 |
| Jennifer Kali | Statistician, Westat | jenniferkali@westat.com | 301-738-3588 |
| Bradford Booth | Senior Study Director, Westat | bradfordbooth@westat.com | 301-212-2151 |
| Naomi Yount | Senior Study Director, Westat | naomiyount@westat.com | 301-610-8842 |
| Martha Palan | Senior Study Director, Westat | marthapalan@westat.com | 301-738-3526 |




