
Quarterly Survey of Plant Capacity Utilization

Non-Response Bias Study Report

OMB: 0607-0175



An Examination of Nonresponse Bias in the Quarterly Survey of Plant Capacity



1. Introduction


The issue of missing data in survey research presents multiple challenges to researchers and data producers. Unit nonresponse occurs when a sampled unit does not provide any response to the survey. Item nonresponse occurs when a sampled unit provides information for some, but not all, questions on the survey. Since nonrespondents may differ from respondents in terms of the variables collected on the survey, the occurrence of nonresponse gives rise to concerns about bias in the survey results.


Data collected on business surveys tend to have a skewed distribution for key data variables of interest, such as sales, inventories, expenses, and production. This implies that the majority of a tabulated cell total comes from a small number of large establishments. These large establishments typically are included in the sample as certainty cases (sample weight = 1) for each survey cycle, and the remainder of the establishments are sampled. The establishments that are selected with higher sample weights usually contribute less to the published estimates. This data distribution forces survey managers to focus resources on the larger businesses, since they are more significant to the totals than the smaller businesses. This implies that some level of bias is inherent in our survey processing methodology.


The Office of Management and Budget (OMB) standards for statistical surveys require planning a nonresponse bias analysis when unit response rates suggest the potential for bias to occur. The OMB guideline for achieving the goal of the standard suggests conducting a nonresponse bias analysis if the unit response rate is below its specified threshold. The Quarterly Survey of Plant Capacity Utilization (QPC) has consistently yielded unit response rates below this threshold for several reporting periods.


The QPC survey is a voluntary survey, so respondents are not required by law to respond. Historically, the unit response rates for the QPC survey ranged from 67% to 76%, until 2014 Q2, when the rates began to decline. Since 2014 Q2, unit response rates have not exceeded 53%, primarily due to the implementation of several changes for the QPC survey. While unit response rates have been low for the QPC survey, the utilization rates published in the QPC survey are comparable to similar rates produced by the survey sponsor, and the sponsor remains satisfied with the utilization rates that are produced.


The last OMB approval for conducting the QPC survey occurred in October of 2015 and remains in effect for three years. At the time of this last approval, OMB specified that the Census Bureau must conduct a nonresponse bias study for the QPC survey before authorizing the continuation of the survey beyond October 2018. This paper documents several nonresponse bias analysis methods that we applied for the QPC survey, as well as the corresponding results obtained from this nonresponse bias analysis. These methods included examining unit response rates (URRs) and Total Quantity Response Rates (TQRRs) for various analysis subgroups, comparing unit response rates for different quarters, and comparing respondents and nonrespondents using a frame variable (measure of size) that is available for all units on the frame. For the QPC survey, where the estimates we produce are rates, the TQRR is essentially equivalent to the coverage rate; therefore, whenever we mention the TQRR in this document, we are referring to the coverage rate. While using these nonresponse bias analysis methods, we also considered the impact of several changes that we have implemented for the QPC survey since 2015 Q1. These changes include the introduction of a new sample, the move to all-electronic reporting, and the use of a new letter that states more prominently that the QPC survey is voluntary by placing the statement in the first paragraph of the letter instead of only in the survey's instructions.


This document describes methods suggested by researchers at the U.S. Census Bureau (Lineback and Thompson, 2010) for conducting nonresponse bias studies for business surveys. This document also investigates the impact that changes to the QPC survey and collection of the survey have had on response rates. These changes appear to have had a noticeable impact on the likelihood of establishments responding to the survey.


1.1 Characteristics of the QPC Survey


The QPC survey is conducted by the U.S. Census Bureau and is funded by the Federal Reserve Board (FRB) and the Defense Logistics Agency (DLA). In 2008, the QPC survey replaced the Plant Capacity Utilization (PCU) survey, which was an annual survey that collected only fourth quarter data. The QPC survey began collecting quarterly data in 2008 Q1. The QPC survey is now a quarterly survey of approximately 7,500 establishments, with the primary goal of producing estimated rates of full and emergency capacity utilization. Establishments in the QPC survey have five or more paid employees in the U.S., and they are classified in selected North American Industry Classification System (NAICS) industries within manufacturing and publishing. The Census Bureau releases estimates for 93 industry groups, established by the FRB, approximately 75 days after the completion of each quarter.


Staff from the Manufacturing Surveys Statistical Methods Branch (MSSMB) selects the sample for the QPC survey every five years, using the Business Register, supplemented with updated information from the Economic Census, to construct the sampling frame. The sample for the QPC survey is selected using a probability-proportional-to-size (pps) sample design based on the assigned measure of size. Sampling is controlled within each of the 94 industry groups, and each establishment is assigned a probability of selection based on its respective measure of size for the industry group in which it has activity. Therefore, each establishment in the initial frame is assigned a probability of selection that is commensurate with its relative importance (based on total receipts) within the respective industry group. The Census Bureau last implemented a new sample for the QPC survey in 2015 Q1. This sample included 7,500 establishments from a sampling frame that included approximately 190,000 manufacturing establishments and 10,000 publishing establishments.
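
To make the pps selection concrete, the following is a minimal sketch in Python of how a sample might be drawn within a single industry group, with the largest units taken with certainty (sample weight = 1) and the remaining units selected systematically with probability proportional to measure of size. The record layout, the one-pass certainty rule, and the illustrative measures of size are assumptions made for this example, not the Census Bureau's production sampling system.

```python
# Minimal sketch of pps selection for one industry group (illustrative only).
import random

def pps_select(frame, n_sample, seed=2015):
    """Select n_sample establishments with probability proportional to
    measure of size (mos); units whose expected number of hits is >= 1
    are taken with certainty (sample weight = 1)."""
    total_mos = sum(e["mos"] for e in frame)

    # One-pass certainty rule: expected hits n * mos / total >= 1
    # (in practice this rule would be applied iteratively).
    certainties = [dict(e, weight=1.0) for e in frame
                   if n_sample * e["mos"] / total_mos >= 1]
    cert_ids = {e["id"] for e in certainties}
    noncert = [e for e in frame if e["id"] not in cert_ids]
    n_remaining = n_sample - len(certainties)

    # Systematic pps selection among the remaining (non-certainty) units.
    selected = []
    if n_remaining > 0 and noncert:
        random.seed(seed)
        total_nc = sum(e["mos"] for e in noncert)
        step = total_nc / n_remaining
        hits = [random.uniform(0, step) + k * step for k in range(n_remaining)]
        cum, h = 0.0, 0
        for e in noncert:
            cum += e["mos"]
            while h < len(hits) and hits[h] < cum:
                # Sample weight is the inverse of the selection probability.
                selected.append(dict(e, weight=total_nc / (n_remaining * e["mos"])))
                h += 1
    return certainties + selected

# Hypothetical frame: id and measure of size (receipts) per establishment
frame = [{"id": i, "mos": m}
         for i, m in enumerate([900, 450, 60, 55, 40, 30, 25, 20, 15, 5])]
sample = pps_select(frame, n_sample=4)
print([(e["id"], round(e["weight"], 2)) for e in sample])
```

Note that the hits list is built from a single random start, so each non-certainty unit can be selected at most once as long as no remaining unit's measure of size exceeds the selection interval; the one-pass certainty rule above is intended to remove such units first.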


Since the sample is redesigned every five years for the QPC survey, there is a need for sample maintenance in the intervening years. During each survey cycle, establishments are lost through sample attrition, so the sample must be supplemented to maintain the desired total sample size of 7,500 establishments. Therefore, in each intervening year, a birth sample is selected in order to accurately reflect the universe for a given survey year and to offset the effects of this sample attrition. The target number of establishments selected in each annual birth sample is determined by the attrition rate from the previous survey cycle. Similar to the full sample selected every five years, each birth sample is allocated across the 94 industry groups based on the attrition rates in each industry group. This ensures that the respective industry group sample sizes are maintained, while also maintaining the total sample size of 7,500 establishments.
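
The sketch below illustrates one simple way a birth sample could be allocated across industry groups so that each group is restored to its target size after attrition. The group codes, counts, and the restore-to-target rule are hypothetical assumptions for this example, not the survey's actual maintenance procedure.

```python
# Illustrative allocation of an annual birth sample to offset attrition.

def allocate_birth_sample(target_sizes, current_sizes):
    """Return the number of birth cases needed per industry group so that
    each group is restored to its target sample size."""
    return {group: max(target_sizes[group] - current_sizes.get(group, 0), 0)
            for group in target_sizes}

# Hypothetical example: three industry groups after a year of attrition
target = {"3341": 90, "3254": 75, "5111": 40}
current = {"3341": 83, "3254": 70, "5111": 34}
births = allocate_birth_sample(target, current)
print(births, "total births:", sum(births.values()))
# {'3341': 7, '3254': 5, '5111': 6} total births: 18
```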


In the QPC survey, we estimate the full production utilization rate for each industry group using only those establishments in the industry group reporting both the actual value of production and the full production estimate. We calculate simple weighted estimates of these two variables by applying the establishment’s sample weight to its respective data values and adding these weighted values across all reporting establishments in the industry group. We calculate the full utilization rate for a particular industry group by forming the ratio of the actual production weighted sum to the full production weighted sum for that given industry group. We utilize a similar procedure to estimate the national emergency production utilization rate, forming the ratio using the actual production weighted sum and the national emergency production weighted sum. For the QPC survey, the Census Bureau also publishes estimates of the average plant hours per week in operation by industry group and produces comparisons between actual and full production by industry group using various checkbox information that also is collected on the survey. This checkbox information helps to determine the primary reasons for changes in full production capability between current quarter and previous quarter, as well as the primary reasons for actual production being less than full production capability for the current quarter. We implement a stratified jackknife method of variance estimation to produce the estimates of standard error on QPC survey estimates.
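
As a simple illustration of the ratio estimate described above, the following sketch computes a full production utilization rate from weighted sums of actual and full production, using only establishments that report both items. The field names and sample records are assumptions made for this example, not QPC survey data.

```python
# Illustrative weighted ratio estimate of the full utilization rate.

def full_utilization_rate(establishments):
    """100 * (weighted actual production) / (weighted full production),
    using only establishments that reported both items."""
    both = [e for e in establishments
            if e.get("actual") is not None and e.get("full") is not None]
    num = sum(e["weight"] * e["actual"] for e in both)
    den = sum(e["weight"] * e["full"] for e in both)
    return 100.0 * num / den if den else None

# Hypothetical industry group; the third plant did not report full production
plants = [
    {"weight": 1.0, "actual": 800, "full": 1000},
    {"weight": 4.2, "actual": 150, "full": 240},
    {"weight": 7.5, "actual": 60,  "full": None},   # excluded from the ratio
]
print(round(full_utilization_rate(plants), 1))  # 100 * 1430 / 2008 = 71.2
```

The national emergency utilization rate would follow the same pattern, with emergency production capacity in the denominator.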


For the QPC survey, in order to be classified as a respondent and be included in the calculation for the full capacity utilization rate, an establishment must report both its actual production and full production capacity for that quarter. In order to be classified as a respondent and be included for the emergency capacity utilization rate, an establishment must report both its actual production and emergency production capacity for that quarter. Currently, we do not apply any imputation or nonresponse adjustment methods for the QPC survey, so estimates are based only on reported data. Adjusting weights for nonresponse or imputing values based on a ratio of identicals at the industry group publication level would yield results identical to the estimates currently produced from the QPC survey. Using alternative methods to account for nonresponse, such as donor imputation, would require additional research to determine their effectiveness.


1.2 Changes to the QPC Survey in 2015


New Sample


Beginning for 2015 Q1, we selected a new sample for the QPC survey. In a new sample year, response rates tend to be lower because the survey is new for the majority of establishments, so there is a learning curve, or conditioning period, during which these new establishments need time to become familiar with the survey. The fact that the QPC survey is voluntary also has a negative effect on whether establishments choose to complete the survey. It usually takes several quarters to gradually build response rates back up after a new QPC sample is selected, through follow-up for nonresponse and sample maintenance to exclude establishments that fall out of scope of the survey. This always proves to be a significant challenge for the QPC survey.


Transition to Electronic-Only Reporting


Beginning for 2015 Q4, we lost a large amount of funding for the QPC survey, so we were forced to make adjustments to the program. One of the biggest changes we made for the QPC survey was to eliminate our paper survey form and move to an electronic-only survey. Instead of mailing a paper questionnaire to establishments in the QPC survey, we simply mailed a letter to establishments describing the QPC survey and providing information about the data being collected on the QPC survey. There were several roadblocks in moving to an electronic-only data collection for the QPC survey. One of these issues was that some respondents did not recognize that the letter we mailed was actually for a survey, so they did not respond. In addition, the absence of a visual, hard-copy survey questionnaire for respondents to work through when completing the survey also had a negative effect on our unit response rates.


New Requirements for Mailout Documents


Also beginning for 2015 Q4, the Census Bureau released a new template for mailout documents. One of the most significant negative effects on response rates in the QPC survey results from the fact that response to the QPC survey is voluntary. The Census Bureau has always required that all voluntary surveys state somewhere in the mailout documents that response to the survey is voluntary. Prior to 2015 Q4, the QPC survey included this statement in the survey instructions, a less prominent location. However, the new template for mailout documents, implemented for the QPC survey in 2015 Q4, specified that this statement about voluntary reporting must be included in the first paragraph of the survey letter located at the front of the mailout documents. This change, combined with the transition to electronic-only reporting for the QPC survey, contributed to a drop in our survey unit response rates. Prior to these changes, our unit response rates were around 55%. After these changes, our unit response rates dropped to just around 50%. Despite the drop in our unit response rates for the QPC survey, the quality of our estimates remains high, considering we publish rates and not level estimates.



2. Analysis of Nonresponse Bias in the QPC Survey


Lineback and Thompson (2010) suggest six different methods for examining and analyzing nonresponse bias in business surveys. This document examines two of these methods to analyze nonresponse bias in the QPC survey. First, we examine Unit Response Rates (URRs) and Total Quantity Response Rates (TQRRs) for various analysis subgroups and then compare these response rates over several different quarters. Next, we compare respondents and nonrespondents in each analysis subgroup by examining their respective weighted average measure of size (total receipts) from the sampling frame. We do not explore other suggested methods to identify potential nonresponse bias in the QPC survey at this time because they are not relevant to the survey or they are too costly to conduct.


2.1 Response Rate Analysis



Lineback and Thompson (2010) also note that response rate analysis by subgroups, using characteristics that could be building blocks in the survey sample design to define these subgroups, is useful in identifying potential nonresponse bias. For this analysis of nonresponse bias in the QPC survey, we examine response rates for subgroups based on certainty status, industry priority as defined by the FRB, and length of time in the QPC survey. First, we examine two subgroups based on certainty status, certainty cases and non-certainty cases. Next, we investigate four subgroups based on industry priority, high priority (priority 1) cases, medium priority (priority 2) cases, low priority (priority 3) cases, and cases classified in publishing industries (priority 0). Finally, we analyze seven subgroups based on the length of time in the QPC survey, each defined by the year establishments entered the QPC survey (2010 through 2016). In order to examine and compare response rates over time and to assess the potential impact of the aforementioned changes to the QPC survey, we examine response rates for these various subgroups over five different reference periods. These reference periods include 2014 Q3, 2014 Q4, 2015 Q3, 2015 Q4 and 2016 Q4.


The first response rates we examine for the QPC nonresponse bias analysis are unit response rates (URRs). We calculate the URR for each analysis subgroup using the following formula employed for economic surveys at the U.S. Census Bureau:

$$\mathrm{URR} = \frac{R}{E + U} \times 100$$

where,


R: represents the unweighted number of establishments that are eligible for data collection in the given statistical period and classified as respondents. In order to be classified as a respondent for the QPC survey, an establishment must report both actual and full production for the given statistical period.

E: represents the unweighted number of establishments that are eligible for data collection in the given statistical period. Chronic refusal cases are considered to be eligible, even if they choose not to participate in the survey.

U: represents the unweighted number of establishments in the given statistical period for which eligibility cannot be determined. Establishments are assumed to be active and in-scope in the absence of evidence to prove otherwise. This includes cases that are undeliverable as addressed.


An eligible reporting unit for the QPC survey is defined as an establishment for which an attempt is made to collect data in the given statistical period. This includes all establishments that are in the initial mail file for each quarter. As mentioned above, an establishment must report both actual and full production in order to be classified as a respondent for a given quarter. These cases comprise the numerator of the URR calculation. The denominator of the URR formula is equivalent to the cases mailed out for each quarter plus the chronic refusals (not mailed in ensuing quarters), minus those cases that are determined to be no longer in scope for the QPC survey, such as establishments that have ceased operations.
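
The sketch below works through the URR arithmetic, assuming the formula reconstructed above (URR = 100 * R / (E + U)) with unweighted counts. The tallies are hypothetical and are not QPC survey figures.

```python
# Illustrative unit response rate calculation with hypothetical counts.

def unit_response_rate(r_count, e_count, u_count):
    """Unweighted unit response rate, in percent."""
    return 100.0 * r_count / (e_count + u_count)

# Hypothetical quarter: 3,800 respondents (reported both actual and full
# production), 7,200 eligible cases (mailout plus chronic refusals minus
# out-of-scope cases), and 150 cases of undetermined eligibility
# (e.g., undeliverable as addressed).
print(round(unit_response_rate(3800, 7200, 150), 1))  # 51.7
```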


The next response rates we examine for the QPC nonresponse bias analysis are total quantity response rates (TQRRs). If nonresponse adjustment cells coincide with the domains for the estimates produced, such as the QPC industry groups, then the TQRR is equivalent to a coverage rate. The QPC publications already contain some information on these coverage rates for full and emergency production by industry group. We calculate the TQRR for each analysis subgroup using the following formula:

$$\mathrm{TQRR}_x = \frac{\sum_{i=1}^{N_{TU}} r_{x,i}\, m_i}{\sum_{i=1}^{N_{TU}} m_i} \times 100$$
where,


i: represents a given tabulation unit, which is the same as the corresponding reporting unit for the QPC survey since these units are establishments.

$N_{TU}$: represents the total number of active tabulation units in the statistical period.

$r_{x,i}$: represents an indicator variable for whether tabulation unit i in the statistical period provided reported data for item x that satisfied all edits. Note that an establishment must also provide data for their actual production to be a respondent for the full and emergency production coverage rates. This is because establishments must report both the numerators and denominators to be included in the ratio estimates.

$m_i$: represents the design-weighted value of receipts for tabulation unit i.
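
The following sketch computes a TQRR (coverage rate) for a small set of hypothetical tabulation units, weighting each unit by its design-weighted receipts and counting a unit as covered only if it reported both its actual production and the item of interest. The field names and values are illustrative assumptions.

```python
# Illustrative TQRR (coverage rate): 100 * sum(r_xi * m_i) / sum(m_i).

def total_quantity_response_rate(units, item="full"):
    """Design-weighted receipts of units reporting item x (and actual
    production) as a share of design-weighted receipts of all units."""
    covered = sum(u["weighted_receipts"] for u in units
                  if u["reported"]["actual"] and u["reported"][item])
    total = sum(u["weighted_receipts"] for u in units)
    return 100.0 * covered / total if total else None

# Hypothetical tabulation units for one industry group
units = [
    {"weighted_receipts": 50.0, "reported": {"actual": True, "full": True}},
    {"weighted_receipts": 30.0, "reported": {"actual": True, "full": False}},
    {"weighted_receipts": 20.0, "reported": {"actual": False, "full": False}},
]
print(total_quantity_response_rate(units, "full"))  # 50.0
```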


For more information on response rates in business surveys, see http://www.census.gov/about/policies/quality/standards.html.


Initially, we examine URRs and TQRRs for the QPC survey at a total level over the course of five reference quarters: 2014 Q3, 2014 Q4, 2015 Q3, 2015 Q4 and 2016 Q4. Appendix A shows that the URRs and TQRRs follow a similar pattern at the overall survey level. The overall URRs range from 51.4 to 56.3 percent, while the overall TQRRs range from 53.9 to 57.1 percent. The overall URR and TQRR both peak in 2015 Q3, then decline for 2015 Q4 and 2016 Q4.


The first analysis subgroups we examine are based on certainty status, either certainty cases or non-certainty cases. There are two types of certainty cases in the QPC survey. Predetermined certainty cases are establishments that we designate for inclusion in the survey before sampling. Analytical certainties are establishments that become certainty cases during the sampling process based on their relative importance to their respective industries. Appendix A shows that URRs and TQRRs are both consistently higher for certainty cases than for non-certainty cases. Over the five reference periods we examine, the URRs range from 60.2 to 66.4 for certainty cases, peaking in 2015 Q3. The URRs for non-certainty cases range from 46.2 to 50.6, also peaking in 2015 Q3. The URRs for certainty cases are roughly 12 to 16 percentage points higher than the respective URRs for non-certainty cases. We see similar results when we examine TQRRs. The TQRRs for certainty cases are roughly 6 to 7 percentage points higher than the TQRRs for non-certainty cases for both quarters in 2014, and these differences increase to roughly 15 to 20 percentage points for both quarters in 2015 and for 2016 Q4. The highest TQRR we observe for non-certainty cases is 52.3 in 2014 Q3, while the highest TQRR for certainty cases is 70.3 in 2015 Q3. Certainty cases are typically larger establishments than non-certainty cases, so more attention is focused on these certainty cases during follow-up for nonresponse in the QPC survey. Therefore, the higher URRs and TQRRs we observe for certainty cases in the QPC survey are consistent with our expectations.


Next, we examine URRs and TQRRs for analysis subgroups based on industry priority. We compare four different priority subgroups for the QPC nonresponse bias study, including high priority industries, medium priority industries, low priority industries, and publishing industries. The FRB designates which industries are high priority, medium priority and low priority. For the purposes of this study, we combine all of the publishing industry groups into one analysis subgroup. The first thing we observe in our comparison of priority subgroups is that the URRs and TQRRs for the publishing industries are both much lower than we observe for all of the other priority subgroups. We can see from Appendix A that URRs for the publishing subgroup range from just 16.1 to 19.9, while the URRs for all of the other priority subgroups, led slightly by the medium priority subgroup, range from 50.9 to 59.7. Similarly, the TQRRs for the publishing subgroup range from just 15.2 to 24.2, also much lower than the 52.8 to 63.4 we observe for all of the other industry priority subgroups, again led slightly by the medium priority subgroup. The URRs and TQRRs for the publishing subgroup peak in the two quarters of 2014, and then drop off in 2015 and 2016. This is probably the result of selecting the new sample in 2015 Q1. The URRs and TQRRs for all of the other industry priority subgroups actually peak in 2015 Q3 and then decline. This is probably due to the changes we initiated for the QPC survey in 2015 Q4, the transition of the QPC survey to electronic-only reporting and moving the statement about the QPC survey being voluntary to a more prominent location in the survey mailout documents. Historically, we get poor response from the publishing industry groups in the QPC survey, so the lower URRs and TQRRs we observe in this study really come as no surprise.


Finally, we examine URRs and TQRRs for analysis subgroups based on how long establishments have been in the QPC survey. The analysis subgroups are defined by the year in which establishments entered the QPC survey, going back to 2010. Therefore, we are comparing seven analysis subgroups based on length of time in the QPC survey, one for each survey year 2010 through 2016. As we mentioned earlier, we redesign and select a new sample for the QPC survey every five years. We implemented new samples for the QPC survey in 2010 Q1 and 2015 Q1. In all other intervening years, we supplement the QPC survey with birth samples in order to accurately reflect the universe for a given survey year and to offset the effects of sample attrition that occurs each survey year. Therefore, the 2010 analysis subgroup represents establishments first entering the QPC survey in the 2010 QPC sample and the 2015 analysis subgroup represents establishments first entering the QPC survey in the 2015 QPC sample. All of the other subgroups represent establishments first entering the QPC survey as part of the birth samples that we select in the respective intervening years.


Appendix A summarizes the URRs and TQRRs for each of these analysis subgroups based on length of time in the QPC survey. We observe that the URRs tend to be higher for subgroups with establishments that have been in the QPC survey longer, with a few exceptions. This is probably due to a conditioning effect that occurs as establishments become more acclimated to the survey over time; the more familiar they are with the survey, the more likely they are to respond. Also, the length of time in the survey is directly correlated with the size of the establishments, since larger establishments are more likely to be resampled, so this is probably another contributor to higher URRs for establishments that have been in the QPC survey longer. The URRs for all of these analysis subgroups peak in 2015 Q3 and then decline, again probably due to the changes we made for 2015 Q4, including the more prominent statement on voluntary reporting. The TQRRs for the analysis subgroups based on length of time in the QPC survey do not quite exhibit the consistent pattern we observe for the URRs. While our comparison of URRs for these subgroups suggests that conditioning may have an effect on the number of establishments responding to the QPC survey, we do not necessarily see this same conditioning effect in determining which establishments respond to the QPC survey.


Our analysis of response rates reveals that URRs and TQRRs are both consistently higher for certainty cases than for non-certainty cases. This result meets our expectations because certainty cases are generally larger establishments, and our follow-up for nonresponse targets the larger and more important establishments in their respective industries. We also observe that URRs and TQRRs are both much lower for establishments classified in publishing industries compared to establishments classified in any of the other priority industries. Historically, we have gotten poor response from establishments in the publishing industries for the QPC survey. Since many of these publishing cases are relatively smaller, our follow-up for nonresponse does not target these establishments. Meanwhile, there do not seem to be any effects on response rates based on high, medium or low priority since URRs and TQRRs are relatively similar for these industry priority subgroups. We also observe that length of time in the QPC survey has some conditioning effect on response rates, especially for the URRs. We observe that URRs are consistently higher for establishments that are in the survey longer. While there appear to be some similar conditioning effects for TQRRs, these results certainly are not as consistent. Therefore, it appears that conditioning may have an effect on the number of establishments that respond to the QPC survey, but it does not necessarily help determine which establishments will respond to the survey.


2.2 Comparison of Respondents and Nonrespondents Using Frame Variables


The sampling frame for the QPC survey contains characteristics for both respondents and nonrespondents. We can use this information to compare respondents and nonrespondents in the QPC survey for each analysis subgroup. Since we redesigned the QPC sample in 2015 Q1, the frame we use for the 2014 Q3 and 2014 Q4 reference periods is different from the frame we use for the 2015 Q3, 2015 Q4 and 2016 Q4 reference periods.


In order to compare respondents and nonrespondents in the QPC survey, we examine differences in average measure of size (value of receipts) between respondents and nonrespondents for each analysis subgroup. Just as with our response rate analysis, we examine analysis subgroups based on certainty status, industry priority, and length of time in the QPC survey. For the 2014 Q3, 2014 Q4, 2015 Q3, 2015 Q4 and 2016 Q4 reference periods, we calculate the average measure of size for respondents and nonrespondents within each analysis subgroup. Once we calculate these average measures of size, we conduct two-tailed, two-sample t-tests of equivalence of the average (mean) measure of size calculated for respondents to the corresponding value calculated for nonrespondents within each analysis subgroup in order to determine whether these differences are statistically significant. We compute the t-statistic for each analysis subgroup using the following formula:

$$t_j = \frac{\bar{x}_{R,j} - \bar{x}_{NR,j}}{\sqrt{\hat{v}_j}}$$
where,


$\bar{x}_{R,j}$: represents the estimate of the average measure of size for response cases in analysis subgroup j

$\bar{x}_{NR,j}$: represents the estimate of the average measure of size for nonresponse cases in analysis subgroup j

$\hat{v}_j$: represents the stratified jackknife variance of the difference between $\bar{x}_{R,j}$ and $\bar{x}_{NR,j}$ in analysis subgroup j
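
The sketch below illustrates this comparison with a delete-one stratified jackknife variance for the difference in average measure of size. For brevity it uses unweighted subgroup means and omits the replicate reweighting that would accompany design-weighted estimates, so it is a simplified stand-in for the production variance system rather than a reproduction of it; the data layout is also an assumption for this example.

```python
# Illustrative t statistic with a delete-one stratified jackknife variance.
from collections import defaultdict

def mean_diff(units):
    """Difference in mean measure of size, respondents minus nonrespondents
    (unweighted here for brevity); None if either group is empty."""
    resp = [u["mos"] for u in units if u["respondent"]]
    nonresp = [u["mos"] for u in units if not u["respondent"]]
    if not resp or not nonresp:
        return None
    return sum(resp) / len(resp) - sum(nonresp) / len(nonresp)

def jackknife_t(units):
    """t = (xbar_R - xbar_NR) / sqrt(v_hat), with v_hat from a delete-one
    jackknife applied within each sampling stratum."""
    full = mean_diff(units)
    by_stratum = defaultdict(list)
    for u in units:
        by_stratum[u["stratum"]].append(u)
    v_hat = 0.0
    for members in by_stratum.values():
        n_h = len(members)
        if n_h < 2:
            continue  # a single-unit stratum contributes no replicates here
        for dropped in members:
            rep = mean_diff([u for u in units if u is not dropped])
            if rep is not None:
                v_hat += (n_h - 1) / n_h * (rep - full) ** 2
    return full / v_hat ** 0.5 if v_hat > 0 else None

# Hypothetical subgroup: (stratum, respondent flag, measure of size)
units = [{"stratum": s, "respondent": r, "mos": m}
         for s, r, m in [(1, True, 40), (1, False, 22), (1, True, 35),
                         (2, False, 10), (2, True, 18), (2, False, 12)]]
print(round(jackknife_t(units), 2))
```

Under a normal approximation, the two-tailed tests at the 10 percent level referenced in Appendix B would flag subgroups with an absolute t value greater than roughly 1.645.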


Initially, we examine differences in average measure of size between respondents and nonrespondents at an overall survey level. Appendix B compares the average measure of size for respondents and nonrespondents within each analysis subgroup, including the overall survey level, and shows whether these respective average measures of size are significantly different (shown in bold). At the overall survey level, we can see that the average measure of size for respondents is significantly different from the average measure of size for nonrespondents for all reference quarters. The average measure of size is also consistently higher for respondents at the overall survey level. These results are expected because our nonresponse follow-up procedures target the larger establishments, so it is more likely that these larger establishments will respond to the survey.


Next, we examine average measure of size for respondents and nonrespondents for analysis subgroups based on certainty status. Appendix B shows that for certainty establishments, the average measure of size for respondents is significantly different from the average measure of size for nonrespondents for all reference quarters. Meanwhile, these differences in average measure of size for non-certainty establishments are not statistically significant for any of the reference quarters.


When we examine differences in average measure of size between respondents and nonrespondents for analysis subgroups based on industry priority, we generally see that these differences are statistically significant as well, with just a few exceptions. Appendix B shows that the only analysis subgroups that do not have a significant difference in average measure of size between respondents and nonrespondents are the high priority industries for 2014 Q3 and the publishing industries for 2015 Q3 and 2015 Q4. Due to the nature of establishments in the publishing industries, the average measures of size for respondents and nonrespondents are much smaller for the publishing industries than for the other industry priority subgroups for all reference quarters.


Finally, we examine differences in average measure of size between respondents and nonrespondents for analysis subgroups based on how long establishments have been in the QPC survey. Looking at Appendix B once again, we generally see that these differences are statistically significant. For both 2014 Q3 and 2014 Q4, differences are significant for all of these subgroups, so the year establishments entered the QPC survey does not affect whether the average measure of size for respondents differs from the average measure of size for nonrespondents. This pattern is a little different for 2015 Q3, 2015 Q4 and 2016 Q4. For 2015 Q3, the average measures of size are significantly different for establishments entering the QPC survey in 2010, 2014 or 2015, but not for establishments entering the survey in 2011 through 2013. In 2015 Q4, the average measures of size are significantly different for all years entering the QPC survey except for 2014, while differences in average measure of size for respondents and nonrespondents in 2016 Q4 are significant for all years entering the QPC survey except for 2013 and 2014. One other thing to notice when examining the analysis subgroups based on when establishments entered the QPC survey is the significant increase in average measure of size (for both respondents and nonrespondents) for all sample years, starting with 2015 Q3. This occurs because of the new QPC sample selected in 2015 Q1. Establishments selected in the new sample that had been in the old sample, on average, grew over time, especially the establishments selected as births in sample years 2011-2014. Therefore, the average measures of size for these establishments are higher for 2015 Q3, 2015 Q4 and 2016 Q4. On the flip side of this, we see that the average measures of size for plants sampled in 2016 are significantly lower again since they were sampled as births.


Summarizing these results, we see that the differences in average measure of size between respondents and nonrespondents are significant for 50 of the 64 total analysis subgroups we examine in this nonresponse bias study. Looking at these 50 analysis subgroups closer, we see that for 35 of these 50 subgroups, the average measure of size for respondents is larger than the average measure of size for nonrespondents. This is magnified even further looking at the analysis subgroups based on industry priority. Excluding the publishing analysis subgroups, we observe that the difference in average measure of size between respondents and nonrespondents is statistically significant for 14 of the 15 analysis subgroups, while the average measure of size for respondents is larger than the average measure of size for nonrespondents for 13 of these 14 subgroups. These results are expected because our nonresponse follow-up methodology targets larger establishments within each industry category, especially those from higher priority industries and industries with lower initial response. Results observed for the publishing industries subgroups vary from these results primarily because establishments in the publishing industries are relatively smaller and the response rates for the publishing industries are much lower than response rates for all of the other industry priority subgroups.


Further examination of differences between respondents and nonrespondents in the QPC survey shows that there is some level of nonresponse bias for all of the analysis subgroups covered in this nonresponse bias study. We measure the bias present for each analysis subgroup in terms of measure of size (value of receipts) using the following formula:


$$\mathrm{Bias}(\bar{x}_{R,j}) = \frac{m_j}{n_j}\left(\bar{x}_{R,j} - \bar{x}_{NR,j}\right)$$


where,


$\mathrm{Bias}(\bar{x}_{R,j})$: represents the measure of nonresponse bias of the respondent mean for analysis subgroup j

$\bar{x}_{R,j}$: represents the estimate of the average measure of size for response cases in analysis subgroup j

$\bar{x}_{NR,j}$: represents the estimate of the average measure of size for nonresponse cases in analysis subgroup j

$m_j$: represents the number of nonrespondents in analysis subgroup j

$n_j$: represents the total number of active establishments in analysis subgroup j
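
The following sketch evaluates this bias measure for one hypothetical analysis subgroup; the average measures of size and counts are invented for illustration and are not values from Appendix C.

```python
# Illustrative nonresponse bias measure:
# Bias = (m_j / n_j) * (xbar_R,j - xbar_NR,j),
# i.e., the difference in average measure of size times the nonresponse rate.

def nonresponse_bias(avg_mos_resp, avg_mos_nonresp, n_nonresp, n_total):
    """Bias of the respondent mean for one analysis subgroup."""
    return (n_nonresp / n_total) * (avg_mos_resp - avg_mos_nonresp)

# Hypothetical subgroup: respondents average $12.5M in receipts,
# nonrespondents $9.0M, and 480 of 1,000 active establishments did not respond
print(nonresponse_bias(12.5, 9.0, 480, 1000))  # 1.68 (positive bias)
```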


Appendix C shows these measures of bias for each analysis subgroup, along with the respective average measures of size for respondents and nonrespondents and the corresponding nonresponse rate. There is some level of nonresponse bias inherent in the QPC survey due to our nonresponse follow-up methodology. This becomes apparent when examining the measures of bias for each analysis subgroup, where we see some measure of positive bias (average measure of size for respondents larger than for nonrespondents) for 40 of the 64 analysis subgroups. We also see some larger measures of bias for analysis subgroups in 2015 Q3, 2015 Q4 and 2016 Q4 since the average measures of size are larger for both respondents and nonrespondents, as mentioned earlier. These larger average measures of size result in potentially larger differences between respondents and nonrespondents. Another contributing factor to these bias measures is the relatively low response rates (high nonresponse rates) for the QPC survey, especially for the publishing industries. Since we have high nonresponse rates for all of our analysis subgroups, the m/n components of the respective bias measures are larger, yielding higher overall measures of bias than we would observe if response rates were higher for the QPC survey.



3. Conclusions


This document summarizes different methods used to assess possible nonresponse bias in the QPC survey using the key data variables to define the response criteria. An analysis of response rates in the QPC survey reveals relatively low response rates at the overall survey level. These low response rates are primarily the result of the QPC survey being voluntary. We see overall survey URRs and TQRRs that are just over 50 percent for each of the reference quarters covered by this nonresponse bias study. Further analysis of response rates across the various subgroups examined in this nonresponse bias study generally reveals similar results for both URRs and TQRRs. First, we observe that certainty cases have higher URRs and higher TQRRs than non-certainty cases. We also observe that URRs and TQRRs for high, medium and low priority industries are all much higher than those observed for the publishing industries. URRs for analysis subgroups based on when establishments entered the QPC survey are higher for those in the sample longer, probably indicating some level of conditioning to the survey. However, there is more variation among the respective TQRRs for these subgroups. Over the course of the five reference quarters covered by this nonresponse bias study, we generally see response rates peak in 2015 Q3 and then drop off for both 2015 Q4 and 2016 Q4, primarily because of changes made to the QPC survey. Beginning with 2015 Q4, the QPC survey initiated electronic-only reporting and moved the statement about the QPC survey being voluntary to a more prominent location in the mailout documents. Both of these changes have adversely affected response to the QPC survey.


Subsequent analysis confirms that statistically significant differences exist between the average measure of size (value of receipts obtained from the frame) for respondents and nonrespondents, but these differences are not excessively large from an analytical perspective. At the overall survey level, the average measure of size is larger for respondents than for nonrespondents. Across the various analysis subgroups examined in this nonresponse bias study, the average measure of size is also usually larger for respondents than for nonrespondents, with some exceptions. Many of the differences that we observe between the average measure of size for respondents and nonrespondents are statistically significant. There is also some level of nonresponse bias present for all of the analysis subgroups covered in this nonresponse bias study; however, much of this bias is inherent in the QPC survey as a result of our nonresponse follow-up methodology, which targets the larger, more influential establishments within each sample industry.


Our analysis of nonresponse bias in the QPC survey focuses on some of the methods for investigating nonresponse bias in business surveys that are presented in Lineback and Thompson (2010), as well as the impact of recent changes to the QPC survey methodology. Future nonresponse bias research for the QPC survey could possibly focus on comparing actual utilization rates within the analysis subgroups because low response rates do not necessarily imply an adverse effect on our utilization rates. Additional research could also focus on the use of imputation for nonresponse. Currently, we do not impute for nonresponse in the QPC survey because the estimates we publish are rates, not level estimates.



Acknowledgements


We would like to thank our reviewers, Amy Newman Smith, Colt Viehdorfer and Susan Bucci, for their assistance and thoughtful suggestions. We also would like to thank our colleagues at the Federal Reserve Board and the Defense Logistics Agency for their continued support of this research project.



References


Bates, N., Griffin, D., Petroni, R., and Treat, J. (2008), “Supporting Document B – Variables, Rates, and Formulae for Calculating Response Rates and Reporting Requirements: Economic Surveys and Censuses,” Census Bureau guidelines issued 23 Dec. 2008.

Lineback, Joanna Fane, and Katherine J. Thompson (2010), “Conducting Nonresponse Bias Analysis for Business Surveys,” Proceedings of the American Statistical Association Joint Statistical Meetings, 2010, pp. 317-31.





Appendix A


Quarterly Unit Response Rates and Total Quantity Response Rates by Analysis Subgroup


Appendix B


Quarterly Average Measure-of-Size (Receipts)

for Respondents and Nonrespondents by Analysis Subgroup


1Bolded numbers indicate a significant difference between respondents and nonrespondents at the 10% level.

2No testing was done for the Certainty cases since they have weights = 1 and do not contribute to the variance.

Appendix C


Quarterly Average Measure-of-Size (Receipts), Nonresponse Rate, and Nonresponse Bias

for Respondents and Nonrespondents by Analysis Subgroup



1Nonresponse Bias: (Average Measure-of-Size [Respondents] – Average Measure-of-Size [Nonrespondents]) x Nonresponse Rate

