Consumer Expenditure Surveys

OMB Control Number 1220-0050

OMB Expiration Date: March 31, 2027


Supporting Statement For

the Consumer Expenditure Surveys


OMB Control No. 1220-0050


B. Collections of Information Employing Statistical Methods


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The Consumer Expenditure (CE) Survey is a nationwide household survey conducted by the U.S. Bureau of Labor Statistics to find out how households in the United States spend their money. The CE Survey consists of two sub-surveys: a Quarterly Interview survey (CEQ), and a two-week Diary survey (CED). The Quarterly Interview survey collects detailed information on large expenditures such as property, automobiles, and major appliances, as well as on recurring expenditures such as rent, utilities, and insurance premiums. By contrast, the Diary survey collects detailed information on small, frequently purchased items such as food and apparel. The data from the two surveys are then combined to provide a complete picture of consumer expenditures in the United States.


The data for both surveys are collected from a representative sample of households across the United States. Both surveys use the same two-stage sample design. In the first stage, a representative sample of counties from across the United States is selected for the survey. In the second stage, a representative sample of households from those counties is selected. This two-stage sampling process is designed to generate a sample of households in which every geographic area, every demographic group, and every wealth level is well represented. The rest of this section describes the two sampling stages in more detail.


Sampling Geographic Areas

In the first stage of sampling, all 3,144 counties or county equivalents in the United States are partitioned into 1,492 small geographic clusters called “primary sampling units” (PSUs). These are the “core-based statistical areas” (CBSAs) defined by the Office of Management and Budget (OMB). They range in size from 1 county to 29 counties, with the average size being 2 counties.1 Then a representative sample of 91 of these PSUs is randomly selected for the CEQ and CED surveys. The same 91 PSUs are used in both surveys.


The 91 PSUs fall into three size classes:


PSU Size Class | Number of PSUs | Description
S | 23 | Large Metropolitan Core Based Statistical Areas. These are CBSAs with over 2.8 million people, plus Anchorage and Honolulu. They are self-representing PSUs.
N | 52 | Small Metropolitan Core Based Statistical Areas and Micropolitan Core Based Statistical Areas. These are CBSAs with under 2.8 million people. They are non-self-representing PSUs.
R | 16 | Non-Core Based Statistical Areas. These are small clusters of counties in “rural” areas created by CE staff. They are non-self-representing PSUs.


BLS selected its sample of 91 PSUs using a stratified sample design in which all 23 self-representing PSUs (the S PSUs) were selected for the survey with certainty, while the non-self-representing PSUs (the N and R PSUs) were stratified into 68 (= 52 + 16) strata using four stratification variables: income, education, computer ownership, and urbanicity. One PSU was then randomly selected from each of the N and R strata, with probability proportional to its population.
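

As an illustration of the within-stratum selection step, the minimal Python sketch below (with made-up stratum labels and populations, not BLS production code) draws one PSU from each non-self-representing stratum with probability proportional to its population.

    import random

    # Illustrative sketch of probability-proportional-to-size (PPS) selection of
    # one PSU per non-self-representing stratum. The stratum labels and
    # populations below are hypothetical, not actual CE values.
    strata = {
        "N11B": [("psu_a", 850_000), ("psu_b", 410_000), ("psu_c", 260_000)],
        "R49A": [("psu_d", 120_000), ("psu_e", 95_000)],
    }

    def select_psus_pps(strata, seed=12345):
        """Return one PSU per stratum, drawn with probability proportional to population."""
        rng = random.Random(seed)
        selected = {}
        for stratum, psus in strata.items():
            names = [name for name, _ in psus]
            populations = [pop for _, pop in psus]
            # random.choices draws with weights proportional to the supplied values
            selected[stratum] = rng.choices(names, weights=populations, k=1)[0]
        return selected

    print(select_psus_pps(strata))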


All 91 PSUs are used by the CEQ and CED surveys. However, one of CE’s major customers is the Consumer Price Index (CPI), and it uses only the 75 (=23+52) urban PSUs. That is because CPI is an urban survey, not a national survey. CPI uses CE’s data for its expenditure weights.


A New Sample of PSUs in 2025

The CE and CPI programs recently drew a new sample of PSUs that the CE program will start using in 2025. The two programs update their sample of PSUs every ten years to make sure it accurately reflects the latest geographic shifts in the American population. The old sample was based on the 2010 census, and the new sample is based on the 2020 census. The CE program will start using the new sample in 2025 and will continue using it over the ten-year period 2025-2034.


Overall, the new sample design is largely unchanged from the old one. It still consists of the 23 largest CBSAs in the United States, plus a random sample of 52 smaller CBSAs to represent the rest of the urban portion of the country, and a random sample of 16 non-CBSA areas to represent the rural portion of the country.


However, there are two key differences between the old and new sample designs: the addition of geographic clustering, and a longer phase-in/phase-out period for transitioning from the old sample of PSUs to the new one. Geographic clustering is being added to facilitate data collection. Small PSUs in the sample (under 200,000 people) are required to be geographically close to large PSUs in the sample (over 200,000 people), so that when data collectors get sick or go on vacation, those from a nearby PSU can fill in for them. The longer phase-in/phase-out period is intended to smooth out the costs of hiring and training new data collectors over time. The phase-in/phase-out process is normally completed all at once, in one year, but this time it is being done gradually over the two-year period 2025-2026. Twenty-four old PSUs are being dropped from the sample, and twenty-four new PSUs are being added. In 2025, thirteen old PSUs are being dropped and eight new PSUs are being added; in 2026, eleven old PSUs are being dropped and sixteen new PSUs are being added.


Number of PSUs Being Dropped from the Old Sample Design and Added to the New Sample Design

Category | 2025 | 2026 | Overall
Starting number of PSUs | 91 | 86 | 91
Number of old PSUs being dropped | 13 | 11 | 24
Number of new PSUs being added | +8 | +16 | +24
Ending number of PSUs | 86 | 91 | 91


The original plan was to drop eight old PSUs per year, and add eight new PSUs per year over the three-year period 2025-2027. However, the plan changed due to a temporary budget situation.



Sampling Households Within PSUs

After selecting a sample of PSUs, a sample of households is selected from the civilian non-institutional portion of those PSUs. That includes people living in houses, condominiums, and apartments, as well as people living in group quarters, such as college dormitories and boarding houses. However, it excludes the non-civilian and institutional portions of the population, such as military personnel living on base, nursing home residents, and prison inmates.


Addresses for the CEQ and CED surveys are selected from two sampling frames maintained by the Census Bureau: the Unit frame, and the Group Quarters (GQ) frame. Both frames are derived from the Master Address File (MAF), which is basically the Census Bureau’s list of all residential addresses in the United States. The MAF is updated twice per year with information from the U.S. Postal Service. The Unit frame is the larger of the two frames. It has 99% of the MAF’s civilian non-institutional addresses, and it is updated twice per year. The GQ frame is the smaller of the two frames. It has the remaining 1% of the MAF’s civilian non-institutional addresses, and it is updated every three years.


In each PSU, a “systematic sample” of addresses is selected from the two frames. The addresses on each frame are sorted by a list of sort variables, and then every k-th address down the list is selected, where “k” is the sampling interval. The sampling interval k is the total number of addresses in the PSU divided by the number of addresses to be selected for the sample. The first address in the sample is randomly selected from the first k addresses on the sorted list; after that, every k-th address down the list is selected. For example, if the first address selected is the 7th address on the list, then the sample will consist of addresses 7, 7+k, 7+2k, 7+3k, and so on.
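

As a minimal sketch of this systematic selection rule (illustrative only; the frame, sample size, and seed are hypothetical), the interval k is computed from the frame and sample sizes, a random start is drawn from the first interval, and every k-th position is kept. A fractional interval is used so the target sample size is met even when the frame size is not an exact multiple of the sample size.

    import math
    import random

    def systematic_sample(sorted_addresses, n_sample, seed=2025):
        """Select n_sample addresses from a sorted frame by systematic sampling.

        The sampling interval k is the frame size divided by the desired sample
        size; a random start is drawn from [0, k), and every k-th position after
        that is kept (fractional k is handled by taking floors of the running index).
        """
        rng = random.Random(seed)
        k = len(sorted_addresses) / n_sample      # sampling interval
        start = rng.random() * k                  # random start within the first interval
        positions = [math.floor(start + i * k) for i in range(n_sample)]
        return [sorted_addresses[p] for p in positions]

    # Hypothetical example: a frame of 1,000 addresses, sample of 40 (k = 25)
    frame = [f"address_{i:04d}" for i in range(1000)]
    sample = systematic_sample(frame, n_sample=40)
    print(sample[:3], "...", len(sample), "addresses selected")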


For the Unit frame, the addresses are sorted first by PSU; then by their State Federal Information and Processing Standards (FIPS) code; then by their County FIPS code; then by a CE stratification variable described below; and then by their Census Tract code, Census Block code, Street name, Street number, and MAFID code.


For the Unit frame, the CE stratification variable is created from other variables that are correlated with household expenditures. The purpose of the variable is to ensure that households of every wealth level are well-represented in the sample. Table 1 below shows how the households are sorted. It has codes ranging from 10 to 99 with the lower codes being for low-wealth households, and the higher codes being for high-wealth households. This sorting or “stratification” variable is created from the number of occupants in each household, their housing tenure (owner/renter), and the market value of their home (for owners) or the rental value of their apartment or home (for renters). These variables are correlated with expenditures. Households with more people tend to be wealthier than those with fewer people; homeowners tend to be wealthier than renters; and people living in high-price housing units tend to be wealthier than people living in low-price housing units.


Table 1. CE Unit Frame Stratification Code Values
(columns show the number of occupants)

Renter/Owner Quartile | 1 person | 2 persons | Vacant | 3 persons | 4+ persons
Renters 1st Quartile | 10 | 11 | 12 | 13 | 14
Renters 2nd Quartile | 25 | 24 | 23 | 22 | 21
Renters 3rd Quartile | 30 | 31 | 32 | 33 | 34
Renters 4th Quartile | 45 | 44 | 43 | 42 | 41
Owners 1st Quartile | 50 | 51 | 52 | 53 | 54
Owners 2nd Quartile | 65 | 64 | 63 | 62 | 61
Owners 3rd Quartile | 70 | 71 | 72 | 73 | 74
Owners 4th Quartile | 85 | 84 | 83 | 82 | 81
Other | | | 99 | |

All the renters are at one end of the stratification and all the owners are at the other end of the stratification. The renters and owners are further subdivided into quartiles based on monthly rental and property values to ensure that households of every wealth level are well represented in the survey. Vacant housing units are put in the middle column for the number of household occupants because although they were vacant at the time of the decennial census, when CE’s field representatives visit them, most of them will be occupied, and they could be in any of the four non-zero categories. Therefore, the middle column is their “expected” location.
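

The serpentine numbering in Table 1 follows a simple rule: codes count up across the occupancy columns in the 1st and 3rd quartile rows and count down in the 2nd and 4th quartile rows, with 99 reserved for units that fit none of the categories. The sketch below (illustrative only, not CE production code) reproduces the Table 1 codes from tenure, value quartile, and occupancy category.

    # Illustrative sketch reproducing the Table 1 code pattern; not CE production code.
    COLUMNS = ["1 person", "2 persons", "Vacant", "3 persons", "4+ persons"]
    ROWS = [("Renter", 1), ("Renter", 2), ("Renter", 3), ("Renter", 4),
            ("Owner", 1), ("Owner", 2), ("Owner", 3), ("Owner", 4)]

    def stratification_code(tenure, quartile, occupancy):
        """Return the Table 1 stratification code for a housing unit.

        tenure: "Renter" or "Owner"; quartile: 1-4 (rent or property value quartile);
        occupancy: one of the COLUMNS labels. Units that fit none of these get code 99.
        """
        try:
            row = ROWS.index((tenure, quartile))   # 0..7
            col = COLUMNS.index(occupancy)         # 0..4
        except ValueError:
            return 99                              # "Other"
        tens = (row + 1) * 10                      # 10, 20, ..., 80
        # 1st/3rd quartile rows count up across the columns; 2nd/4th count down.
        return tens + col if row % 2 == 0 else tens + (5 - col)

    assert stratification_code("Renter", 1, "1 person") == 10
    assert stratification_code("Renter", 2, "1 person") == 25
    assert stratification_code("Owner", 4, "4+ persons") == 81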


For the GQ frame, the addresses are sorted first by PSU; then by their State FIPS code; then by their County FIPS code; then by their Census Tract code; then by CHPCT (the percent of people in the tract living in college housing); and then by Census Block code. CHPCT is used because people living in college housing units are different from the rest of the people in the GQ frame, so using it as a stratification variable helps produce a more representative sample.


The Unit and GQ frames use different sorting variables, but they use the same sampling interval.


For more information on the sample design in general, please see the paper by Susan King, “Selecting a Sample of Households for the Consumer Expenditure Survey” (Attachment S), or the paper by Danielle Neiman et al., “Review of the 2010 Sample Redesign of the Consumer Expenditure Survey” (Attachment T). For more information on the geographic portion of CE’s sample design, please see the memorandum from Adam Safir to Jennifer Epps, “CE sample redesign PSU Memo for Census,” July 21, 2023 (Attachment U).


Consumer Units

A consumer unit (CU) is the unit from which the CE seeks to collect its detailed expenditure information. A CU is basically the same thing as a “household,” so the terms are often used interchangeably. However, there is a technical difference between them. A household is a group of people who live together in a housing unit. They are usually related to each other by blood, marriage, adoption, or some other legal arrangement, but the key point is that they live together. By contrast, a CU is a group of people who live together in a housing unit and who pool their incomes to make joint expenditure decisions. Thus, the difference between households and CUs is the financial relationship among the people in the housing unit: the people in a CU are financially interdependent, while the financial relationship among household members is irrelevant to the definition of a household. Approximately 99 percent of all occupied housing units have one CU, so the terms “households” and “CUs” are often used interchangeably.2


There are approximately 135 million CUs in the United States. The table below shows the estimated number of CUs in each of the 91 strata from which CE’s sample of 91 PSUs was selected.3 The stratum code is a 4-character variable: the first character is the stratum’s size class (S/N/R); the second character is the stratum’s region of the country (1=Northeast, 2=Midwest, 3=South, 4=West); the third character is the stratum’s division of the country (1=New England, 2=Middle Atlantic, 3=East North Central, etc.); and the fourth character is a unique identifier of the stratum within its size/region/division class.



Estimated Number of CUs in CE’s 91 Strata

Stratum Code | Estimated Number of CUs in the Stratum
S11A | 2,012,737
S12A | 8,203,256
S12B | 2,543,623
S23A | 3,917,636
S23B | 1,788,888
S24A | 1,503,051
S24B | 1,148,695
S35A | 2,600,690
S35B | 2,500,156
S35C | 2,480,395
S35D | 1,293,296
S35E | 1,158,575
S37A | 3,110,724
S37B | 2,900,904
S48A | 1,973,718
S48B | 1,207,171
S49A | 5,376,795
S49B | 1,934,281
S49C | 1,873,524
S49D | 1,636,850
S49E | 1,343,541
S49F | 592,702
S49G | 220,019
N11B | 1,660,062
N11C | 1,653,386
N11E | 561,549
N12C | 1,987,224
N12D | 1,362,127
N12E | 925,025
N12F | 1,072,781
N12H | 1,133,614
N23C | 1,548,964
N23D | 1,614,525
N23F | 1,189,177
N23G | 1,614,525
N23I | 2,345,396
N23J | 1,014,371
N23K | 1,628,623
N23L | 1,016,143
N24C | 1,467,510
N24D | 1,467,510
N24E | 731,857
N24F | 1,606,889
N35F | 1,620,446
N35G | 1,322,813
N35H | 962,647
N35I | 1,534,257
N35J | 1,534,257
N35K | 1,534,257
N35L | 1,216,106
N35N | 748,337
N35O | 1,534,257
N35P | 819,742
N35Q | 2,182,021
N35S | 392,941
N36A | 1,579,995
N36B | 1,809,475
N36C | 1,229,066
N36E | 2,232,359
N37C | 1,750,496
N37D | 2,171,520
N37F | 936,578
N37G | 938,766
N37H | 1,473,230
N37I | 1,042,312
N37J | 1,171,770
N48C | 1,860,080
N48D | 1,073,282
N48E | 1,445,767
N48F | 2,066,728
N49H | 1,842,096
N49I | 1,664,017
N49J | 1,512,464
N49K | 1,032,925
N49L | 2,523,186
R11A | 269,129
R12G | 309,451
R23B | 610,926
R23K | 610,925
R24B | 438,690
R24G | 438,690
R24H | 438,690
R35A | 426,932
R35B | 426,932
R35S | 426,932
R36G | 504,005
R36H | 504,005
R37A | 541,933
R37B | 541,933
R48H | 521,616
R49A | 308,535
Total | 135,000,000



These are the “steady state” number of CUs by stratum that will be in effect in 2026 and 2027. The number of CUs by stratum will be a little different in 2025 due to a temporary budget situation causing five of the PSUs (N23G, N24C, N35J, N35K, N35O) to be dropped from the sample one year early.
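

To make the stratum codes in the table above concrete, the sketch below (illustrative only, not BLS production code) splits a 4-character code such as N35K into the size class, region, division, and identifier components described earlier.

    # Illustrative parser for the 4-character stratum codes shown above.
    REGIONS = {"1": "Northeast", "2": "Midwest", "3": "South", "4": "West"}
    SIZE_CLASSES = {"S": "self-representing",
                    "N": "non-self-representing (metropolitan/micropolitan)",
                    "R": "non-self-representing (rural)"}

    def parse_stratum_code(code):
        """Split a stratum code such as 'N35K' into its four components."""
        size_class, region, division, identifier = code[0], code[1], code[2], code[3]
        return {
            "size_class": SIZE_CLASSES.get(size_class, size_class),
            "region": REGIONS.get(region, region),
            "division": division,      # Census division number within the region
            "identifier": identifier,  # unique letter within the size/region/division class
        }

    print(parse_stratum_code("N35K"))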


Sample Size and Response Rates

The table below shows the expected annual sample sizes and response rates for the CEQ and CED surveys for 2025-2027. The sample sizes in 2026 and 2027 are “steady state” sample sizes, while the sample sizes for 2025 are 5.7% lower due to a temporary sample cut caused by a temporary budget situation.



 | Quarterly Interview Survey | Diary Survey
Category | 2025 | 2026 | 2027 | 2025 | 2026 | 2027
Total Sample Size (addresses) | 49,700 | 52,700 | 52,700 | 16,800 | 17,800 | 17,800
Type B and C Noninterviews (vacant, demolished, etc.):
  Number | 7,450 | 7,900 | 7,900 | 2,500 | 2,650 | 2,650
  Percent of Total Sample | 15.0 | 15.0 | 15.0 | 15.0 | 15.0 | 15.0
Eligible Units (occupied housing units):
  Number | 42,250 | 44,800 | 44,800 | 14,300 | 15,150 | 15,150
  Percent of Total Sample | 85.0 | 85.0 | 85.0 | 85.0 | 85.0 | 85.0
Type A Noninterviews:
  Number | 25,350 | 26,900 | 26,900 | 8,600 | 9,100 | 9,100
  Percent of Eligible Units | 60.0 | 60.0 | 60.0 | 60.0 | 60.0 | 60.0
Completed Interviews:
  Number | 16,900 | 17,900 | 17,900 | 5,700 | 6,050 | 6,050
  Percent of Eligible Units (Response Rate) | 40.0 | 40.0 | 40.0 | 40.0 | 40.0 | 40.0

Each year the CEQ’s sample will have approximately 52,700 addresses. Of those addresses, 85% are expected to be occupied housing units, and the other 15% are expected to be “Type B/C” noninterviews, which are addresses that are not occupied housing units (they are nonexistent, nonresidential, vacant, demolished, etc.). Of the occupied housing units, 40% are expected to complete an interview, and the other 60% are expected to be “Type A” noninterviews, which are occupied housing units that do not participate in the survey. This is expected to yield approximately 17,900 completed interviews per year.


Similarly, each year the CED’s sample will have approximately 17,800 addresses. Of those addresses, 85% are expected to be occupied housing units, and the other 15% are expected to be “Type B/C” noninterviews. Of the occupied housing units, 40% are expected to complete their diaries, and the other 60% are expected to be “Type A” noninterviews. This is expected to yield approximately 12,100 (= 6,050 × 2) weekly diaries per year.
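

As a rough arithmetic check of these expected yields (a sketch only, using the rounded 85 percent occupancy and 40 percent response rates from the table above):

    # Rough arithmetic check of the expected yields, using the rounded rates in the
    # table above (85% occupied, 40% response). Not an official BLS computation.
    def expected_completions(addresses, occupied_rate=0.85, response_rate=0.40):
        eligible = addresses * occupied_rate
        return eligible * response_rate

    ceq_interviews = expected_completions(52_700)   # roughly 17,900 completed interviews per year
    ced_households = expected_completions(17_800)   # roughly 6,050 participating households per year
    ced_weekly_diaries = ced_households * 2         # each household keeps two weekly diaries (~12,100)
    print(round(ceq_interviews), round(ced_households), round(ced_weekly_diaries))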


Again, these are the “steady state” sample sizes that will be in effect in 2026 and 2027. The sample sizes in 2025 will be 5.7% lower due to a temporary sample cut caused by a temporary budget situation.


Nonresponse Bias

In 2022 CE staff completed a nonresponse bias study to determine whether the CEQ and CED surveys’ nonrespondents were “missing completely at random” (MCAR), and whether their missingness generated any bias in the published expenditure estimates over the ten-year period 2010-2019. The study was undertaken in response to an OMB directive, and it concluded that the nonrespondents were not MCAR but that the amount of bias they generated was small.


The MCAR part of the study had four sub-studies. They found that different demographic groups had different response rates; that respondents had different demographic characteristics from the American population as a whole; that respondents’ demographic characteristics changed over time; and that a mathematical model predicting response rates had statistically significant parameters on many of its demographic variables. Overall, all four sub-studies indicated that CE’s nonrespondents were not MCAR. The most significant finding within the four sub-studies was that high-income households had lower response rates than low-income households, which is a concern because CE is an economic survey that focuses on expenditures, and income is correlated with expenditures.


The bias part of the study also had four sub-studies. They examined four different nonresponse weighting adjustment procedures to get an idea of the range of possible values that the “correct” nonresponse-adjusted expenditure estimates might have. All four procedures increased the CEQ’s expenditure estimates by about one percent from their base-weighted (i.e., unadjusted) values, and all four procedures decreased the CED’s expenditure estimates by about one percent from their base-weighted values. Thus, in both surveys CE’s expenditure estimates would have been biased by about one percent if the nonresponse weighting adjustment had not been applied. The consistency of the nonresponse bias estimates across all four sub-studies within each survey suggests that the results are robust.


So, overall, the study showed that CE’s nonresponse weighting adjustment procedure is working well. The nonrespondents are not MCAR, but the amount of bias they generate is small, and the nonresponse weighting adjustment procedure is doing a good job compensating for the bias. The study provided a counterexample to the commonly held belief that if a survey’s data are not missing completely at random, then its estimates are subject to nonresponse bias.


For more information on the calculation of response rates, see the memorandum from Sharon Krieger to David Swanson on “2022 Response Rates for the Interview and Diary Surveys” (Attachment V). For more information on the nonresponse bias studies, see “A Nonresponse Bias Study of the Consumer Expenditure Survey for the Ten-Year Period 2010-2019” (Attachment W).




2. Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection;

  • Estimation procedure;

  • Degree of accuracy needed for the purpose described in the justification;

  • Unusual problems requiring specialized sampling procedures; and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Field representatives (FRs) from the U.S. Census Bureau, under contract with BLS, collect data from CE’s sample households both in-person and by telephone. Historically, the preference has been to collect data in-person, but during the COVID pandemic interviewing by telephone became the primary way of collecting data. The reason was to safeguard the health of CE’s field representatives and the people in the sample households, and to prevent the spread of the COVID virus. In 2021 approximately 30% of CEQ’s interviews were conducted in-person and 70% were conducted by telephone; and for CED approximately 60% of the interviews were conducted in-person and 40% were conducted by telephone. This practice will continue for the foreseeable future. See Attachment H - CED Advanced Letter Procedures and Diary Email Template for additional information on modifications resulting from COVID.


FRs visit or phone each household in the CEQ’s sample every 3 months for 4 consecutive quarters to collect information on the expenditures the households made during the previous 3 months. After participating in the survey for 4 quarters, the household is dropped from the survey and replaced by another household. The households in the CEQ survey are on a rotating schedule with approximately one-fourth of the households in the sample being new to the survey each quarter.


Prior to the first visit, the sample households are sent an advanced letter informing them that they have been selected for the survey and asking for their cooperation. For subsequent visits in the CEQ survey, the households are sent an advanced letter reminding them that it has been 3 months since they last participated in the survey and asking for their cooperation again. Field representatives enter the household’s responses into a laptop computer.


For the CED survey, field representatives visit or telephone each household in the sample two times to collect information on the expenditures they make during a 2-week period.


On the first visit in the CED survey, the field representatives introduce themselves, explain the survey, and help the households choose between filling out the diaries on paper or online. Households choosing to fill out the diaries on paper are given two weekly diary forms, one for each week of the survey period, while households choosing to fill out the diaries online are given an electronic link to the diary and an Online Diary User Guide. Households are asked to record all the expenditures they make over the 2-week survey period. For the households filling out the diaries on paper, the field representatives make a second visit to pick up the completed diaries, and thank them for participating in the survey. All the households are dropped from the survey after their 2-week period and replaced by other households.


During the COVID pandemic, procedures were modified to allow field representatives to contact households by telephone in lieu of personal visits. Whichever way the households are initially contacted, the field representatives give them three options for filling out the diaries: mailing them a diary form that allows them to fill it out by hand; emailing them a link to a diary form that allows them to fill it out online; or calling them on the telephone and having them report their expenditures orally.


After completing the second week of the CED survey and the fourth quarter of the CEQ survey, the households are sent a Thank You letter and a certificate of appreciation for their participation in the survey.


Estimation

The primary statistic calculated by the CE survey is the average annual expenditure per consumer unit. It is a weighted average whose calculation follows well-established statistical principles. The final weight for each sample CU is the product of its base weight (which is the inverse of the CU’s probability of selection); a nonresponse adjustment factor (to account for noninterviews); and a calibration adjustment factor (to post-stratify the weights to account for population undercoverage). A typical base weight for a CU in the CEQ is approximately 10,000, which means it represents 10,000 CUs – itself plus 9,999 other CUs that were not selected for the survey. A typical final weight is approximately 30,000, which means it represents 30,000 CUs in the population – itself plus 29,999 other CUs that were not selected for the survey and/or did not participate in the survey.
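

A minimal sketch of this weighting arithmetic is given below (the expenditures and adjustment factors shown are hypothetical, not actual CE values): the final weight is the product of the three factors, and the published statistic is the weighted mean expenditure over respondent CUs.

    # Minimal sketch of the weighting described above; the numbers are hypothetical.
    # final weight = base weight x nonresponse adjustment factor x calibration factor
    cus = [
        {"expenditure": 48_000, "base_weight": 10_000, "nr_factor": 2.5, "cal_factor": 1.2},
        {"expenditure": 62_500, "base_weight": 9_400,  "nr_factor": 2.4, "cal_factor": 1.1},
        {"expenditure": 35_750, "base_weight": 11_200, "nr_factor": 2.6, "cal_factor": 1.2},
    ]

    for cu in cus:
        cu["final_weight"] = cu["base_weight"] * cu["nr_factor"] * cu["cal_factor"]

    # Average annual expenditure per consumer unit: weighted mean over respondent CUs.
    avg_expenditure = (sum(cu["expenditure"] * cu["final_weight"] for cu in cus)
                       / sum(cu["final_weight"] for cu in cus))
    print(round(avg_expenditure, 2))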


For additional information on CE’s sample design and estimation methodology, please see “Chapter 16, Consumer Expenditures and Income” in the BLS Handbook of Methods (Attachment X); the memorandum from Adam Safir to Jennifer Epps, “CE sample redesign PSU Memo for Census,” July 21, 2023 (Attachment U); and Lauren Vermeer and Sharon Krieger’s memo, “Response Rate Computations for the Consumer Expenditure Survey” (Attachment Y).



3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


Keeping the CEQ’s and CED’s response rates as high as possible requires special efforts, particularly from the Census Bureau’s field staff. The field staff are trained in a variety of techniques designed to persuade people to participate in the survey, such as “refusal conversion” techniques designed to change the minds of people who are hesitant to participate. If someone continues to refuse, the field office sends a letter encouraging participation, and a senior interviewer or supervisory field representative is assigned to the case for additional refusal conversion efforts. Of course, refusal conversion efforts take time and cost money, so regional office staff decide which cases to work on, and how much effort to put into them, based on cost-effectiveness considerations.


Special computer processing techniques are also used in the CEQ to reduce respondent burden, which in turn helps keep response rates up. For example, some data collected in one interview are carried forward to subsequent interviews, such as data on household members and their personal characteristics, along with data on their properties, mortgages, vehicles, and insurance policies. Minimizing respondent burden, including interview length, is an important part of the effort to keep response rates up.


When field staff still cannot convert noninterviews to interviews, the estimation process has a noninterview adjustment to account for them. As mentioned above, every CU in the sample has a base weight equal to the number of CUs in the population it represents. In this process the respondent CUs have their weights increased to account for the nonrespondent CUs. The total sample of CUs (both respondents and nonrespondents) is partitioned into 192 subsets based on their region, CU size, income, and number of contact attempts.4 Then within each subset the base weights of the respondents are increased by multiplying them by a factor equal to the sum of the base weights for all CUs (both respondents and nonrespondents) divided by the sum of the base weights from just the respondent CUs. This makes the final weights of the respondents add up to the total number of CUs in the population.
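

The cell-based adjustment just described can be sketched as follows (illustrative only; the adjustment cells and weights are hypothetical). Within each cell, each respondent's base weight is multiplied by the ratio of the cell's total base weight (respondents plus nonrespondents) to the respondents' base weight.

    from collections import defaultdict

    # Illustrative sketch of the cell-based noninterview adjustment described above;
    # the adjustment cells and weights are hypothetical, not actual CE values.
    sample = [
        # (adjustment cell, base weight, responded?)
        ("Northeast|2-person|mid-income|1-2 contacts", 10_000, True),
        ("Northeast|2-person|mid-income|1-2 contacts", 10_000, False),
        ("Northeast|2-person|mid-income|1-2 contacts", 9_500,  True),
        ("South|4-person|high-income|3+ contacts",     11_000, True),
        ("South|4-person|high-income|3+ contacts",     11_000, False),
    ]

    total_by_cell = defaultdict(float)       # sum of base weights, all CUs in the cell
    respondent_by_cell = defaultdict(float)  # sum of base weights, respondent CUs only
    for cell, weight, responded in sample:
        total_by_cell[cell] += weight
        if responded:
            respondent_by_cell[cell] += weight

    # Respondents' weights are inflated so they represent the whole cell.
    adjusted = [(cell, weight * total_by_cell[cell] / respondent_by_cell[cell])
                for cell, weight, responded in sample if responded]
    print(adjusted)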


4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


CE plans to continue testing in support of streamlining the questionnaire, including cognitive testing of the following expenditure sections: Trips; Hobbies, sports and recreation; Computers and Electronics; Household appliances and furnishings; Non-health insurance; Professional services; Events; Contributions; and Vehicles. Testing will assess respondents’ cognitive understanding of various question wording options and help ensure that only those items requiring collection are collected.


Additionally, CE plans to perform the following tests if funding is available and will submit an Information Collection Request (ICR), as needed:


Records Path and Incentives Feasibility Test ~ 2027

The purpose of this project is to develop and test the proposed records path for select sections of the CE Interview survey, in addition to testing protocols for providing a targeted, promised incentive for respondents to use specific records during their interview. The test will attempt to determine the feasibility and impact of the proposed records path and associated incentives protocol in the CE survey. The test results will improve BLS’s understanding of the operational issues underlying the implementation of a records path with targeted incentives, including respondent and interviewer reactions, impact on interview time, and associated data quality.



Conduct a Diary Performance-Based Incentives Field Test ~ 2027

The purpose of this project is to develop and test field performance-based incentives in the Diary survey, with a focus on improving response, engagement, and quality. Previous results have shown that performance-based incentives are effective in increasing the responding sample in the Interview survey, albeit moderately. However, no such test has been done of performance-based incentives in an independent diary. Further, previous results have raised some concerns that performance-based incentives may introduce bias (though the finding was not statistically significant), and this should be evaluated in the context of the Diary survey.




5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


The sample design is a joint effort between BLS and the Census Bureau, with the two bureaus focusing on different aspects of the sample design. BLS focuses on the PSUs, and the Census Bureau focuses on the households. For more information on the sample design or the data collection effort, you may contact the following individuals.


Sample Design:
David Swanson (BLS), (202) 691-6917
James Farber (Census), (301) 763-1844

Data Collection:
Janel Brattland (BLS), (202) 691-5427
Jennifer Epps (Census), (301) 763-5342


1 The average size of a PSU in the United States is 2 counties, but the average size of a PSU in the CEQ/CED sample is 5 counties. That is because the CEQ/CED surveys select PSUs with probability proportional to their populations. That means larger PSUs are more likely to be selected than smaller PSUs.

2 A person living alone, or a group of unrelated people sharing a housing unit, is also considered to be a household. Unrelated people who share a housing unit are considered to be separate CUs if they are responsible for paying their own expenses in at least two of these three categories: food, shelter, and all other expenses. Likewise college students living away from home are considered to be separate CUs from their parents if they are responsible for paying their own expenses in at least two of these three categories.

3 The number of CUs comes from combining information about the total number of housing units in the Census Bureau’s sampling frames (the MAF) with observations made by CE’s field representatives about the number of CUs living in those housing units. CE’s observations in the field show the average number of CUs per occupied housing unit is approximately 1.015. For every 1,000 occupied housing units there are approximately 1,015 CUs. The number of CUs per stratum shown in the table below comes from allocating the nationwide total of 135 million CUs by the number of people living in each stratum according to the 2020 census.

4 There are 4 regions of the country, 4 CU size classes, 3 income classes, and 4 contact attempt classes, making 192 = 4 x 4 x 3 x 4 subsets into which the sample is partitioned. For nonrespondents the number of people in the CU is obtained from data collected in previous interviews or from talking to their neighbors. For all CUs (both respondents and nonrespondents) their income is estimated from a publicly available database from the IRS which has the average household income by zipcode. In the nonresponse adjustment process every CU is assumed to have its zipcode’s average income value.


