OMB Information Collection Request
Supporting Statement B
U.S. Department of Commerce
U.S. Census Bureau
High Frequency Surveys Program
Household Trends and Outlook Pulse Survey
OMB Control Number 0607-1029
B. Collections of Information Employing Statistical Methods
The monthly topical sample size is currently 17,812 housing units after conducting sample replenishment in March. Previous topicals yielded, on average, a response rate of approximately 55%. Subsequent topicals are expected to have a similar response rate, resulting in approximately 10,000 households responding to the January and February surveys.
In January 2025, a sample replenishment will be conducted. Approximately 110,000 households will be invited to join the HTOPS panel. Of those households, we expect a 17 percent response rate, resulting in approximately 18,800 responses to the baseline questionnaire. We expect a 55 percent response rate for the monthly topical surveys, resulting in 10,350 additional monthly topical responses. The new Household Trends and Outlook Pulse Survey (HTOPS) panel sample size is estimated to reach 36,600, with over 20,000 monthly responses beginning with the April 2025 topical collection.
Describe the procedures for the collection of information including:
Statistical methodology for stratification and sample selection,
Estimation procedure,
Degree of accuracy needed for the purpose described in the justification,
Unusual problems requiring specialized sampling procedures, and
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
The sample design is a stratified systematic sample of all eligible HUs from the Census Bureau’s Master Address File (MAF), which covers all 50 states and the District of Columbia. Auxiliary data from the Demographic Frame (DF) and Planning Database (PDB) will be linked to the MAF to stratify the housing units into strata based on demographic variables within the nine Census Bureau divisions. MAF records that cannot be stratified using the DF or PDB will be placed in their own stratum. The sample will be distributed proportionately within divisions of the country to each stratum based on the number of housing units in the stratum. We will conduct a subsampling operation in strata that, based on results of other demographic surveys, have higher response rates. Thus, the strata where no subsampling occurs will be oversampled.
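As an illustration of the systematic selection step within a single stratum, the sketch below draws units from a sorted stratum frame using a random start and a fixed skip interval. This is a minimal assumption of how the design above could be implemented; the production frame construction, sort order, and allocation rules are not specified here.

```python
import random

def systematic_sample(frame_ids, n):
    """Draw n units from a sorted stratum frame with a random start
    and a fixed skip interval (illustrative sketch only)."""
    if n <= 0 or not frame_ids:
        return []
    # Skip interval: frame size divided by the stratum sample size.
    interval = len(frame_ids) / n
    # Random start point within the first interval.
    start = random.uniform(0, interval)
    return [frame_ids[int(start + k * interval)] for k in range(n)]
```

Because selection order follows the frame sort, a systematic draw spreads the sample across whatever variables the stratum frame is sorted by.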
Future refreshment samples will be drawn from a frame that uses updated MAF, DF and PDB information, and those samples may be targeted at the geographic or domain level, to maintain representativeness of the Household Panel Survey, adjust sample sizes based on observed nonresponse, and account for sample units that are rotating out of the panel.
The final HTOPS survey weights are designed to produce estimates for the total persons aged 18 and older living within HUs (based on the person weight) and occupied-household-level estimates (based on the household weight). We will create these weights by adjusting the household-level sampling base weights by various factors to account for nonresponse, adults per household, and coverage. Two adjustments produce the final HTOPS survey weights. The Housing Unit adjustment converts the person-level weight back into a housing unit (HU) weight by dividing the person-level weight by the number of persons aged 18 and older reported to live within the household. The Occupied HU ratio adjustment then ensures that the final Household Panel Survey weights sum to the American Community Survey (ACS) one-year, state-level estimates of occupied HUs.
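The two final adjustments can be sketched as follows. This is an illustrative sketch only: the field names and the ACS control totals passed in are assumptions for demonstration, not the production weighting specification.

```python
def apply_final_adjustments(households, acs_occupied_hus):
    """households: list of dicts, each with a nonresponse- and
    coverage-adjusted 'person_weight', the reported number of adults
    (18+) 'num_adults', and a 'state' code.
    acs_occupied_hus: ACS one-year state-level estimates of occupied
    housing units, keyed by state."""
    # Housing Unit adjustment: convert the person-level weight back to
    # a housing-unit weight by dividing by the number of adults reported.
    for hh in households:
        hh["hu_weight"] = hh["person_weight"] / hh["num_adults"]
    # Occupied HU ratio adjustment: scale weights within each state so
    # they sum to the ACS occupied-housing-unit control total.
    state_sums = {}
    for hh in households:
        state_sums[hh["state"]] = state_sums.get(hh["state"], 0.0) + hh["hu_weight"]
    for hh in households:
        ratio = acs_occupied_hus[hh["state"]] / state_sums[hh["state"]]
        hh["final_hu_weight"] = hh["hu_weight"] * ratio
    return households
```

After the ratio adjustment, the final weights in each state sum exactly to that state's control total, which is what ties the panel estimates to the ACS benchmarks.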
Enrolled HTOPS participants are invited to respond to monthly topical surveys. Invitations will be sent by email, text message (opt-in), and for those panelists with no email or mobile phone contact information, outbound telephone calling. Using a unique login, panelists can access a topical questionnaire by computer, tablet, or smartphone. Phone-only panelists will complete topical surveys via inbound or outbound CATI.
Data collection for each topical survey will take place in a 2-week window. Each topical survey will be approximately 20 minutes long, and panelists will receive up to two reminders to complete it. Panelists who complete a topical survey will be mailed a thank-you letter with a $5 incentive (either digital or cash) about 2 to 6 weeks after the topical survey field period closes (depending on the type of incentive).
Future topical surveys can be sponsored by other Census Bureau survey programs. Each topical survey will offer panelists an opportunity to update contact information and verify their address for incentive mailing. Content governance will initially follow policies developed for the Household Pulse Survey and be amended as necessary.
Keeping panelists engaged helps prevent attrition and maintain the representativeness of the panel. We will continue sending panelists one topical survey per month to keep them engaged. Panelists will not be eligible for more than one survey per month, to keep burden low and reduce panel conditioning. Topical surveys may target specific groups of panelists depending on the topical survey sponsor. If panelists are not sampled for a particular month’s topical survey, they will be asked to respond to a pre-designed panel maintenance questionnaire that will also serve to verify demographic information and record any changes.
HTOPS panel members will be asked to complete approximately one questionnaire per month and will receive an incentive for each questionnaire. Panelists will be enrolled for three years and drop off after that period. In addition to this three-year limit, we expect attrition due to inactivity and requests to disenroll. Attrition can bias the panel estimates, making the development of a panel member replenishment plan of vital importance (Herzing & Blom, 2019; Lugtig et al., 2014; Schifeling et al., 2015; Toepoela & Schonlau, 2017).
Panelist requests to disenroll from the panel will be identified and processed according to forthcoming protocols. Occasional nonresponse or refusal of the monthly requests by otherwise active panelists is expected. An inactive panelist is defined as follows:
No response or active refusal to:
a survey request for three consecutive months; or
more than 50% of survey requests within a 12-month period.
A particular questionnaire may be classified as “no response” due to unit nonresponse (i.e., the questionnaire was never started), item nonresponse that renders an interview unusable for analysis (e.g., item nonresponse to questions deemed critical for analysis, or high item nonresponse identified on its own or during data review), or poor-quality data resulting in an unusable interview. Inactive panelists will remain members of the HTOPS panel if reengagement is desired by Census staff, especially for rare or historically undercounted populations. A definition of poor-quality responses is forthcoming.
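The two-part inactivity rule above can be expressed as a simple check. This sketch is illustrative: the encoding of a panelist's response history as booleans (True for a usable response, False for no response or refusal), and applying the 50% rule to the trailing twelve requests, are assumptions for demonstration.

```python
def is_inactive(monthly_responses):
    """monthly_responses: list of booleans, oldest first, one entry per
    monthly survey request sent to the panelist. Returns True if the
    panelist meets either inactivity condition."""
    # Condition 1: no usable response to three consecutive monthly requests.
    if len(monthly_responses) >= 3 and not any(monthly_responses[-3:]):
        return True
    # Condition 2: no usable response to more than 50% of requests
    # within a 12-month period (here, the trailing twelve requests).
    window = monthly_responses[-12:]
    missed = sum(1 for r in window if not r)
    return len(window) > 0 and missed > len(window) / 2
```

A panelist flagged by either condition would be classified as inactive, though, per the text above, inactive panelists may remain on the panel when reengagement is desired.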
We will assess on an ongoing basis (and no less than quarterly) the generalizability of the panel estimates to represent the target population. Evaluative methods will include precision within important demographic and geographic characteristics, R-indicators, propensity scores, and nonresponse bias analyses (Bianchi & Biffignandi, 2017; Eckman et al., 2021; Groves & Peytcheva, 2008; Peytcheva & Groves, 2009; Rosen et al., 2014).
Based on results from multiple analyses, we will identify any subgroups requiring replenishment. New members will be sampled and recruited using the same protocol as for initial enrollment.
Incentives remain one of the most effective ways to encourage survey participation. The current incentive design includes the following:
Initial Invitation: $2 visible prepaid incentive with the initial invitation to complete the screener.
Baseline Questionnaire: $10 baseline contingent incentive after the initial recruitment field period.
Topical Surveys: $5 for each topical survey (~20-minute average; once per month).
Respondents will be emailed digital incentives or mailed cash incentives (if unable or unwilling to accept digital incentives) for survey completion. The National Processing Center (NPC) and the Associate Director for Demographic Programs – Survey Operations (ADDP-SO) team will coordinate incentive distribution. The incentive structure could be amended to facilitate ongoing engagement of panelists, particularly for groups of panelists that are rare or historically undercounted.
January Topical Experiments
Test A: Efficacy of Income and Program Screeners:
The current SIPP instrument includes income and program screener questions to streamline the instrument experience for respondents based on eligibility for means-tested programs. This test will evaluate the necessity of the annual screener and how a combination of income and program screeners performs in a 6-month setting, as opposed to the “prior calendar year” reference period used in the current SIPP.
Test B: Including Subtype Examples When Asking About Program Receipt:
For many of the programs for which we have historically collected subtypes, we plan to no longer collect detailed information about each subtype in SIPP SEAMLESS (i.e., we will no longer collect unique monthly receipt or monthly amounts for each subtype). Because of this, there is no longer an explicit need to ask which specific types of a benefit somebody received. However, we want to understand whether these subtypes can help inform the "any" question by acting as examples for the respondent. This may be especially useful in a self-administered mode where an FR is not available to add clarity. This test evaluates the inclusion of subtypes as examples as part of the "any" question to see whether reports of program receipt vary when the examples are presented versus not.
Test C: Including a “NOW” Question: In the current SIPP instrument, for many programs, respondents are asked two screener questions: 1) Are you currently receiving [program]?; and 2) Did you receive [program] at any time since [month 1 of the reference period]? Feedback from field representatives suggests that some respondents get frustrated by the repetitive nature of being asked two similar questions. Asking the respondent about their current program receipt may not be necessary because we do not edit or release this information. However, others argue that it is important to orient the respondent by asking about their current receipt before asking about prior receipt. This test evaluates the importance of including the NOW question prior to the ANY question.
Test D: Including a “Same Amounts at All Months” Option:
As SIPP transitions to internet self-response and a 6-month reference period, we need to modify how we capture program receipt and benefit amounts. We know from comparisons to administrative data that respondents do not accurately report changes to their benefit amounts. Respondents need to be able to efficiently report the value of their benefit in each month of the reference period. The current proposal for SEAMLESS is to provide a text box for respondents to enter their benefit amount next to each month of the reference period for which they received the benefit. However, this approach could be burdensome for respondents who receive multiple programs. Given this concern, this test evaluates the viability of including a "same amount in all months" question as part of the sequence. Additionally, it assesses whether this mechanism for reducing respondent burden leads to increased incidence of straightlining (i.e., reduced reporting of variation in monthly amounts received).
Test E: Collecting Gross vs. Net Social Security Amounts:
We have ascertained from FR focus groups that respondents often do not know or struggle to provide the gross value of their Social Security check (i.e., the amount prior to any deductions). Similarly, they often do not know the value of their deductions for Medicare premiums. However, for calculating total income, we need the gross amount. While we can approximate this by imputing the value of Medicare deductions and adding it to the net check to produce a gross amount, this is not preferable. This test will explore the feasibility of giving respondents the option of reporting their gross amount, if they are able to, while offering the net amount as a fallback option.
Sample Replenishment/Baseline Experiments
Age - In order to understand how age reporting for infants is affected by the ability to report age in months, we will conduct an experiment. Half of respondents will be randomized to an experimental condition in which they are able to indicate age in months for other household members, while the other half will see the control version of the question, which only asks for age in years. This experiment is meant to support research on the undercount of young children.
Select all instruction - In order to determine whether the instruction "Select all that apply" is necessary in web instruments, we will conduct an experiment. Half of respondents will be randomized to an experimental condition where they will NOT see an instruction to select all that apply for some of the multiple-select items in the survey, while the other half will see the control version of the question that includes the instruction. This experiment is meant to support research on survey web standards.
Grid presentation - In order to determine whether grids are a successful way of presenting a series of questions in web instruments being completed on a larger screen, we will conduct an experiment. Half of respondents completing the questionnaire on a larger screen (that is, not a mobile device) will be randomized to an experimental condition where they will see some of the grid questions in the survey broken out item by item. The other half of non-mobile respondents will see the control version of the questions presented as a standard grid. Note: By default, Qualtrics already presents grids in the item-by-item format on mobile devices. This experiment is meant to support research on survey web standards.
Statistical Design:
Anthony Tersine
Demographic Statistical Methods Division
Demographic Programs Directorate
Anthony.g.tersine.jr@census.gov
Data Collection/Survey Design:
Jason Fields
Social Economic and Housing Statistics Division
Demographic Programs Directorate
jason.m.fields@census.gov
Jennifer Hunter Childs
Demographic Programs Directorate
jennifer.hunter.childs@census.gov
Statistical Analysis:
David Waddington
Social Economic and Housing Statistics Division
Demographic Programs Directorate
david.g.waddington@census.gov
File Title: High Frequency Surveys HTOPS January, February, Replenishment Supporting Statement B
Author: Mary Reuling Lenaiyasa (CENSUS/PCO FED)
File Created: 2025-02-20