
Understanding Economic Risk for Families with Low Income: Economic Security, Program Benefits, and Decisions About Work




OMB Information Collection Request

XXXX-XXXX



Supporting Statement

Part B





May 2022









Submitted by:

Office of the Assistant Secretary for Planning and Evaluation

U.S. Department of Health and Human Services



200 Independence Avenue, SW

Washington, DC 20201



Project officers: Amanda Benton, Nina Chien, Suzanne Macartney



Executive Summary

Increasing the employment earnings of individuals is central to federal poverty reduction policy for working-age adults. Most social services aim to serve as temporary supports for adults who will eventually return to the workforce. A previous Office of the Assistant Secretary for Planning and Evaluation (ASPE) study conducted with working parents who received one or more government benefits found that, when faced with a decision about increasing earnings, these parents reported considering multiple complex (and at times competing) factors (Chien et al., 2021). In addition to the more obvious consideration of benefit reductions, working parents also considered at least two dimensions of economic risk when deciding whether to increase earnings: (1) the risk of an earnings reduction (for example, from job loss) at a later time; and (2) in the case of an earnings reduction, the risk of being unable to regain benefits lost due to the original earnings increase.

Little is known about whether or how the concerns raised by working parents might inform their choices. The study will recruit benefits recipients, present them with descriptions of different job opportunities, and evaluate their stated choices about whether to accept those opportunities. The results of the study will be used to better understand how these considerations affect decision making, alone or in concert; their relative influence on decision making; and whether their impacts vary across subgroups. The study will provide evidence about which policy levers are most likely to influence behavior. The research team will share these findings with federal agencies that administer benefits programs and will use them to inform decisions about how to encourage increases in employment earnings.

B.1. Respondent universe, sampling methods, and response rate

The Understanding Economic Risk for Families with Low Income study will engage individuals who participate in certain federal benefits programs (Medicaid, Supplemental Nutrition Assistance Program [SNAP], and Child Care and Development Fund [CCDF] child care subsidies). The purpose of this study is to better understand how benefit recipients decide whether to pursue opportunities to increase their earnings. Specifically, this study seeks to better understand how benefits cliffs and uncertainty about job prospects might affect people’s employment decisions.

a. Selecting benefits programs

ASPE and Mathematica considered several benefits programs as potential candidates for a test of economic decision making and decided to study Medicaid, SNAP, and CCDF child care subsidies.

b. Intended population

The analytic goals of this study will determine the intended population. The study team plans to conduct analyses with adults 18 years or older who are beneficiaries of one or more of the three previously mentioned benefits programs (CCDF, Medicaid and CHIP, and/or SNAP). These programs provide services to many beneficiaries and contain benefits cliffs. When selecting benefits programs, ASPE and Mathematica also considered how beneficiaries might perceive different programs. For example, subsidized child care is limited by the size of CCDF block grants and can have long waiting lists, whereas SNAP and Medicaid do not have waiting lists. Medicaid also differs from SNAP and CCDF because coverage can be applied retroactively, preventing beneficiaries who must reenroll from losing the benefit while waiting for their application to be processed.

Using panel administrative records, the survey will initially seek to engage people whose combinations of household size and income (defined as less than 200 percent of the federal poverty level) make them potentially eligible to receive SNAP or Medicaid. Before participants complete the study, further screening will verify that they are benefits recipients. For the sake of sampling efficiency, this approach omits very recent beneficiaries whose current earnings have fallen below the earnings recorded in their panel records.
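To make the income screen concrete, the sketch below illustrates a potential-eligibility check of this kind. It assumes the 2022 HHS poverty guidelines for the 48 contiguous states and the District of Columbia ($13,590 for a one-person household, plus $4,720 for each additional member); the function names and thresholds are illustrative, not the survey's actual screener logic.

```python
# Illustrative sketch of the 200-percent-of-poverty income screen.
# Guideline constants assume the 2022 HHS poverty guidelines for the
# 48 contiguous states and DC; actual screening values may differ.

def fpl_threshold(household_size: int) -> float:
    """Return the assumed 2022 federal poverty guideline for a household."""
    return 13_590 + 4_720 * (household_size - 1)

def potentially_eligible(income: float, household_size: int) -> bool:
    """Flag records with income below 200 percent of the guideline."""
    return income < 2.0 * fpl_threshold(household_size)

# Example: a family of three earning $40,000 falls under the 200 percent
# cutoff (2 * $23,030 = $46,060) and would be invited to the screener.
print(potentially_eligible(40_000, 3))  # True
```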

c. Sampling

NORC, a nonpartisan and objective research organization at the University of Chicago, will perform the sampling and distribution of the survey. The sample will consist of a mixture of participants from NORC’s AmeriSpeak panel, a probability sample, and (if necessary) a nonprobability sample drawn from one or more market research panels. NORC will select the nonprobability sample as a convenience sample. NORC will screen people from both sources for study eligibility. The data file will indicate the source for each individual participant.

NORC’s AmeriSpeak panel

The AmeriSpeak panel is a population-based sample maintained by NORC. NORC collects basic demographic information, such as age, gender, and race and ethnicity, for all AmeriSpeak panel members at recruitment, which allows for sampling of specific populations in studies. The study team expects the American Association for Public Opinion Research (AAPOR) response rate (RR3) for AmeriSpeak participants recruited into this study will be about 15 percent. This response rate reflects the panel recruitment rate (34 percent, based on the 2014–2018 AAPOR RR3 weighted rate), panel attrition (15 percent annually), and the historical average survey participation rate (see Attachment B-1).
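The sketch below shows how these components compound into the expected cumulative rate. Only the 34 percent recruitment rate and the 15 percent annual attrition come from the text above; the 52 percent survey participation rate is an assumed value chosen so the product approximates the stated 15 percent figure.

```python
# Back-of-the-envelope composition of the cumulative AAPOR RR3-style rate.
# The participation rate below is an ASSUMPTION, not a figure from the text.

recruitment_rate = 0.34     # 2014-2018 weighted AAPOR RR3 panel recruitment
retention_rate = 1 - 0.15   # complement of 15 percent annual panel attrition
participation_rate = 0.52   # assumed historical survey participation rate

cumulative_rr = recruitment_rate * retention_rate * participation_rate
print(f"Cumulative response rate: {cumulative_rr:.1%}")  # ~15.0%
```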

Exhibit 1.1. Sampling universe, sampling frame, and survey sample size

Group                                        Size
Respondent universe (a)                      58,000,000
Panel sample                                 48,000
Number of likely eligible panel members      9,600 (b)
Survey invitations issued                    3,000
Survey completes                             1,000
Nonprobability sample                        1,000
Total survey completes                       2,000

a Based on authors’ calculation. Estimated as the total U.S. population ages 18 to 65 (U.S. Census Bureau, 2021; Administration for Community Living, 2021) multiplied by the share of people receiving Medicaid, CHIP, SNAP, or CCDF. That share is estimated as the number of individuals receiving Medicaid or CHIP (Medicaid.gov, 2022), plus the number of individuals receiving CCDF (Administration for Children and Families, 2018), plus the number of individuals receiving SNAP (Cronquist, 2019), adjusted for estimated program overlap (U.S. Bureau of Labor Statistics, 2018; Davidoff et al., 2001).

b Estimated as 30,000 times the share of individuals ages 18 to 65 in the U.S. population, times the estimated share of individuals receiving Medicaid, CHIP, SNAP, or CCDF.



B.2. Procedures for collecting information

a. Statistical methodology for stratification and sample selection

NORC will field the survey on a sample consisting of a mixture of AmeriSpeak (probability sample) participants and a nonprobability sample. The study team will use different sample sources because benefits recipients are a low-incidence population that might be difficult to recruit efficiently using exclusively probability-based methods.

b. NORC’s AmeriSpeak panel sampling procedures

The AmeriSpeak panel is constructed from NORC’s National Frame, an area probability sample used for other NORC surveys such as the General Social Survey. The National Frame is built using a two-stage probability sample design. Consisting of almost 3 million households in all 50 states and the District of Columbia, the frame includes more than 80,000 rural households whose information is not available in the U.S. Postal Service Delivery Sequence File (USPS DSF) but whom NORC field staff identify through direct listing. NORC invites people to join the panel through mail, telephone, and in-person (face-to-face) contacts. The panel also recruits from an address-based sample drawn from the USPS DSF in four states where the NORC National Frame has inadequate sample size.

NORC will select the sample for a specific study from the AmeriSpeak panel using sampling strata based on age, race and ethnicity, education, and gender (48 sampling strata in total). The population distribution for each stratum determines the size of the selected sample per sampling stratum. In addition, sample selection considers expected differential survey completion rates by demographic group, so the set of panel members with a completed interview for a study is a representative sample of the intended population. If a panel household has more than one active adult panel member, only one adult in the household is eligible for selection (random within-household sampling). Panelists selected for an AmeriSpeak study earlier in the business week are not eligible for sample selection until the following business week.
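The sketch below illustrates the general allocation logic described above: each stratum's target number of completes follows its population share, and invitations are inflated by that stratum's expected completion rate so that completed interviews mirror the population. The strata labels and rates are illustrative placeholders, not NORC's actual 48 strata.

```python
# Illustrative stratum allocation: completes proportional to population
# shares, invitations inflated for expected nonresponse. Toy values only.

target_completes = 1_000
strata = {
    # stratum label: (population share, expected completion rate)
    "18-34, college":    (0.12, 0.40),
    "18-34, no college": (0.20, 0.25),
    "35-64, college":    (0.25, 0.45),
    "35-64, no college": (0.28, 0.30),
    "65+, any":          (0.15, 0.50),
}

for label, (pop_share, completion_rate) in strata.items():
    needed = target_completes * pop_share        # completes matching population
    invites = round(needed / completion_rate)    # invitations issued
    print(f"{label:18s} completes={needed:5.0f} invitations={invites}")
```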

NORC will randomly select panel members from those who reported earning less than 200 percent of the federal poverty level who have not yet participated in the maximum number of allowable studies that month. NORC will then screen participants for eligibility and include them in the study if they reported enrollment in one or more of the benefits programs of interest.

c. Nonprobability samples

Based on consultation with NORC, the study team expects the required sample size to exceed what is available within the AmeriSpeak panel. The study team will therefore augment the AmeriSpeak sample with a small nonprobability sample recruited from a market research panel provider. NORC will obtain the nonprobability sample through Lucid, an online marketplace for buyers and sellers of research sample. NORC will assign calibration weights to responses from the nonprobability sample and blend them into the probability sample using calibrated multilevel regression to reduce bias while limiting design effects (for an overview, see Attachment B-2).
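As a rough illustration of weight calibration in general (not NORC's blending procedure, which is described in Attachment B-2), the sketch below rakes a simulated nonprobability sample to benchmark demographic margins via iterative proportional fitting.

```python
import numpy as np
import pandas as pd

# Simplified raking sketch: adjust weights so the nonprobability sample's
# demographic margins match benchmark targets. Data and targets are toys.

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age_group": rng.choice(["18-34", "35-64"], size=500, p=[0.7, 0.3]),
    "gender": rng.choice(["F", "M"], size=500, p=[0.6, 0.4]),
})
targets = {"age_group": {"18-34": 0.45, "35-64": 0.55},
           "gender": {"F": 0.51, "M": 0.49}}

w = np.ones(len(df))
for _ in range(25):                      # iterate until margins converge
    for var, margin in targets.items():
        total = w.sum()
        for level, share in margin.items():
            mask = (df[var] == level).to_numpy()
            w[mask] *= share * total / w[mask].sum()

# Weighted age margins now match the 0.45 / 0.55 benchmarks.
print(df.assign(w=w).groupby("age_group")["w"].sum() / w.sum())
```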

d. Data Analysis

The study aims to explore how acceptance of opportunities to increase earnings varies as a function of (1) the reward (earnings increase) of the opportunity (three levels: smaller net increase in earnings with a lower marginal tax rate versus larger net increase in earnings with a lower marginal tax rate versus smaller net increase in earnings with a higher marginal tax rate); (2) the risk that the opportunity will not work out, resulting in lost wages (two levels: high or low risk); and (3) the difficulty of regaining lost benefits (three levels: hard to regain, easy to regain, or never lost; Exhibit 2.1). As illustrated in Exhibit 2.1, the design yields a total of 18 (3 x 2 x 3) possible factor combinations.

Exhibit 2.1. Factors and factor levels to be tested

A. Earnings increase
  Level 1: Smaller net increase ($200) / lower marginal tax rate ($100, or 33 percent of the gross earnings increase)
  Level 2: Larger net increase ($500) / lower marginal tax rate ($250, or 33 percent of the gross earnings increase)
  Level 3: Smaller net increase ($200) / higher marginal tax rate ($450, or 69 percent of the gross earnings increase)

B. Risk of earnings loss
  Level 1: High – new job described as unstable
  Level 2: Low – new job described as stable

C. Loss and recovery of benefits
  Level 1: Difficult – if income declines, benefits must be reapplied for
  Level 2: Easy – if income declines, benefits are automatically reinstated
  Level 3: No loss – benefits are maintained when earnings increase

The data structure is multilevel because each participant will rate five vignettes, selected from six substantively different job opportunities. Each vignette will include details representing a combination of treatments, selected without replacement from the 18 possible factor combinations. An earlier pilot study validated acceptance rates for the proposed earnings increases; the perceived level of risk for the high- and low-risk scenarios; and the perceived ease of regaining benefits for the hard-to-regain, easy-to-regain, and never-lost benefits conditions.
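The sketch below illustrates this assignment scheme: enumerate the 18 factor combinations from Exhibit 2.1 and draw five per participant without replacement. The function name and seed are illustrative.

```python
import itertools
import random

# Enumerate the 18 (3 x 2 x 3) factor combinations and draw five per
# participant without replacement. Labels follow Exhibit 2.1.

earnings = ["smaller net / lower MTR", "larger net / lower MTR",
            "smaller net / higher MTR"]
risk = ["high risk", "low risk"]
benefits = ["hard to regain", "easy to regain", "never lost"]

cells = list(itertools.product(earnings, risk, benefits))
assert len(cells) == 18

def assign_vignettes(seed: int, n_vignettes: int = 5):
    """Draw factor combinations for one participant, with no repeats."""
    return random.Random(seed).sample(cells, n_vignettes)

for combo in assign_vignettes(seed=42):
    print(combo)
```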

To account for non-independence of observations across vignettes, the study team will analyze the data using a random effects model. The model will resemble a classical linear regression model, with main effects for each factor, interaction effects that capture the effect of each examined factor combination, and controls for observable characteristics and vignette narratives.
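A minimal sketch of such a specification, using the MixedLM routine in Python's statsmodels with a random intercept per respondent, follows. The simulated data stand in for the survey responses, and all variable names are illustrative; this is one plausible rendering of the model described above, not the study's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Linear probability model with a random intercept per respondent,
# main effects and interactions among the three experimental factors.
# Simulated stand-in data; variable names are illustrative.

rng = np.random.default_rng(1)
n, k = 300, 5  # respondents, vignettes per respondent
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), k),
    "earnings": rng.choice(["small_lowMTR", "large_lowMTR", "small_highMTR"], n * k),
    "risk": rng.choice(["high", "low"], n * k),
    "benefits": rng.choice(["hard", "easy", "never_lost"], n * k),
})
person = rng.normal(0, 0.15, n)[df["pid"]]  # respondent-level idiosyncrasy
df["accept"] = (rng.uniform(size=n * k) <
                0.5 + 0.1 * (df["risk"] == "low") + person).astype(int)

model = smf.mixedlm("accept ~ C(earnings) * C(risk) * C(benefits)",
                    data=df, groups=df["pid"])
print(model.fit().summary())
```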

The study team will also assess whether treatment effects vary meaningfully across subgroups by including terms representing each subgroup in the model and allowing these terms to interact with the experimental factors. For simplicity, the study team will not fully saturate the model: demographic characteristics will interact with main effects but not with interactions of experimental factors. To ease interpretation, the study team will report the results of both the full model and a reduced model that drops nonsignificant coefficients, provided that doing so does not reduce model fit (for a discussion, see West et al., 2014).

The study team will test whether differences in decision making by factor are statistically distinguishable from zero at conventional levels of statistical significance. The study team will apply the Benjamini–Hochberg procedure to limit the false discovery rate associated with testing multiple hypotheses (Benjamini and Hochberg, 1995).
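A standard implementation of the Benjamini–Hochberg step-up procedure is available in statsmodels; the sketch below applies it to a set of illustrative p-values.

```python
from statsmodels.stats.multitest import multipletests

# Benjamini-Hochberg false discovery rate control applied to
# illustrative p-values from the factor and interaction tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]

reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, p_adj, r in zip(pvals, p_adjusted, reject):
    print(f"p={p:.3f} adjusted={p_adj:.3f} reject={r}")
```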

e. Degree of accuracy needed

This exploratory research intends to understand the relative impact of job and policy characteristics on choice and to assess the heterogeneity of these effects across benefits and beneficiary subgroups. To achieve this goal, it is important to correctly identify the sign and approximate magnitude of each effect; it is less important to estimate each effect precisely.

To ensure the proposed sample size is large enough to identify meaningful interactions between factors and subgroups, the study team evaluated the statistical power of the design using minimum detectable effect sizes (MDEs). The MDE represents the smallest impact that the study team has the statistical precision to detect with 80 percent probability. The study team measures the MDE in terms of the impact, in percentage points, on the share of respondents agreeing they would take the income increase proposed in the vignette. To estimate MDEs, the study team uses a simulation-based approach: simulating data, fitting the appropriate models, and assessing the statistical precision of the estimates, as reflected in the standard errors of the coefficients on the terms of interest. The study team then uses the estimated standard errors to calculate MDEs.

One key aspect of the design is that it is within-subject, meaning each respondent will be exposed to multiple experimental conditions, and comparisons are made between the different answers the respondent gave. Additional measurements generally improve statistical power by controlling away individual idiosyncrasies, in effect allowing us to model intra-individual differences and remove them from estimates of uncertainty about treatment effects. However, the extent to which repeated measurements improve statistical power depends highly on the consistency with which individuals rate different scenarios. In our analysis, the study team uses the intra-individual rating consistency observed in the pilot study as an estimate of the likely intra-individual choice consistency in this study. Exhibit 2.2 displays the results of this simulation.
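The sketch below illustrates the general shape of such a simulation under assumed parameters (an intraclass correlation of 0.14, five vignettes per respondent, and a simple two-level treatment): simulate correlated responses under the null, fit the model, and convert the coefficient's standard error into an MDE. It is a simplified stand-in for the study's actual simulations, which use the pilot-based consistency estimates and the full factorial design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

# Illustrative simulation-based MDE: simulate clustered binary responses
# with no treatment effect, fit a random-intercept model, and compute
# MDE = (z_{1-alpha/2} + z_{0.80}) * SE. All parameters are assumptions.

def simulate_se(n_people=2000, n_vignettes=5, icc=0.14, seed=0):
    rng = np.random.default_rng(seed)
    sigma_u = np.sqrt(icc / (1 - icc))  # person effect for the target latent ICC
    pid = np.repeat(np.arange(n_people), n_vignettes)
    df = pd.DataFrame({
        "pid": pid,
        "treat": rng.integers(0, 2, n_people * n_vignettes),
    })
    latent = sigma_u * rng.normal(size=n_people)[pid] + rng.normal(size=len(df))
    df["accept"] = (latent > 0).astype(int)
    fit = smf.mixedlm("accept ~ treat", data=df, groups=df["pid"]).fit()
    return fit.bse["treat"]

se = simulate_se()
mde = (norm.ppf(0.975) + norm.ppf(0.80)) * se  # 80 percent power, alpha = .05
print(f"SE = {se:.4f}, MDE = {mde:.3f} (share accepting the opportunity)")
```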

Exhibit 2.2. Power analysis

Sample source: NORC
Completed surveys: 2,000

Minimum detectable effect, percentage points
  Main effect of factor:        0.041
  Factor–factor interactions:   0.109
  Working–factor effects:       0.117
  Race–factor effects:          0.106
  Program–factor effects:       0.128

Note: Minimum detectable effects are calculated for 80 percent power, using a significance level of .05 and a Šidák correction to control the family-wise error rate. The study team assumes there are no precision gains from covariates and that each respondent completes six vignettes with an intra-class correlation of 0.14. MDEs for subgroups may vary by level; the study team reports the largest MDE among a subgroup’s levels.

B.3. Methods to maximize response rates and deal with nonresponse

NORC will handle the invitation email and all subsequent contacts with the panel participants following the standard operating procedures of the AmeriSpeak panel. Data collection for this survey will begin with an email notifying qualifying panel members that there is a new survey for them to complete (see Attachment B-3: Survey Invitation Email). Participants will access their survey via a link in the notification email. Panelists can also view their survey invitation at the online member portal or on the AmeriSpeak app.

The analytical data file will contain responses from a blended sample of AmeriSpeak participants and participants drawn from a market research panel. Response rates are unknowable for market research panel participants but are measured within the AmeriSpeak panel.

NORC will maximize response rates by conducting extensive nonresponse follow-up when empaneling participants, including sending nonresponse follow-up emails (Attachment B-4). To further improve response rates, survey respondents receive AmeriPoints for participating in surveys, which they can redeem for cash or physical goods (see Supporting Statement Part A, Section 9).

NORC maintains strict rules to limit respondent burden and reduce the risk of panel fatigue. AmeriSpeak panel members typically participate in AmeriSpeak web- or phone-based studies two or three times a month. To further improve response rates and ensure sample diversity, the study team will collect data for this study using a mobile-optimized web survey fielded in English and Spanish.

Based on the pilot study and prior experience with online panels, the study team expects item nonresponse will be negligible. However, the study will employ best practices for reducing item nonresponse, including keeping the survey brief and engaging, providing an incentive, and using soft and hard checks to ensure participants complete critical measures.

Nonrespondents will receive reminders through the same channels. NORC will send reminder emails encouraging AmeriSpeak panel members to complete the survey via the web; panelists who prefer to complete the survey by telephone at that time may do so. Nonrespondents will also receive a postcard encouraging and reminding them to complete the survey. The study team expects the survey to take about 15 minutes to complete.

The study team expects the AAPOR RR3 response rate for AmeriSpeak participants recruited into this study will be about 15 percent. This response rate reflects the panel recruitment rate (34 percent, based on the 2014–2018 AAPOR RR3 weighted rate), panel attrition (15 percent annually), and the historical average survey participation rate. NORC will calculate and report unit and item nonresponse rates and carry out a nonresponse bias analysis following the guidelines in Standard 3.2 of the OMB Standards and Guidelines for Statistical Surveys (OMB, 2006). NORC will assess and measure nonresponse bias by evaluating the demographic and geographic representativeness of the core and blended samples compared with Current Population Survey population benchmarks.

Differential attrition (that is, item nonresponse that differs across treatment levels) is another challenge faced by randomized controlled trials. The study team will evaluate item nonresponse as a function of experimental condition and ensure that the overall and between-condition rates of attrition fall within the bounds used to define high-quality studies in federal evidence clearinghouses (Deke and Chiang, 2017).
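As an illustration, the sketch below computes overall and differential (between-condition) item nonresponse rates on toy data; in practice, these rates would be compared against the applicable clearinghouse attrition boundary (Deke and Chiang, 2017). The data and column names are hypothetical.

```python
import pandas as pd

# Toy attrition diagnostics: overall item nonresponse rate and the
# spread across experimental conditions. Data are illustrative only.

df = pd.DataFrame({
    "condition": ["high_risk"] * 950 + ["low_risk"] * 1050,
    "responded": [True] * 920 + [False] * 30 + [True] * 1035 + [False] * 15,
})

overall = 1 - df["responded"].mean()
by_cond = 1 - df.groupby("condition")["responded"].mean()
differential = by_cond.max() - by_cond.min()

print(f"Overall attrition: {overall:.1%}")        # 2.2%
print(f"Differential attrition: {differential:.1%}")  # 1.7 percentage points
```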

B.4. Test of procedures or methods undertaken

a. Developing the data collection instrument

The instrument to be used in this study was developed and tested under a generic OMB clearance request (0990-0281). The test recruited a nonprobability sample of 200 participants and was intended to identify potential problems with the design. Specifically, the study team measured the following:

  • Comprehension of vignettes, through items testing factual understanding of vignette contents and assessing whether the risk treatment influenced the perceived riskiness of jobs and whether the ease-of-regaining-benefits treatment influenced the perceived ease of regaining benefits

  • Choices made for each vignette to ensure respondents perceived the levels of described risk and ease of regaining benefits as intended

  • Choices made for each of the 14 combinations of earnings increases and benefits losses, to evaluate whether different plausible earnings increases and marginal tax rates resulted in overly high or low uptake (which would reduce statistical power)

  • The within-individual correlation between responses to different vignettes (used to inform decisions about sample size)

After finalizing the instrument, the study team programmed the survey instrument for administration via computer-assisted web methods. Before deployment, the team tested the survey instrument to ensure it functioned as designed, including extensive manual testing of skip patterns, fills, and other logic. To reduce data entry errors, numerical entries are checked against an acceptable range, and, where appropriate, prompts are presented for valid but unlikely values. This testing increases the accuracy of the data collected while minimizing respondent burden.
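The sketch below illustrates the distinction between a hard range check (rejecting impossible values) and a soft check (prompting the respondent to confirm valid but unlikely values) using a hypothetical numeric item; the specific item and ranges are illustrative, not drawn from the instrument.

```python
# Hard vs. soft range checks for a hypothetical numeric survey item.

def validate_hours_worked(value: float, confirmed: bool = False) -> str:
    """Validate a hypothetical 'hours worked last week' entry."""
    if not 0 <= value <= 168:          # hard check: impossible values
        return "reject: value must be between 0 and 168"
    if value > 80 and not confirmed:   # soft check: unlikely but possible
        return "prompt: please confirm this unusually high value"
    return "accept"

print(validate_hours_worked(200))        # reject
print(validate_hours_worked(95))         # prompt
print(validate_hours_worked(95, True))   # accept after confirmation
print(validate_hours_worked(40))         # accept
```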

B.5. Contacts for statistical aspects and data collection

The individuals listed in Exhibit 5.1 were consulted on statistical aspects of this study. For this study:

  • ASPE is responsible for oversight of the project.

  • Mathematica and ASPE are responsible for the design of the research.

  • Mathematica and NORC are responsible for implementation.

  • ASPE, in collaboration with a to-be-selected contractor, will be responsible for analyzing data, preparing report(s) on findings, and presenting and delivering those findings to ASPE.

Exhibit 5.1. Contacts for statistical aspects and data collection

Team member       Organization   Phone            Email
Nina Chien        ASPE           (202) 795-7667   Nina.Chien@hhs.gov
Ariella Spitzer   Mathematica    (617) 588-6744   aspitzer@mathematica-mpr.org
Dan Thal          Mathematica    (617) 674-8369   dthal@mathematica-mpr.org
Jesse Chandler    Mathematica    (734) 305-3088   jchandler@mathematica-mpr.org
Suzanne Howard    NORC           (312) 759-5244   howard-suzanne1@norc.org


ASPE staff will neither collect data from nor interact with research participants. No individual identifiers will be linkable to collected data, and no individually identifiable private information will be shared with or accessible by ASPE staff.

References

Administration for Children and Families. (2018). “CCDF Quick Facts: FY2018 Data.” Available at https://www.acf.hhs.gov/sites/default/files/documents/occ/ccdf_quick_facts_fy2018.pdf. Accessed March 10, 2021.

Administration for Community Living. (2021). “2020 Profile of Older Americans.” Available at https://acl.gov/sites/default/files/Aging%20and%20Disability%20in%20America/2020ProfileOlderAmericans.Final_.pdf. Accessed March 10, 2022.

Benjamini, Y., and Y. Hochberg. (1995). “Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing.” Journal of the Royal Statistical Society, Series B, 57(1), 289–300.

Cronquist, K. (2019). “Characteristics of Supplemental Nutrition Assistance Program Households: Fiscal Year 2018.” Supplemental Nutrition Assistance Program Report Series, Report No. SNAP-19-CHAR. U.S. Department of Agriculture, Food and Nutrition Service, Office of Policy Support.

Davidoff, A., B. Garrett, and A. Yemane. (2001). “Medicaid-Eligible Adults Who Are Not Enrolled: Who Are They and Do They Get the Care They Need?” New Federalism, Series A, No. A-48. The Urban Institute.

Deke, J., and H. Chiang. (2017). “The WWC Attrition Standard: Sensitivity to Assumptions and Opportunities for Refining and Adapting to New Contexts.” Evaluation Review, 41(2), 130–154.

Gray, C. (2019). “Leaving Benefits on the Table: Evidence from SNAP.” Journal of Public Economics, 179, 104054.

Medicaid.gov. (2022). “January 2022 Medicaid & CHIP Enrollment Data Highlights.” Available at https://www.medicaid.gov/medicaid/program-information/medicaid-and-chip-enrollment-data/report-highlights/index.html. Accessed March 10, 2022.

Office of Management and Budget. (2006). “Standards and Guidelines for Statistical Surveys.” Available at https://www.whitehouse.gov/sites/default/files/omb/inforeg/statpolicy/standards_stat_surveys.pdf.

U.S. Bureau of Labor Statistics. (2018). Monthly Labor Review, January 2018. Available at https://www.bls.gov/opub/mlr/2018/article/program-participation-and-spending-patterns-of-families-receiving-means-tested-assistance.htm. Accessed March 10, 2021.

U.S. Census Bureau. (2021). “Population Under Age 18 Declined Last Decade.” August 2021. Available at https://www.census.gov/library/stories/2021/08/united-states-adult-population-grew-faster-than-nations-total-population-from-2010-to-2020.html#:~:text=By%20comparison%2C%20the%20younger%20population,from%2074.2%20million%20in%202010. Accessed March 10, 2022.

West, B.T., K.B. Welch, and A.T. Galecki. (2014). Linear Mixed Models: A Practical Guide Using Statistical Software. Chapman and Hall/CRC.

Section B – List of Attachments

  • Attachment B-1: Technical overview of AmeriSpeak panel

  • Attachment B-2: TrueNorth White Paper

  • Attachment B-3: Survey Invitation email

  • Attachment B-4: Survey nonresponse email
