
Supporting Statement for Paperwork Reduction Act Submission

Rent Reform Demonstration: Long-Term Follow-Up Survey

OMB # 2528-0306



Part B. Justification

  1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.

There are four Moving to Work (MTW) public housing agencies (PHAs) participating in the Rent Reform Demonstration: the Lexington Housing Authority, the Louisville Metropolitan Housing Authority, the San Antonio Housing Authority, and the District of Columbia Housing Authority. In 2015, eligible households were randomly assigned to either the New Rent Rules group (Program/Treatment group) or the Existing Rent Rules group (Control group). The respondent universe for the long-term follow-up survey comprises all 6,659 households included in the impact analysis sample, as shown in Table 1.


Table 1. Sample Size by Site and Research Group


Site               New Rent Rules          Existing Rent Rules    Total Sample
                   (Program/Treatment)     (Control)

Lexington, KY              486                     493                   979
Louisville, KY             947                     961                 1,908
San Antonio, TX            935                     934                 1,869
Washington, DC             941                     962                 1,903

Total                    3,309                   3,350                 6,659


The expected response rate is 75 percent for both program and control groups.


Response rates for the Baseline Information Form (BIF) conducted in 2015 and 2016 during the demonstration enrollment/implementation period ranged from 71 percent to 89 percent across sites, with an overall response rate of 79 percent.



  2. Describe the procedures for the collection of information, including:


  • Statistical methodology for stratification and sample selection,

  • Estimation procedure,

  • Degree of accuracy needed for the purpose described in the justification,

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.



Decision Information Resources (DIR) will collect long-term follow-up survey data from the full study sample of Rent Reform Demonstration study participants beginning with a Web option, followed by outbound Computer Assisted Telephone Interview (CATI) dialing, and culminating with a field follow-up. There is no statistical methodology for stratification and sample selection as the sample of study participants has already been enrolled in the Rent Reform Demonstration. (See pgs. 25-27 in Reducing Work Disincentives in the Housing Choice Voucher Program: Rent Reform Demonstration Baseline Report for a description of the study sample and eligibility criteria.)1

The DIR survey team will inform study participants about the long-term follow-up survey through an introductory letter. After OMB approval, but prior to data collection, the survey team will send a double-sided bilingual flyer, along with a $2 cash prepaid incentive, requesting updated contact information. The team will also send additional tracking postcards and emails to nonrespondents approximately 6 to 8 weeks after survey launch in an effort to locate respondents and maximize response rates. The long-term follow-up survey will be administered only once, which reduces burden because there is no annual or periodic data collection cycle.


Web Protocols. The survey team will send study participants an initial invitation (via email and USPS mail) to participate in the self-administered web survey. The initial invitation will include information about the long-term follow-up survey, the respondent’s rights as a participant, contact information for DIR’s study-specific toll-free number, and a web link and password for accessing the online version of the survey. The invitation will also inform study participants that if they complete the survey by the specified two-week date, they will receive an early-bird incentive of $10, in addition to the $50 incentive we propose to offer all respondents. This early-bird incentive is designed to increase response rates and reduce the cost of following up with nonrespondents. The survey team will send a reminder postcard approximately seven days in advance of the early-bird incentive date specified in the initial mailing to further increase response rates. The self-administered web option will remain available to all respondents for the duration of their data collection period. Study participants with valid USPS or email addresses will be given two weeks to complete the survey via the web option before outbound CATI calls are initiated.


Telephone Interview Protocols. The survey team will begin outbound dialing immediately upon the start of the survey window for those study participants without valid email or USPS addresses, whom we estimate to be 5 to 10 percent of the sample. Participants who do have valid addresses but do not complete the long-term follow-up survey in advance of the early-bird web deadline will become eligible for outbound CATI dialing upon expiration of that deadline.


Field-Assisted Interview Protocols. Site-based field locators, who will work to locate sample members and schedule them for telephone interviews, will be assigned to all study participants who have not completed the survey after 10 unsuccessful outbound call attempts within 6 weeks after survey launch. Each locator will be assigned a geographically clustered group of cases and, beginning with existing contact information, will seek to find the sampled respondent. Once the sample member is contacted, the field locator will arrange for him or her to complete the survey via telephone with a CATI interviewer.



Statistical Impact Analysis

The basic estimation strategy for the long-term follow-up survey data is analogous to the methodology MDRC is using for the 6,659 sample members included in the current Rent Reform Demonstration Task Order 2 (TO2) impact analysis (and to methods other social science researchers have used in social experiments over the last few decades to generate credible results). The analysis will compare average outcomes for respondents subject to the alternative rent policy (the Program/Treatment group) and for those subject to the current rules (the Control group), using regression adjustments to increase the precision of the statistical estimates. In making these adjustments, for example, an outcome from the long-term follow-up survey, such as “hours worked” or “moved,” will be regressed on an indicator for intervention group status and a range of other background characteristics. The following basic impact model would be used:




Y_i = α + β·P_i + δ·X_i + ε_i




where:
Y_i = the outcome measure for sample member i;
P_i = one for program (or treatment) group members and zero for control group members;
X_i = a set of background characteristics for sample member i;
ε_i = a random error term for sample member i;
β = the estimate of the impact of the program on the average value of the outcome;
α = the intercept of the regression; and
δ = the set of regression coefficients for the background characteristics.
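
For illustration, this regression adjustment could be implemented as an ordinary least squares model of the outcome on the treatment indicator and baseline covariates. The sketch below is hypothetical (the file and variable names are invented for illustration) and is not MDRC’s actual analysis code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per sample member, with the survey
# outcome (Y_i), the random assignment indicator (P_i), and baseline
# covariates (X_i). All names below are illustrative.
df = pd.read_csv("survey_analysis_file.csv")

# Y_i = alpha + beta * P_i + delta * X_i + e_i
# 'program' = 1 for the New Rent Rules group, 0 for Existing Rent Rules.
# Under random assignment, the covariates only improve precision; they do
# not change the expected value of the impact estimate beta.
result = smf.ols(
    "hours_worked ~ program + age + hh_size + employed_at_baseline",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# beta-hat: the estimated average impact of the alternative rent policy
print(result.params["program"], result.bse["program"])
```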


The survey analysis will examine many outcomes across a number of domains. When multiple outcomes are examined, the probability of finding statistically significant effects increases, even when the intervention has no effect. For example, if 10 outcomes are examined in a study of an ineffective treatment, it is likely that at least one of them will appear statistically significant at the 10 percent level by chance alone. While the statistical community has not reached consensus on the appropriate method of correcting for this problem, MDRC will address it by identifying a set of primary outcomes versus secondary outcomes and by giving priority to statistically significant findings that are part of a pattern over those that appear to be isolated statistically significant effects.
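
The arithmetic behind this concern is straightforward; a quick sketch (illustrative only) shows that 10 independent tests at the 10 percent level yield roughly a 65 percent chance of at least one false positive even when no true effects exist:

```python
# Chance of at least one false positive among k independent tests at
# significance level alpha, when the treatment truly has no effect.
alpha, k = 0.10, 10
print(1 - (1 - alpha) ** k)  # ~0.651; expected false positives = alpha * k = 1
```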


Site-specific and pooled impacts


Consistent with the approach used for the TO2 administrative records analysis, the survey impact analysis will estimate the effects of the alternative rent model for each site separately and for all sites combined. MDRC has estimated Minimum Detectable Effects (MDEs) for selected outcomes (employment, earnings, and housing-related hardship), shown below in Table 2. These estimates suggest that the sample sizes at each housing authority provide adequate statistical power for producing policy-relevant site-specific impact estimates.


Site-specific estimates are important because they will allow the analysis to test the “robustness” of the alternative rent model; that is, each site will provide a type of independent replication test. If the results show that the model’s impacts are positive and consistent across these locations, it would provide evidence that the model can succeed in a variety of locations and for different types of tenants. Alternatively, if large and statistically significant variations in sites’ impacts emerge, it will be important to explore what local conditions and/or implementation factors might be generating that variation in the model’s effectiveness.


Pooled impact estimates, which will show the effects of the alternative rent policy across all four sites combined, are also important. A pooled analysis will provide a summary estimate of the overall effects of the policy across the demonstration sites. And because of its larger sample size, the pooled analysis will have more statistical power and yield more precise impact estimates, especially for subgroups of the voucher population. Of course, a pooled analysis will be especially helpful if the effects of the alternative rent policy are generally similar across all of the sites. If those effects differ dramatically, a pooled estimate may be misleading. In that case, the evaluation will give more attention to the site-specific findings, and to understanding why the effects of the policy vary by site. The analyses being conducted as part of TO2 will begin to inform these types of decisions.


One other consideration that affects the pooled analysis is whether Louisville presents a particular concern. MDRC is finalizing the approach for the TO2 impact analysis, and a consistent approach would be applied to the long-term follow-up survey analysis and the longer-term impact analyses. As background, under a special agreement between HUD and the housing agency, the demonstration allowed households assigned to the alternative rent policy in Louisville to opt out of the new policy. A total of 212 Louisville families (or 22 percent) exercised this option. Even with this opt-out rate, it is still possible to estimate unbiased impacts. Because families who opt out of the new policy remain in the evaluation, the treatment effects would be somewhat diluted (since not all members of the program group receive the treatment, a common situation in experimental tests of social interventions), but the “intent-to-treat” (ITT) impact estimates would not be biased. It may also be possible to apply special statistical methods to estimate unbiased “treatment-on-the-treated” (TOT) effects, because there is little reason to expect that the alternative rent rules would affect the labor market behavior of households who opt out of that policy.
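
One common approach to this TOT adjustment, shown below as a sketch rather than as MDRC’s confirmed method, is the Bloom (1984) no-show correction: divide the ITT estimate by the share of the program group that actually received the treatment, which is valid when the policy has no effect on those who opt out. The ITT value below is hypothetical.

```python
# Bloom no-show adjustment: TOT = ITT / (share of program group treated).
# Valid under the assumption that the policy has zero effect on opt-outs.
itt_estimate = 0.03   # hypothetical ITT impact (e.g., on an employment rate)
optout_rate = 0.22    # 22 percent of Louisville's program group opted out
tot_estimate = itt_estimate / (1 - optout_rate)
print(round(tot_estimate, 4))  # ~0.0385: the diluted effect, scaled back up
```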







Table 2. Sample Sizes and Minimum Detectable Effects (MDEs)


  a. MDEs for Employment

Site                    N      Percentage Points    % Change
Lexington, KY             979       7.48               17.0
Louisville, KY          1,908       5.36               12.2
San Antonio, TX         1,869       5.42               12.3
Washington, DC          1,903       5.37               12.2
Pooled                  6,659       2.87                6.5


  b. MDEs for Annual Earnings

Site                    N      Dollars    % Change
Lexington, KY             979   $1,071      15.3
Louisville, KY          1,908     $767      11.0
San Antonio, TX         1,869     $775      11.1
Washington, DC          1,903     $768      11.0
Pooled                  6,659     $410       5.9


  c. MDEs for Housing Hardship

Site                    N      Percentage Points    % Change
Lexington, KY             979       6.03               30.2
Louisville, KY          1,908       4.32               21.6
San Antonio, TX         1,869       4.37               21.9
Washington, DC          1,903       4.33               21.7
Pooled                  6,659       2.31               11.6




Sample size: N = Full sample (control + program group)


Assumptions: Control-group levels are assumed to be 44 percent for employment, $7,000 for mean annual earnings, and $7,100 for the standard deviation of annual earnings. MDEs are calculated for a two-tailed test at the 10 percent significance level with 80 percent statistical power, assuming an R-squared of .10 for each impact equation.
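
As a check, the MDEs above can be reproduced from these stated assumptions using the standard two-group formula, MDE = (z_(1-alpha/2) + z_power) x sqrt(1 - R-squared) x SE(difference in means). The sketch below is illustrative and assumes the per-site program/control splits shown in Table 1:

```python
from scipy.stats import norm

def mde(n_t, n_c, sd, alpha=0.10, power=0.80, r2=0.10):
    """Minimum detectable effect for a two-tailed difference-in-means test,
    with baseline covariates absorbing a share r2 of outcome variance."""
    m = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # ~1.645 + 0.842 = 2.486
    se = sd * (1 / n_t + 1 / n_c) ** 0.5           # SE of the difference
    return m * (1 - r2) ** 0.5 * se

# Employment: binary outcome with a 44 percent control-group level,
# so sd = sqrt(0.44 * 0.56) ~ 0.496.
sd_emp = (0.44 * 0.56) ** 0.5
print(round(100 * mde(486, 493, sd_emp), 2))    # Lexington: ~7.48 points
print(round(100 * mde(3309, 3350, sd_emp), 2))  # Pooled:    ~2.87 points

# Annual earnings: standard deviation of $7,100 per the assumptions above.
print(round(mde(3309, 3350, 7100)))             # Pooled: ~$410
```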



Subgroup impact analysis


Both theory and findings from other evaluations of similar programs (e.g., those that tested work incentives for low-income populations, and for voucher recipients in particular) suggest that changes to the rent structure may have different effects for different types of families. For example, the alternative rent model may have larger effects on tenants who are not employed at the time of their recertification interview, or who are working part time, since it is often easier for such individuals to enter work or increase their hours than for those already working to advance to higher-wage jobs. The new policy may also have different effects depending on a tenant’s barriers to work or preparation for work. Looking across a number of subgroups, the evaluation will investigate whether changes in the rent structure have more pronounced or different effects on key survey-based outcome measures, as well as on the key outcomes based on the longer-term administrative records data.2
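
Formally, subgroup impacts of this kind can be estimated by adding a subgroup indicator and its interaction with treatment status to the basic impact model above. The specification below is one standard way to do this, not necessarily the exact model MDRC will estimate:

Y_i = α + β1·P_i + β2·(P_i × S_i) + γ·S_i + δ·X_i + ε_i

where S_i equals one if sample member i belongs to the subgroup and zero otherwise. Here β1 is the estimated impact for members outside the subgroup, β1 + β2 is the impact for subgroup members, and a test of β2 = 0 indicates whether the policy’s effects differ between the two groups.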

The confirmatory subgroups were specified in advance in order to avoid the potential for data mining and the problem of multiple comparisons. Subgroups can be chosen as confirmatory because prior theory predicts that program differences will vary by a subgroup dimension, because differences in impacts by a given dimension have been found in prior evaluations, or because a given subgroup is of great policy interest.

The subgroup analysis will prioritize baseline measures from the administrative records data. Other MDRC studies have relied on BIF data to define the subgroups, but BIF completion rates were somewhat lower for the study sites (89 percent in Lexington, 82 percent in Louisville, 71 percent in San Antonio, and 79 percent in Washington, DC), making them less complete for subgroup definition purposes.


  3. Describe methods to maximize response rates and to deal with issues of nonresponse. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.

In addition to using incentive payments to help maximize response rates, we will contact study participants using various outreach materials (e.g., advance letters, email invitations, brochures, and follow-up letters) that convey the importance of the study and the contribution their participation will make to future rent reform policy. We will also maximize response rates by providing options for completing the survey (web and telephone) and by maintaining contact with sample members through various modes across many attempts. We will follow up our attempts to complete surveys via the web with email and telephone invitations, as well as hardcopy mailings of invitation letters and reminders. During nonresponse follow-up, we will use trained field staff to locate study participants who have not yet completed the survey and help them connect to the web instrument or call in to the CATI center to complete the survey. We have found that direct contact with respondents by phone following unsuccessful email and hardcopy invitations can often break through resistance and help to increase cooperation and response rates.


A high response rate will minimize potential bias in the survey data. As in other MDRC research studies, a response-bias analysis will be conducted and will focus on the following:

  • Comparisons of survey response rates by research group;

  • Formal (regression-based) and informal (cross-tabulations and means) analyses of differences in background characteristics between program- and control-group respondents; and

  • Comparisons of estimates of program effects on outcomes calculated with administrative data (for example, total earnings after random assignment, calculated with National Directory of New Hires (NDNH) wage data) for the survey respondent sample and the survey fielded sample (respondents and nonrespondents combined).

Most likely, these analyses will be included in a technical appendix to the report with the survey findings, including recommendations for interpreting the results. Researchers and policymakers may have greater confidence in outcomes and program effects estimated from survey data when levels of response bias are low or moderate, but should interpret findings with caution when levels of survey response bias are high. In addition, weighting of results could be used as a method to limit response bias if significantly high levels of response bias were detected in the survey response analysis.
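
For illustration, the first two of these checks might look like the following sketch (the file and variable names are hypothetical, not MDRC’s actual code):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical fielded-sample file: one row per fielded sample member, with
# a 0/1 response indicator, the research-group indicator, and baseline
# characteristics. All names are illustrative.
fielded = pd.read_csv("fielded_sample.csv")

# (1) Compare survey response rates by research group.
print(fielded.groupby("program")["responded"].mean())

# (2) Regression-based check: does the probability of responding depend on
# research group, baseline characteristics, or their interaction (which
# would signal differential nonresponse between the two groups)?
probe = smf.logit(
    "responded ~ program * (age + employed_at_baseline + hh_size)",
    data=fielded,
).fit()
print(probe.summary())
```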



  4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


Many of the items in the proposed long-term follow-up survey have been successfully administered to low-income households in other large-scale studies of economic self-sufficiency, such as HUD’s Family Self-Sufficiency Program Demonstration. However, because the proposed survey is a compilation of items from multiple sources, a pilot test with up to 9 members of the Rent Reform Demonstration program group will be conducted prior to obtaining OMB clearance, using DIR’s Computer-Assisted Telephone Interview (CATI) center.


Prior to pilot testing the survey, the instrument will be programmed and then tested using scripted mock scenarios and autotesting with a specific focus on all potential pathways and skip patterns. We will also train a small number of CATI interviewers using a study-specific training program that includes comprehensive written materials, role playing, and practice exercises. DIR will interview early Demonstration participants to pretest the instrument, its timing, and other attributes. Pretesting the instrument will help ensure that the wording and flow of questions work as intended.


  5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractors, grantees, or other person(s) who will actually collect or analyze the information for the agency.


HUD’s Office of Policy Development and Research will work with the contractors, Decision Information Resources (DIR) and MDRC, to conduct and analyze the proposed data collection. Marina L. Myhre, Ph.D., a Social Science Analyst in HUD’s Office of Policy Development and Research, Program Evaluation Division, serves as the Contracting Officer’s Technical Representative (COTR). Her supervisor is Ms. Carol Star. Dr. Myhre and Ms. Star can be contacted at (202) 402-5705 and (202) 402-6139, respectively. DIR is under contract to HUD to conduct the Rent Reform Demonstration long-term follow-up survey. The DIR survey team is led by Dr. Sylvia Epps, project director. Other members of the survey team who worked on the survey protocol design include Ms. Heather Morrison, Ms. Monica Schneider, Mr. Lenin Williams, and Dr. Carol Pistorino. MDRC is under contract to HUD, with its subcontractors (the Urban Institute and Dr. Ingrid Gould Ellen), to analyze the data collected in conjunction with the administrative data being collected as part of the Rent Reform Demonstration impact analysis TO2 contract.


The statistical aspects of the study were developed by the MDRC study team, in consultation with a former MDRC colleague, Dr. Stephen Nunez, and with MDRC senior economist and impact analyst Dr. Cynthia Miller.


We provide the following contact information for these individuals:


Sylvia Epps

Project Director

Decision Information Resources, Inc.

832-485-3730


Carol Pistorino

Senior Advisor

Decision Information Resources, Inc.

832-485-3734



2 The confirmatory subgroups are tenants’ work status/history at the time of random assignment; whether the household head is a single parent with no other adult in the household and is also not employed; whether the household is receiving SNAP benefits; and whether it is receiving TANF benefits. Exploratory subgroups are length of time receiving housing subsidies; the number and ages of non-adult children; adults’ education levels; household income levels; and whether the household includes children age 5 and under.



