B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
1. Respondent universe and sampling methods
Provide a numerical estimate of the potential respondent universe and describe any sampling or
other respondent selection method to be used. Data on the number of entities (e.g., households or
persons) in the universe and the corresponding sample are to be provided in tabular format for
the universe as a whole and for each stratum. Indicate expected response rates. If this has been
conducted previously, include actual response rates achieved.
The universe includes all assisted housing projects and households located in the continental
United States, Alaska, Hawaii, and Puerto Rico. The following housing programs will be
included in the sample:
Public Housing (including Moving to Work [MTW])
PHA-administered Section 8 (Vouchers and Moderate Rehabilitation, including MTW)
Owner-administered Section 8, Section 202 Project Rental Assistance Contract (PRAC),
Section 811 PRAC, Section 202/162 Project Assistance Contract (PAC)
The QC Study sample will be designed to obtain a 95% likelihood that estimated aggregate
national rent errors for all programs are within 2 percentage points of the true population rent
calculation error, assuming an error of 10% of the total rents (based on the statement of work
[SOW]). Table B1.1 presents an example from the FY 2012 QC Study sample—the number of
projects and units by HUD region, their expected number of PSUs, and the number actually
sampled. For the FY 2012 study, 59 distinct PSUs were selected; one PSU had an expectation greater than 1.0 and was selected twice.
Table B1.1. Number of Projects and Units in Sampling Frame by HUD Region for FY 2012

HUD Region | Projects: PIH-Admin Sec 8 | Projects: Public Housing | Projects: Owner-Administered | Projects: Total | Units: PIH-Admin Sec 8 | Units: Public Housing | Units: Owner-Administered | Units: Total | Expected PSU Sample | Actual PSU Sample
US | 13,922 | 7,204 | 19,944 | 41,070 | 2,195,755 | 1,077,747 | 1,362,775 | 4,636,277 | 60.0 | 60
1 | 1,065 | 481 | 1,791 | 3,337 | 150,152 | 67,804 | 122,575 | 340,531 | 4.42 | 4
2 | 1,673 | 616 | 1,671 | 3,960 | 293,947 | 242,101 | 163,205 | 699,253 | 9.57 | 10
3 | 1,203 | 781 | 1,961 | 3,945 | 192,594 | 110,019 | 152,636 | 455,249 | 6.04 | 6
4 | 2,592 | 1,975 | 3,695 | 8,262 | 381,777 | 274,734 | 242,442 | 898,953 | 12.13 | 12
5 | 2,102 | 1,183 | 4,251 | 7,536 | 324,379 | 158,065 | 302,274 | 784,718 | 10.32 | 11
6 | 1,676 | 988 | 1,763 | 4,427 | 253,925 | 102,254 | 111,445 | 467,624 | 5.85 | 5
7 | 658 | 473 | 1,198 | 2,329 | 83,443 | 35,880 | 61,423 | 180,746 | 2.33 | 2
8 | 508 | 160 | 850 | 1,518 | 63,476 | 15,705 | 38,475 | 117,656 | 1.43 | 2
9 | 1,873 | 321 | 1,899 | 4,093 | 355,779 | 49,723 | 132,554 | 538,056 | 6.11 | 6
10 | 572 | 226 | 865 | 1,663 | 96,283 | 21,462 | 35,746 | 153,491 | 1.8 | 2
In previous studies, a household sample size of 2,400 has been shown to provide acceptable precision for estimates of the total average error. Table B1.2 shows the expected number of sampled projects and households by housing program type for the FY 2013 study.
Table B1.2. Number of Sampled Projects and Tenants by Program Type for FY 2013

Program Type | Number of Projects | Number of Tenants
Public Housing | 200 | 800
PHA-Administered Section 8 | 200 | 800
Owner-Administered | 200 | 800
Total | 600 | 2,400
Response Rates
Three types of non-response may affect this data collection: non-response by PHAs/owners, tenants, and third-party entities.
PHAs/owners
Project-Specific Information
Participation by selected PHAs/owners is mandatory because their contracts with HUD require participation in studies of this type. In the FY 2012 study, all PHAs/owners completed the Project-Specific Information Form, resulting in a 100 percent response rate. We anticipate a similar response rate for the upcoming studies.
Project Staff Questionnaire
Participation by selected PHAs/owners is mandatory because their contracts with HUD require participation in studies of this type. For the FY 2012 study, 548 of the 554 PHAs/owners completed the Project Staff Questionnaire, resulting in a 99 percent response rate.
Tenants
Participation by selected tenants is mandatory; refusal to participate could result in termination of their assistance. In the FY 2012 study, 246 tenants were non-responsive out of 2,404 total tenants, resulting in a 90 percent tenant response rate.
The most common reason for tenant non-response was that the tenant moved out before ICF Macro abstracted data from the household file. Other common reasons for replacement included: 1) the tenant refused to participate in the study, 2) legal eviction proceedings were underway against the tenant, and 3) the tenant was away for an extended period and could not be contacted for an interview during the four-month data collection window. Field interviewers are required to make at least four in-person contact attempts to complete interviews with individuals who try to evade the interview. A similar tenant non-response rate is anticipated for the FY 2013 study. Study time limits and budget constraints do not allow us to further pursue tenants who evade, refuse, or are away during the data collection period.
Third-Party Entities
Third-party entities are not required to complete our request for verification information. In the FY 2012 data collection cycle, ICF Macro obtained 2,247 of the 2,807 forms requested, an 80 percent response rate. We anticipate a similar response rate for the FY 2013 study.
2. Procedures for collection of information
Describe the procedures for the collection of information, including: Statistical methodology for
stratification and sample selection; the estimation procedure; the degree of accuracy needed for
the purpose described in the justification; any unusual problems requiring specialized sampling
procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce
burden.
Basic Cluster Design
Two levels of clustering will be used in this study:
Projects clustered within PSUs, which are generally groups of counties
Households clustered within projects
A sample of 60 PSUs will be selected, with 10 projects per PSU and four households per project (allowing PSUs and projects to be selected more than once if sufficiently large). The design calls for equal allocation across the three HUD programs: 200 Public Housing, 200 PIH-administered Section 8, and 200 Owner-administered projects. Early study samples were designed to yield, in expectation, the same number of households for each program type, but for the last several years the design has been modified to select exactly the same number of households per program. One additional project has been added to Public Housing to ensure contractual compliance in the event that something prevents data from one project from being properly collected or processed.
Definition, Allocation, and Sampling of Clusters
Source Files Used for Sample Selection
The source files for the FY 2013 study are currently being reviewed. Based on previous
experience with the types and numbers of files typically provided in past years, we expect to
receive similar information.
OWNER-ADMINISTERED PROJECTS. HUD provided one file of information on Owner-administered projects. The file had a record for each property, including the address. Certain types of contracts were excluded from the file because the rent calculation rules used for these contracts are outside the scope of this study; these include SUPP, RAP, and service coordinator contracts.
VOUCHER AND MODERATE REHABILITATION PROJECTS. HUD provided two files that
contained information on Voucher and Moderate Rehabilitation Project households. One file
contained household-level information, including county geographic information. The second
file contained PHA-level information. Out-of-State households (households with transport
vouchers who used them in another state) will be eliminated from the frame. At HUD’s request,
starting in FY 2012, all MTW PHAs will be included in the frame.
PUBLIC HOUSING PROJECTS. One Public Housing Project file was provided by HUD, and
included geographic information for all but a few projects. At HUD’s request, all MTW PHAs
will be included in the frame. The inclusion of MTW PHAs, which began in FY 2012, is a change from previous studies, in which MTW PHAs were excluded. As needed, we will use the county of
the PHA or the county from a previous year’s file to classify these Public Housing projects into
counties. Starting in FY 2012, we used the number of occupied units instead of the number of
assisted units as the measure of size for a project. This has greatly reduced the number of frame
issues that arise in the field due to project renovations and demolitions.
Across all program types, projects covering fewer than 10 units will be excluded. This exclusion
will take place to avoid unreasonable burden on especially small projects, and to increase the
efficiency of the data collection by decreasing travel to numerous small projects to collect the
2,400 cases. This change was implemented starting with the FY 2011 study. In previous years,
the number 14 was used. The number 14 was chosen at a time when 7 households were selected
per project to ensure there would be a sufficient number of replacements per project. However,
since only four households are needed from each project, a minimum of 10 households should
prove sufficient, while slightly improving the frame. In addition, any projects that are located in
Guam, the Northern Mariana Islands, and the Virgin Islands will be removed from the frame
because of their relatively small size and logistical issues. In Alaska, there is only one PHA for Public Housing and PIH-administered Section 8 projects, and it is an MTW PHA. In previous years, because that PHA was out of scope as an MTW PHA and the remaining Owner-administered
projects were small and fairly dispersed, Alaska was removed from the frame. With the inclusion
of MTW PHAs, Alaska will be included in the frame. Once the above files are processed, it will
be possible to estimate the number of households in each program in each county.
Sample Cluster Size
The clustering procedure will use counties as the initial cluster. Clusters will be restricted to
those with a minimum number of households and projects. In the FY 2012 study, the
requirements were 40 projects and 1,500 households, and at least 2 PHA/county combinations.
These numbers vary slightly from year to year, depending on the degree of clustering found in
the data files provided by HUD. For these purposes, vouchers will be counted as 1 project for the
first 300 households and as an additional project for every 200 households above that (e.g., 500
households would count as 2 voucher projects, but 501 would count as 3). When a county does
not meet the criterion, we will identify the nearest county in the same state and merge the two. A
total of 370 clusters were created for the FY 2012 study, 371 for the FY 2011 study, and 340 for
the FY 2010 study.
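For illustration, the following is a minimal Python sketch of the voucher-counting rule described above. The function name is hypothetical, and the treatment of a partial group of 200 households (any remainder counts as an additional project) is an assumption made consistent with the 500/501 example in the text.

```python
import math

def voucher_project_count(households: int) -> int:
    """Count voucher households as pseudo-projects for clustering:
    one project for the first 300 households, plus one additional project
    for every 200 households (or fraction thereof) above 300."""
    if households <= 300:
        return 1
    return 1 + math.ceil((households - 300) / 200)

# Examples from the text: 500 households -> 2 projects, 501 -> 3.
assert voucher_project_count(500) == 2
assert voucher_project_count(501) == 3
```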
The clustering program has been highly effective in previous years’ efforts, except that
occasionally the resulting PSUs have been unnecessarily large. This has been resolved in the past
by a manual revision of PSUs after selection. We will use the new files to create PSUs, and will
examine the resulting PSUs to determine whether it is desirable to modify the resulting parameters.
We will select PSUs with probabilities proportional to size (PPS), a standard approach followed
in most national surveys. However, the study calls for an equal number of households to be
selected from each of the three major program types. To accomplish this, we will select PSUs
with a size measure calculated as the average of the proportions of households from each of the
three programs found in the PSU. The number of households in each program within a PSU will
be divided by the number of households nationwide. The three values will be averaged to create
a measure of size that sums to one.
The size measure will then be multiplied by 60—the number of PSUs to be selected—to obtain
the expectation of selection for each PSU. If this expectation is less than one, it will be
interpreted as the probability of selection of the PSU. If it is greater than one, the PSU will be
selected with certainty. The integer part of the expectation will indicate the minimum number of
times the PSU can be selected, and the fractional part will indicate the probability that the PSU
will be selected one additional time.
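The calculation can be sketched as follows. The PSU names and household counts below are hypothetical; only the combined size measure and the interpretation of its integer and fractional parts follow the description above.

```python
# Illustrative only: compute the combined PPS size measure and selection
# expectations for hypothetical PSU counts (all numbers are made up).
psu_counts = {
    "PSU_A": {"public_housing": 5000, "sec8": 12000, "owner": 8000},
    "PSU_B": {"public_housing": 800,  "sec8": 2500,  "owner": 1200},
}
national = {prog: sum(c[prog] for c in psu_counts.values())
            for prog in ("public_housing", "sec8", "owner")}

N_PSUS_TO_SELECT = 60
for name, counts in psu_counts.items():
    # Average of the three program proportions; sums to 1 across all PSUs.
    size = sum(counts[p] / national[p] for p in national) / len(national)
    expectation = size * N_PSUS_TO_SELECT
    certain = int(expectation)          # guaranteed number of selections
    extra_prob = expectation - certain  # chance of one additional selection
    print(name, round(expectation, 3), certain, round(extra_prob, 3))
```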
Sample Cluster Selection
The PSUs will be grouped within states and then within HUD-defined regions. Table B2.1
illustrates the classification of states, the District of Columbia, and Puerto Rico to HUD regions.
Table B2.1. Allocation of States to HUD Regions

HUD Region | States
1 | CT, MA, ME, NH, RI, VT
2 | NJ, NY
3 | Washington DC, DE, MD, PA, VA, WV
4 | AL, FL, GA, KY, MS, NC, Puerto Rico, SC, TN
5 | IL, IN, MI, MN, OH, WI
6 | AR, LA, NM, OK, TX
7 | IA, KS, MO, NE
8 | CO, MT, ND, SD, UT, WY
9 | AZ, CA, HI, NV
10 | AK, ID, OR, WA
States will be sorted in a random order within regions, and PSUs will be randomly sorted within
states. As the frame is prepared for the selection of PSUs, PSUs will be arranged in order, and
each assigned an expectation value. A cumulative distribution of the expectations will be calculated by adding the expectation of each PSU to the cumulative expectation of the previous one (starting at 0 for the first PSU). Thus, the real numbers between 0 and 60 will be divided into segments, where each
PSU is represented by the segment between the cumulative expectation of the previous PSU (or
0 for the first PSU) and its cumulative expectation. A random number (x) between 0 and 1 will
be selected, and the integers from 0 to 59 will be added to the random number. The numbers x,
1 + x, 2 + x, and so on until 59 + x will define the selected PSUs. A PSU will be selected as
many times as one of these numbers falls into its corresponding segment.
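A minimal Python sketch of this systematic selection step follows, assuming the PSU expectations have already been computed and the frame already sorted as described. The function and variable names are illustrative, not the study's production code.

```python
import random

def select_psus(expectations, n_select=60, seed=None):
    """Systematic PPS selection with minimal replacement. `expectations` is a
    list of (psu_id, expectation) pairs, already sorted, with expectations
    summing to n_select. A PSU is selected once for each hit point x + k
    (k = 0..n_select-1) that falls in its cumulative-expectation segment."""
    rng = random.Random(seed)
    x = rng.random()                              # random start between 0 and 1
    hits = [x + k for k in range(n_select)]
    selections = {}
    cum = 0.0
    i = 0
    for psu_id, exp in expectations:
        cum_next = cum + exp
        count = 0
        while i < len(hits) and hits[i] < cum_next:
            count += 1
            i += 1
        if count:
            selections[psu_id] = count            # >1 means selected multiple times
        cum = cum_next
    return selections

# e.g. select_psus([("PSU_A", 1.4), ("PSU_B", 0.6), ("PSU_C", 0.3), ...])
```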
This is essentially the Goodman-Kish approach (1950), but using sampling with minimal
replacement (Chromy, 1979). This procedure results in sample sizes roughly proportional to the
number of households in each region, but counting households in the smaller program types
more than those in the larger program types. Rather than allocate a number of clusters to each
region, this method implicitly stratifies the sample and essentially allows a fractional allocation.
In other words, if the expectation for a region should be 4.6 PSUs, it would have a 40% chance
of getting 4 and a 60% chance of getting 5.
In addition, once the PSUs are selected, the larger PSUs will be divided and one of the parts will
be selected with PPS. The decision whether or not to divide will be made subjectively,
using a map to determine data collection burden. Once a division is made, one of the parts will
be selected with PPS using the same combined size measure used in selecting the PSUs.
Allocation and Sampling of Projects
Over the last few years of quality control studies, different methodologies have been used in the
allocation and sampling of PHAs/projects. These methodologies have been employed to identify
an approach that continues to improve the evenness of probabilities of selection. As has been
done since the FY 2006 study, projects will be allocated to program types within PSU to ensure
the following conditions:
The number of projects per PSU will be 10 times the number of times the PSU was selected.
The number of projects per program type will be 200 nationwide (counting a project selected
multiple times by the number of times it is selected).
The number of projects to be selected in a PSU for a program type will be approximately proportional to the ratio of the number of households in that program type in the PSU to the number of households in that program type in all the selected PSUs.
The third condition will require rounding, and an iterative process will be necessary to achieve
allocations that yield integers for all program-type cluster combinations.
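The following sketch illustrates the fractional allocation for a single PSU with hypothetical counts. It substitutes simple largest-remainder rounding for the controlled rounding described above, so it preserves only the 10-projects-per-PSU row total, not the 200-per-program column totals; it is not the study's allocation code.

```python
def allocate_projects(psu_households, selected_psu_totals, projects_per_psu=10):
    """Fractional allocation of projects to program types within one PSU,
    proportional to (households in the program in the PSU) /
    (households in the program across all selected PSUs), scaled to sum to
    projects_per_psu, then rounded by largest remainder."""
    shares = {p: psu_households[p] / selected_psu_totals[p] for p in psu_households}
    scale = projects_per_psu / sum(shares.values())
    fractional = {p: s * scale for p, s in shares.items()}
    integers = {p: int(v) for p, v in fractional.items()}
    shortfall = projects_per_psu - sum(integers.values())
    for p in sorted(fractional, key=lambda q: fractional[q] - integers[q],
                    reverse=True)[:shortfall]:
        integers[p] += 1
    return fractional, integers

# Hypothetical counts for one PSU and for all selected PSUs combined.
frac, ints = allocate_projects(
    {"public_housing": 4_000, "sec8": 9_000, "owner": 6_000},
    {"public_housing": 900_000, "sec8": 1_800_000, "owner": 1_100_000})
print(frac, ints)
```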
For each program type, 200 projects will be selected nationwide. Although nationally the
Voucher program has significantly more units than the Public Housing or Owner-administered
programs, because each program type is sufficiently large on its own, each subdomain
approximates an infinite population, and the sample size does not need to increase to achieve a
95% confidence interval. In addition, selecting the same number of projects per program type allows more precise estimates at the individual program level, along with national estimates. While this approach does result in slightly less optimal total national estimates, the estimates are expected to remain within the targeted 95% confidence interval.
These will be selected by first allocating a fractional number of projects to each sampling cell
(program type/PSU combination), and then using controlled rounding to make the rows add up to
10 projects per PSU and the columns to 200 projects per program. After obtaining the allocations
for FY 2013, a sample of projects will be selected from each sampling cell, with probabilities
proportional to the number of households. As in previous years, our methodology will allow
PHA-administered Section 8 projects to be selected more than once, but Public Housing and
Owner-administered projects will be selected only once. The same PPS systematic approach
used to select PSUs will be used to select projects. Projects will be sorted by program type,
county, and PHA prior to selection to ensure diversity.
Selection of Households
The initial household sample is designed to be self-weighting by program. The term self-weighting
refers to a sample where all households being sampled have the same weight, assuming that the
frame is accurate and 100% response is achieved. However, differences between the number of
occupied units found in a project and the number of units listed in the frame, along with the fact
that the contract requires that the major housing programs be represented in approximately equal
numbers, may lead to some deviations from a self-weighting sample by program; thus, the
household sample will be approximately self-weighting. To compensate for this issue, we will make sampling decisions project by project once the project is sampled and its actual size is determined.
Consider the initial theory behind the sample. Let f be the fixed sampling rate desired for all
households in the Nation. Let pj be the overall probability that project j with Nj households is
selected. The needed number of households to be sampled (nj) from the project to equalize
weights is given by nj = fNj/pj. (We note that nj may be greater or less than n, the desired fixed
sample size.) As a practical matter, project sample size will not be permitted to vary in
accordance with this formula, as this would create highly disparate interviewer workloads. It
will, however, be allowed to vary if more than a two-to-one ratio between projected and actual
weight is discovered.
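A small worked example of the nj = fNj/pj calculation and the two-to-one weight check follows. The project size and selection probability are hypothetical; only the frame total used to illustrate f comes from Table B1.1.

```python
# Hypothetical values; none of these are the study's actual parameters except
# the FY 2012 frame total used to illustrate the sampling rate f.
f = 2400 / 4_636_277      # desired overall household sampling rate
Nj = 350                  # households in project j (illustrative)
pj = 0.045                # overall selection probability of project j (illustrative)

nj = f * Nj / pj          # households needed from project j for equal weights
print(round(nj, 2))       # may be more or less than the fixed size of 4

# In practice each project contributes 4 households; the sample size is only
# allowed to vary when the implied weight differs from the projected weight
# by more than a factor of two.
fixed_n = 4
projected_weight = 1 / f                  # weight under a self-weighting design
actual_weight = Nj / (pj * fixed_n)       # weight implied by the fixed sample size
ratio = max(actual_weight, projected_weight) / min(actual_weight, projected_weight)
print(ratio > 2)                          # False for these illustrative numbers
```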
Because the selection of households will be completed at the PHA/owner site, the sampling
procedures need to accommodate a variety of possible situations related to the availability of
household lists and information. Interviewer procedures will provide instruction on how to select the
sample, and ICF headquarters staff will be available to provide sampling assistance to the field
interviewers by telephone. Because the selection of households will be done mostly onsite by the
field interviewers, procedures will accommodate a wide variety of possible situations and will be
simple to implement. A number of replacement households equal to the number of households
selected will also be sampled simultaneously. If a household is unavailable for an interview, it will be
replaced. However, some Public Housing households are flat rent cases. Since flat rent cases do not
need to be interviewed, they are never unavailable, and thus will not be selected as replacements for
unavailable households.
The optimal number of households per project is based on a cost ratio of two additional households for each additional project, the PSU intraclass correlation (ρ), project cost (C), and household cost (c):

opt. n = [C(1 − ρ)/(cρ)]^(1/2)
References for this formula can be found in Hansen, Hurwitz, and Madow (1953), formula 16.2. We estimate that adding a project would result in a cost comparable to adding two
households. In the FY 2003 study, we applied this formula and determined that a sample size of
2.74 households per project would be optimal. We chose four households per project in order to
preserve an acceptable measure of intra-project variance and to take advantage of the fact that
errors have a slight tendency to be concentrated in projects. In fact, we found in the FY 2007
study that the projects accounted for almost 6% of the variance in gross error, and this was
statistically significant (p < .001). We have used the same basic design since the FY 2003 study,
with minor modifications.
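As a worked example of the formula above, a project-to-household cost ratio of 2 and an intraclass correlation of roughly 0.21 (an illustrative value, not the study's estimate) reproduce an optimum of about 2.74 households per project:

```python
import math

def optimal_households_per_project(cost_ratio, rho):
    """opt. n = sqrt(C(1 - rho) / (c * rho)), with C/c supplied as cost_ratio."""
    return math.sqrt(cost_ratio * (1 - rho) / rho)

# Illustrative: a 2-to-1 cost ratio and rho of about 0.21 give roughly 2.74.
print(round(optimal_households_per_project(2, 0.21), 2))
```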
The optimal number of projects and households per cluster is a function of logistics. The same
two-to-one ratio that was applied to calculate the optimal number of households per project can be
used to define cost units. A cost unit is the cost of including a household in the survey. Cost units
are a function of the data collector’s time and other factors. Ten projects and 4 households per
project in a PSU produce 60 cost units (2 × 10 + 1 × 10 × 4 = 60). A design with 6 projects and 8
households per project would also have 60 cost units (2 × 6 + 1 × 6 × 8 = 60). Experience has shown that more than 60 cost units results in an impractical amount of work for one data collector to handle. We believe that 60 cost units provide the best balance between logistical requirements
and design effect. Given these issues, we decided to sample 4 households per project, 10 projects
per cluster, and 60 clusters, for a total of 2,400 households.
Weighting
The procedure to determine the final weights involves several steps, including calculating the project weight (w1); calculating the household weight (w3); accounting for ineligible households (fe); accounting for nonresponding households (fn); poststratifying (fp); and, finally, trimming the weights.
Calculating the Project Weight (w1)
The first step to determine the final weights is calculating the project weight by compiling the
sampling probabilities calculated during the cluster and project sampling and the initial data
collection process. These probabilities will then be used to calculate each project’s probability of
selection. The probability of selection of a project will be the product of the following:
1) The probability of selection of the cluster
2) The probability of selection of the subcluster if the cluster was divided
3) The probability of selection of the project from its respective cluster
Each cluster will be sampled with probabilities proportional to size. The measure of size to be
used is the number of households adjusted to obtain equal expectation for the three major types
of programs in the study. The number of households of each program in a cluster will be
multiplied by an inflation factor to make all three numbers equal. The probability of selection of the cluster (p1) will be calculated in three steps. First, the proportion of the households in each of
the three programs in a particular cluster will be obtained. Next, these proportions will be
defined as the number of households in each program within a cluster, divided by the number
nationwide (program’s population count). Finally, the three proportions in each cluster will be
averaged and multiplied by 60, the number of clusters to be selected nationwide.
In some instances, clusters may be geographically too large to collect data in a cost-effective
manner. To accommodate this logistical problem, clusters may be divided into two or more
subclusters or smaller geographic areas. A subcluster will then be sampled from the group of subclusters using probabilities proportional to size. This will result in the same probability that would have ensued if the division had taken place before drawing the sample, or the probability of selection of the subcluster (p2). If the cluster is not divided into smaller clusters, then the subcluster probability of selection will be one. The formula to calculate the project weight follows:

w1 = 1 / (p1 × p2 × p3)
Clusters with probabilities greater than one may be selected more than once (sampling with
minimal replacement). These clusters are certainty clusters, in that their selection into the sample
is guaranteed. For the purposes of calculating the project weight, the certainty clusters’
probability of selection will be set to one.
The probability of selection of a project from its respective cluster (p3) will be calculated in two
steps. First, the number of households in a program type within a project will be divided by the
total number of households in a program type within the project’s cluster. This proportion will
then be multiplied by the number of projects in a program type to be selected from the cluster.
The PHA-administered Section 8 projects may have a probability greater than one for sampling
purposes (meaning they could be sampled more than once). However, for the other two major
program types, if the calculated probability exceeds one, it will be set to one and all the other
probabilities will be readjusted so that they add up to the allocation for the program in the cluster.
For weighting purposes, probabilities greater than one among PHA-administered Section 8
projects will be set to one.
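A minimal sketch of the project weight computation, using hypothetical probabilities for the three steps described above:

```python
# Hypothetical probabilities; in the study these come from the PSU, subcluster,
# and within-cluster project sampling steps described above.
p1 = 0.85   # probability of selection of the cluster (set to 1 for certainty clusters)
p2 = 1.0    # probability of selection of the subcluster (1 when not divided)
p3 = 0.12   # probability of selection of the project within its cluster

w1 = 1 / (p1 * p2 * p3)   # project weight
print(round(w1, 2))
```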
Calculating the Household Weight (w3)
The second step to determine the final weights will be to calculate the household weight. To do this, the number of households in the project (Nj) and the number of households sampled from the project (nj) will be identified. The household probability of selection within the sampled project is the number of sampled households divided by the number of households in the project (nj/Nj).

The household within project weight (w2) is the inverse of the probability of selecting the household within the sampled project:

w2 = Nj/nj

The household base weight (w3) is the product of the project weight and the household within project weight:

w3 = w1 × w2
Account for Ineligible Households (fe)
The third step in the weighting process will be to account for ineligible households within the sampled project. To do this, the number of eligible sampled households out of all the households sampled will be needed. Then the ratio of eligible households over sampled households, or the eligibility factor (fe), will be calculated:

fe = (number of eligible sampled households) / (number of sampled households)

The eligibility-adjusted household weight (w4) is the household base weight multiplied by the eligibility factor:

w4 = w3 × fe
Account for Nonresponding Households (fn)
The fourth step in the weighting process is to account for nonresponding households within the sampled project. To do this, the number of eligible households, the number of responding households, and the eligibility-adjusted household weight will be needed. The sum of the eligibility-adjusted household weights for all eligible households in the project and the sum of the eligibility-adjusted household weights for only the responding households in the project will then be calculated. A nonresponse adjustment factor (fn) will be calculated:

fn = (sum of eligibility-adjusted weights, all eligible households) / (sum of eligibility-adjusted weights, responding households)

The nonresponse-adjusted household weight (w5) will be the eligibility-adjusted household weight multiplied by the nonresponse adjustment factor:

w5 = w4 × fn
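The household-level steps can be sketched as follows for a single hypothetical project. The counts, the starting project weight, and the w2/w4/w5 labels for intermediate weights are illustrative assumptions, not study values.

```python
# Sketch of the household-level weighting steps for one project, using
# hypothetical counts.
w1 = 9.8          # project weight from the previous step
Nj, nj = 250, 4   # households in the project and households sampled

w2 = Nj / nj      # household-within-project weight (inverse selection probability)
w3 = w1 * w2      # household base weight

eligible_sampled, sampled = 3, 4
fe = eligible_sampled / sampled          # eligibility factor
w4 = w3 * fe                             # eligibility-adjusted weight

# Nonresponse factor: sum of eligibility-adjusted weights over all eligible
# households divided by the sum over responding households only.
eligible_weights = [w4] * eligible_sampled
responding_weights = eligible_weights[:2]          # suppose 2 of 3 responded
fn = sum(eligible_weights) / sum(responding_weights)
w5 = [w * fn for w in responding_weights]          # nonresponse-adjusted weights
print(round(w5[0], 2))
```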
Poststratification (fp)
The fifth step in the weighting process is poststratification. The sample was designed to obtain
similar numbers of households in each of the following three program types:
a) Public Housing projects
b) PHA-administered Section 8 projects
c) Owner-administered projects
Population totals for each of the programs will be obtained from HUD. If HUD does not provide
population totals, the FY 2013 sampling frame population totals will be used. However, the
sampling frame totals may not correspond exactly to these population totals and may require
adjustments. The weights will then be adjusted to sum to the known external population totals, so that the sum of the weights would be the same even if a different sample had been selected. In the past, discrepancies of this kind were due in part to special circumstances, such as the exclusion of geographic areas affected by the 2005 hurricanes and the exclusion of Owner-administered projects in Alaska from the frame, both of which were accounted for during the weighting process. Alaska will be included in the frame in FY 2013, but may not be selected.
To poststratify the weights, the nonresponse adjusted household weights within program type
will be summed to estimate the population totals from the HUD sample. For example, the sum of
weights for all Owner-administered households in the sample is an estimate of the total number
of Owner-administered households in the Nation. A poststratification factor (fp) will be calculated by dividing the known external population totals by the estimated population totals from the HUD sample:

fp = (known external population total for the program) / (estimated population total from the sample)

A poststratification factor will be calculated for each program type. This factor will then be multiplied by the household weight within each program type, ensuring that the sum of the household weights by program type is the same as the external population total.
Trimming the Weights
The final step is to trim the weights. Weights more than three times the median weight will be
set to three times the median weight, and all the weights will be readjusted. Large weights
usually result from incorrect frame information.
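A sketch of the poststratification and trimming steps with hypothetical weights follows. The external totals reuse FY 2012 frame counts from Table B1.1 purely as stand-ins, and taking the trimming median across all programs is an assumption; the text does not specify the grouping.

```python
import statistics

# Hypothetical nonresponse-adjusted weights by program type.
weights = {"Public Housing": [1800.0, 2100.0, 9500.0],
           "Owner-Administered": [2500.0, 2650.0, 2700.0]}
external_totals = {"Public Housing": 1_077_747,
                   "Owner-Administered": 1_362_775}

# Poststratification: scale each program's weights to its external total.
for program, w in weights.items():
    fp = external_totals[program] / sum(w)
    weights[program] = [x * fp for x in w]

# Trimming: cap weights at three times the median weight; in practice the
# weights would then be readjusted to restore the totals.
all_w = [x for w in weights.values() for x in w]
cap = 3 * statistics.median(all_w)
trimmed = [min(x, cap) for x in all_w]
print(round(cap, 1), [round(x, 1) for x in trimmed])
```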
Variance Estimation
Standard errors will be obtained for a number of estimates using a delete-a-group Jackknife
procedure. This will be implemented using 20 replicate groups and creating 20 sets of replicate
weights. This procedure is available starting with SAS 9.2 and is considered more robust with
respect to design characteristics than the Taylor series method (Kott, 1998).
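For illustration, a minimal Python sketch of a delete-a-group jackknife with 20 replicate groups follows. The study's implementation is in SAS; the data, random grouping, and weighted-mean estimator here are purely illustrative assumptions.

```python
import random

def dagjk_variance(values, weights, n_groups=20, seed=1):
    """Delete-a-group jackknife variance of a weighted mean: drop one of
    n_groups random groups at a time, reweight the rest by G/(G-1), and
    combine squared deviations of the replicate estimates."""
    rng = random.Random(seed)
    groups = [rng.randrange(n_groups) for _ in values]   # random group assignment

    def weighted_mean(ws):
        return sum(w * y for w, y in zip(ws, values)) / sum(ws)

    full = weighted_mean(weights)
    variance = 0.0
    for g in range(n_groups):
        rep = [0.0 if grp == g else w * n_groups / (n_groups - 1)
               for w, grp in zip(weights, groups)]
        variance += (weighted_mean(rep) - full) ** 2
    return variance * (n_groups - 1) / n_groups

rng = random.Random(0)
data = [rng.uniform(0, 100) for _ in range(200)]   # illustrative outcomes
w = [1.0] * 200                                    # illustrative weights
print(dagjk_variance(data, w) ** 0.5)              # jackknife standard error
```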
3. Maximization of response rates
Describe methods used to maximize the response rate and to deal with issues of non-response.
The accuracy and reliability of information collected must be shown to be adequate for intended
uses. For collections based on sampling, a special justification must be provided for any
collection that will not yield “reliable” data that can be generalized to the universe studied.
Three types of non-response may affect this data collection: non-response by PHAs/owners, tenants, and third-party entities.
PHAs/owners
Participation by selected PHAs/owners is mandatory because their contracts with HUD require participation in studies of this type. In an effort to ensure PHA/owner participation, an initial study notification e-mail is sent to inform them that they have been selected for the study. This e-mail is shortly followed by another e-mail asking for their responses to the Project-Specific Information Survey. PHAs/owners are given a date by which the information is needed; if that time elapses, follow-up e-mails and telephone calls are made to obtain the needed information. If further follow-up is required, a list of the non-responsive PHAs/owners is provided to HUD, which contacts them as well. Appendix C contains study letters that are provided to PHAs/owners at the outset of the study (i.e., Phase I).
Third-Party Entities
Third-party entities such as employers, financial institutions, state social service agencies, medical providers, and pharmacies are not mandated to provide the requested verification. After the initial request via U.S. mail or through The Work Number, an online employment verification system, ICF staff conduct multiple waves of follow-up by telephone and fax.
Tenants
Participation by selected tenants is mandatory; refusal to participate could result in termination of their housing assistance. Field interviewers will make at least four in-person contacts
with the tenant to conduct interviews with individuals who try to evade the interview. Appendix
D contains the letter that is provided to tenants regarding this study. In addition, the following
letter is occasionally used to encourage tenant participation.
Tenant Encouragement Letter
[Date]
Dear [Name],
On [Date] we provided you with a letter from the Department of Housing and Urban Development (HUD)
which explained the study ICF International is conducting for HUD; it informed you that you have been
randomly selected to participate in this study. Since then, our field interviewer [field interviewer name]
has been attempting to get in touch with you to schedule an interview.
HUD and the Federal Office of Management and Budget (OMB) have determined that persons who
receive housing assistance are required to participate in this study. For your information, the OMB
clearance number for this study is 2528-0203. Failure to participate is a basis for terminating your
housing assistance. Your local HUD office has been informed of, and is assisting with, this study.
It is essential that you contact us immediately to schedule an appointment for an interview. If you do not
contact us by [Date], we will be forced to report your lack of cooperation to HUD. Please call the
telephone number identified below to schedule your appointment with the field interviewer directly. If
the field interviewer is not available, call the supervisor listed below for assistance.
The purpose of the study is to learn more about the types of errors that occur during
determinations of eligibility and tenant rents. This information will be used to meet
Congressionally mandated reporting requirements related to the accuracy of rent calculations.
The interview will take 40 to 60 minutes. Information collected by this study will be reported
as statistical summaries; however, individual information is shared with HUD headquarters and
may be made available to those normally responsible for your income and rent determinations.
If you have any general questions about the study, please call me at the toll free number listed below. If
you have questions about our authorization to conduct this study, you may call Dr. Yves Djoko, the
government project officer, at 202-402-5851.
Thank you for your cooperation with this study.
Sincerely,
Melanie Koehn
Data Collection Manager
Interviewer: Name, Phone Number
Supervisor: Name, 877-392-9776 (Toll Free Number)
Use this ID # when calling: C/P/C
4. Tests of procedures or methods
Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an
effective means of refining collections to minimize burden and improve utility. Tests must be
approved if they call for answers to identical questions of 10 or more individuals.
Previous iterations of this data collection serve as the pretests for this data collection effort. As
mentioned previously, similar studies have been conducted in 2000 (data was collected for
actions taken in 1999 and early 2000) and enhanced for the FY 2003 through FY 2012 studies.
Before each data collection cycle, all changes or enhancements to the study are tested in an in-house procedure that evaluates the administrative and computer systems-related aspects of the
study. Prepared case examples (those used in training our field interviewers) are abstracted and
entered into our data collection system. Additionally, mock household interview data is entered
into our data collection system and all associated administrative paperwork is created and
processed. Finally, tracking reports are produced to determine that our reporting system is in
place and accurate.
5. Individuals consulted on statistical aspects of design
Provide the name and telephone number of individuals consulted on statistical aspects of the
design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will
actually collect and/or analyze the information for the agency.
ICF Macro Staff—Design and Data Collection
Mary K. Sistik, Project Director, (301) 572-0488
Dr. Sophia Zanakos, Deputy Project Director, (301) 572-0239
Dr. Pedro Saavedra, Senior Sampling Statistician, (301) 572-0273