Unemployment Insurance Random Audit of Emergency Unemployment Compensation 2008 Claimants

OMB: 1205-0495


B. Collections of Information Employing Statistical Methods


Background

Public Law 112-96, the Middle Class Tax Relief and Job Creation Act of 2012, requires states to perform random audits of the work search requirements for all claimants in the Emergency Unemployment Compensation 2008 (EUC08) program. Prior to passage of this law there was no such requirement, necessitating both the random audits themselves and the collection of data documenting state audit activities and results. More specifically, Section 2141(b)(2) of the Act (“Random Audit”) states that “the Secretary shall establish for each State a minimum number of claims for which work search records must be audited on a random basis in any given week.”


Section 1

Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, state and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


ETA intends to meet the requirement for a random audit by having states randomly select a cohort of claimants and audit their work search activities. The universe subject to audit is every transaction between a state workforce agency (SWA) and an Unemployment Insurance (UI) claimant in which the claimant sought and received compensation. ETA notes that this is a clearly defined universe with no uncertainty as to its size or characteristics. For example, if the state receives 1,800 requests for payment for a week of unemployment during a seven-day period and elects to provide payment for 1,650 of those claims, the universe for that seven-day period would be 1,650.


The universe from which the sample will be drawn is the total number of weeks paid to claimants by the state workforce agency under the Emergency Unemployment Compensation 2008 program. The size of the sample selected will be 0.5% of the total weeks compensated for a given seven-day period. Note that within a single seven-day period, a single claimant may have requested and received more than one EUC08 payment, depending on when the claim is filed and the payment is made. Claimants who are audited are expected to have a 100% response rate because they are required by law to provide their work search records to the state agency when requested; non-compliance may lead to loss of benefits. SWAs currently report aggregate totals for weeks paid on other reports (OMB No. 1205-0010), so ETA has an independent check against any possible state error arising from clerical or programming issues that would not otherwise be detectable.


Section 2

Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection,

  • Estimation procedure,

  • Degree of accuracy needed for the purpose described in the justification,

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


a) Sample Selection. The method for selecting a random sample of transactions will employ a design in which the universe being sampled is known without ambiguity and a subset of known size is drawn from it without replacement. ETA plans to distribute to states a tool based on the Microsoft Excel platform. States enter a personal identifier for each claimant into the spreadsheet and specify the sample size, and a macro performs the sampling for them.

States would begin the process by computing the size of the sample they will be taking and the total number of paid weeks included in the sampling frame (i.e., the universe from which they are sampling). Note that the sample size is 0.5% of the universe (all paid weeks), censored on the low end at 50 and on the high end at 1,500. Below is a table showing the estimated size of the samples under this sampling protocol. The table shows, for each state, the average number of weeks claimed over the four weeks ending 3/10/2012 through 3/31/2012; the average value is used to adjust for temporary fluctuations in level caused by seasonal and administrative changes. The table also shows the size of a 0.5% sample from the average claims level for those weeks. The resulting sample sizes are censored so as to be no larger than 1,500 and no smaller than 50. The number of states subject to censoring is reported at the bottom, along with totals for the universe and the collective state samples.







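To make the censoring rule concrete, the following is a minimal, illustrative Python sketch of the sample-size computation described above. The actual state tool is an Excel macro; the function name, the list-of-weekly-counts input, and the round-half-up convention are assumptions used only for illustration.

    import math

    def weekly_sample_size(weekly_paid_counts, rate=0.005, floor=50, cap=1500):
        # Average the recent weekly counts to smooth seasonal and
        # administrative fluctuations, as described in the text.
        average_universe = sum(weekly_paid_counts) / len(weekly_paid_counts)
        # Take 0.5% of the average universe, rounding .5 up.
        raw = int(math.floor(rate * average_universe + 0.5))
        # Censor at 50 on the low end and 1,500 on the high end.
        return min(max(raw, floor), cap)

    # Example: a state averaging 40,000 paid weeks per week samples 200 cases;
    # very small and very large states are censored to 50 and 1,500 respectively.
    print(weekly_sample_size([39500, 40200, 40100, 40200]))      # -> 200
    print(weekly_sample_size([4000, 4100, 3900, 4000]))          # -> 50
    print(weekly_sample_size([400000, 410000, 395000, 405000]))  # -> 1500
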
Once the sample size is established, the state will create an electronic file containing a record of all paid weeks processed over the prior seven-day period. This file will be sorted first on the size of the weekly payment and second on the last four digits of the claimant's Social Security number.
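
As an illustration only, the sort key could be expressed as follows; the record layout and field names are hypothetical, since the format of the state extract file is not specified here.

    # Each record in the weekly frame is assumed to look like:
    #   {"claimant_id": "A1234", "payment_amount": 312.00, "ssn_last4": "6789"}
    def sort_frame(frame):
        # Sort first on the size of the weekly payment, then on the
        # last four digits of the claimant's Social Security number.
        return sorted(frame, key=lambda rec: (rec["payment_amount"], rec["ssn_last4"]))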

To draw a sample, states will take the sorted file of cases, randomly pick a starting record, and then skip through the records at a fixed interval until the necessary sample size has been reached. The skip interval is computed by dividing the number of records in the sampling frame by the number of records to be sampled that week. The first sample case selected is determined by multiplying the skip interval by the random start number assigned in the input control record for that sample (EUC paid claims). The random start number is a six-place decimal with a value greater than zero and less than one. The product of the skip interval and the random start number is rounded to the nearest integer. If the rounded integer is zero, the case corresponding to the rounded skip interval is selected as the first case in the sample.

For example, assume the following:

  • Number of Records in the Sampling Frame (N) = 118

  • Random Start Number (r) = .260903

  • Total Number of Cases to be Sampled (n) = 4

  • Skip interval (k) = 118 / 4 = 29.5

  • Initial case selected (i) = .260903 x 29.5 = 7.697 = 8 (rounded)

Record 8 in the sampling frame is the first record selected for the sample. Subsequent cases are selected using systematic sampling:

1. Select the initial sample case as described above.

2. Select the next (n-1) cases by adding multiples of the skip interval (k), rounded to the nearest integer, to the case number of the initial selection (i): i + round(jk), where j = 1,2,...,(n - 1).

In the example, cases 8, 38, 67, and 97 will be selected from the sampling frame of 118 records.

If the last case designated for selection by the sampling algorithm is greater than the size of the sampling frame (N), the case will be selected from the beginning of the sampling frame. That is, the sampling frame will be considered to be circular. For example, if the last case selected is N + 1, the 1st case in the sampling frame will be selected.

The general rule is:

if (i + round(jk)) > N, select case h, where h = [(i + round(jk)) - N] and 1 ≤ h < i.
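
The selection rule above can be summarized in a short, illustrative Python sketch. The states' actual tool is an Excel macro; this reimplementation assumes the frame is already sorted as described and uses round-half-up so that, for example, 88.5 rounds to 89 as in the worked example.

    import math

    def round_half_up(x):
        # Round to the nearest integer, with .5 always rounding up,
        # matching the rounding used in the worked example above.
        return int(math.floor(x + 0.5))

    def select_positions(N, n, r):
        # N: number of records in the sampling frame
        # n: number of cases to be sampled this week
        # r: six-place random start number, greater than zero and less than one
        k = N / n                          # skip interval
        i = round_half_up(r * k)           # initial case selected
        if i == 0:                         # if the rounded product is zero,
            i = round_half_up(k)           # use the rounded skip interval instead
        positions = []
        for j in range(n):                 # j = 0, 1, ..., n - 1
            pos = i + round_half_up(j * k)
            if pos > N:                    # the frame is treated as circular
                pos -= N
            positions.append(pos)
        return positions

    # Reproduces the example in the text: cases 8, 38, 67, and 97.
    print(select_positions(N=118, n=4, r=0.260903))   # -> [8, 38, 67, 97]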

b) Estimation Procedures. There are no estimation procedures, as there is no intent to weight the samples of individual cases up to a population-level total and make a population-level inference. In addition, there are no plans to examine the characteristics of the individual claimants audited. Once the random subset of paid weeks has been identified from the universe of all transactions, states will audit the work search records of all claimants identified in that subset. The results of those audits will be reported, in aggregate, on a quarterly basis; only the aggregate data will be reported to ETA by states.

c) Degree of Accuracy needed. Because no population level inference is being made, and no estimation is taking place, degree of accuracy is not a concern. The sampling is occurring from a universe of known size and without non-response bias, so results will accurately reflect the work search efforts of those audited.


d) Unusual problems requiring specialized sampling procedures. Random audit does not involve any unusual problems requiring specialized sampling procedures.


e) Use of periodic data collection to reduce burden.

Less frequent data collection cycles would not be an appropriate means for reducing burden. The UI system currently processes and pays on UI claims on a weekly basis, so an audit of that process is best handled on the same frequency. Were the frequency to be reduced, this could lead to longer delays in identifying people who had false or incomplete work search records and the possibility of larger overpayments.


Section 3

Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.


Claimants who are audited are expected to have a 100% response rate because they are required by law to provide their work search records to the state agency when requested; non-compliance may lead to loss of benefits. Claimants are informed of this requirement, and of the fact that proper work search activity and documentation, as specified in state law, are a condition of receiving compensation for a week of unemployment.

Reliability of Data Collection

States currently report aggregate totals for weeks paid on other reports (OMB No. 1205-0010), so ETA has, in advance of a state’s submittal of data relating to audit activities, a good basis for knowing the size of the universe each state is responsible for sampling, as well as a reliable estimate of the size of the sample itself. It is anticipated that the audits themselves will be subject to monitoring by agencies such as the USDOL Inspector General, the GAO, and others. USDOL will also perform monitoring as necessary to ensure that the audits are performed correctly, so that the resulting data are an accurate reflection of state administrative activity.


Section 4

Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.

No tests are planned.


Section 5

5.1 Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

No individuals were consulted or contacted on the design. The random selection criteria proposed for this collection were drawn from a protocol currently used successfully by the BAM program (OMB No. 1205-0245), which states employ for quality assurance purposes. The aggregate data produced as a result of state audits of work search activities and investigations into claimant eligibility will be reviewed by ETA staff to ensure compliance with the congressional mandate.

5.2 Provide contact information for the agency unit, contractor(s), grantee(s), or other person(s) who will collect and/or analyze the information.

This data will be collected by Reports Team staff within the Office of Unemployment Insurance, Employment and Training Administration, U.S. Department of Labor. Questions, comments or concerns can be addressed to Scott Gibbons by email at the following address: gibbons.scott@dol.gov.
