Request For OMB Clearance Of Data Collection For The Child Care Access Means Parents In School (CCAMPIS) Program
Part B
August 28, 2006
Submitted to:
U.S. Department of Education
Policy and Program Studies Service
400 Maryland Ave. SW, Room 6W226
Washington, DC 20202
Project Officer:
Submitted by:
Mathematica Policy Research, Inc.
600 Maryland Ave. SW, Suite 550
Washington, DC 20024-2512
Telephone: (202) 484-9220
Facsimile: (202) 863-1763
Project Director:
CONTENTS
Section Page
B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS 16
1. Respondent Universe 16
2. Procedures For Sampling Methods And Analysis 18
3. Methods To Maximize Response Rates 27
4. Tests Of Procedures Or Methods 28
5. Individuals Consulted On And Responsible For Statistical Design 30
APPENDIX F: PRETEST MEMO F-1
APPENDIX G: STUDENT LIST WORKSHEET AND DATA LIST WORKSHEET G-1
APPENDIX H: REFERENCES H-1
TABLES
Table Page
B-1 NUMBER OF CCAMPIS GRANTS BY YEAR 17
B-2 NUMBER OF CCAMPIS GRANTEE INSTITUTIONS BY STATE OR TERRITORY, FISCAL YEARS 2001 AND 2002 19
B-3 SAMPLE SIZES AND MDD BETWEEN CCAMPIS GRANTEE AND SIMILAR NONGRANTEE INSTITUTIONS 25
B-4 POPULATION PROPORTION IN CCAMPIS GROUP, MDD, AND POPULATION PROPORTION IN THE COMPARISON GROUP 27
FIGURES
Figure Page
B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

Data for this study will be collected in two phases. We will first collect data from a small proportion of the sample to test procedures that were revised based on the pretest experience. After determining from the first phase whether the revisions (described below) are sufficient to facilitate the collection of key data, we will conduct the second phase of data collection with the remainder of the sample. Data from the first and second phases will be combined for analysis (we do not anticipate making any questionnaire changes between the two phases). This section describes the construction of the sampling frame, the sample selection procedures (which include matching CCAMPIS and non–CCAMPIS institutions and drawing a subsample of matched institutions to participate in the first phase of data collection), and the expected precision of the estimates.
1. Respondent Universe

CCAMPIS institutions are defined as Title IV postsecondary institutions that received CCAMPIS grants during the four cohorts of the program, funded in fiscal year (FY) 1999, FY2001, FY2002, and FY2005. To be eligible for CCAMPIS grants, postsecondary institutions must have received at least $350,000 in Pell Grant funds in the previous fiscal year. A total of 576 CCAMPIS grants have been awarded since 1999. Table B-1 provides the number of CCAMPIS grants by year.
The population of interest for the study will include two groups: (1) postsecondary institutions that received CCAMPIS grants in 2001 and 2002 and (2) CCAMPIS-eligible postsecondary institutions that did not receive such grants.
TABLE B-1
NUMBER OF CCAMPIS GRANTS BY YEAR
Yeara | Count
1999 | 85
2001 | 222
2002 | 122
2005 | 147
Source: Lists of CCAMPIS grantees provided by U.S. Department of Education.
aThere was no competition in 2000, 2003, and 2004.
CCAMPIS Institutions. The cohort of postsecondary institutions awarded CCAMPIS grants during FY2001 and FY2002 was identified as the population of interest for the following reasons:
The 2001 and 2002 grantees will have received up to four years of grant funding for their child care services and therefore will have had an opportunity to implement and refine their services and form perceptions of service effectiveness in promoting students’ persistence and degree completion.
When the survey is fielded at the beginning of the 2006–2007 school year, the 2005 grantees will have implemented only one year of CCAMPIS grant funding; the services implemented by these grantees may not reflect the full capacity of four years of CCAMPIS funding, and staff will not have had an opportunity to observe potential effects of the services.
The CCAMPIS program for the 1999 grantees differed from the CCAMPIS program in later rounds: the CCAMPIS program was not housed in the service area that had responsibility for administering the Federal TRIO Programs, the 1999 grantees were less likely to have had a child care program located on campus, and the amount of grant funds awarded to institutions was much smaller. For these reasons, the changes that the 1999 grantees were able to implement may not be typical of the services offered by later grantees.
For this study, the sample will include all 352 institutions in the sampling frame of CCAMPIS institutions for 2001 and 2002.1
Restricting the population to CCAMPIS grantees in FY2001 and FY2002 means that the study results will pertain to only the 352 institutions in those cohorts (see Table B-2 for the distribution of the CCAMPIS population by state). For this reason, the results will be generalizable only to CCAMPIS grantee institutions that have had a grant for four years.
Non–CCAMPIS Institutions. As noted, the population of non–CCAMPIS institutions is defined as CCAMPIS-eligible Title IV institutions that have never received CCAMPIS funding. The data source for constructing the study population is the IPEDS database. Information on the amount of an institution’s Pell Grant funds awarded to students for the preceding fiscal years (i.e., FY2000 and FY2001) is available from the IPEDS Finance Data component.
2. Procedures For Sampling Methods And Analysis

The study sample will include the universe of all postsecondary institutions that received CCAMPIS grants in FY2001 and FY2002. In addition, we will select a sample of eligible non–CCAMPIS institutions for use as comparison institutions. Unlike a regular sample survey, in which the sample provides the basis for generalizing to a larger group or population, the present study will select non–CCAMPIS institutions in order to generate a set of comparison institutions that “match” the 352 CCAMPIS institutions described earlier. The matched comparison group of institutions ensures that comparisons of the CCAMPIS and non–CCAMPIS groups are not subject to selection bias2, or at least minimizes any such bias.
TABLE B-2
NUMBER OF CCAMPIS GRANTEE INSTITUTIONS BY STATE OR TERRITORY, FISCAL YEARS 2001 AND 2002

State | Count
Alabama | 7
Arizona | 7
Arkansas | 2
California | 49
Colorado | 8
District of Columbia | 1
Florida | 13
Georgia | 10
Idaho | 3
Illinois | 24
Indiana | 6
Iowa | 7
Kansas | 3
Kentucky | 7
Louisiana | 6
Maine | 4
Maryland | 4
Massachusetts | 3
Michigan | 10
Minnesota | 4
Mississippi | 3
Missouri | 7
Montana | 4
Nebraska | 4
Nevada | 1
New Jersey | 5
New Mexico | 2
New York | 17
North Carolina | 11
Ohio | 12
Oklahoma | 4
Oregon | 4
Palau | 1
Pennsylvania | 24
Puerto Rico | 4
South Carolina | 5
South Dakota | 2
Tennessee | 5
Texas | 23
Utah | 5
Virginia | 5
Washington | 14
West Virginia | 4
Wisconsin | 6
Wyoming | 2
Total | 352
Using Propensity Score Models to Identify the Comparison Group. From the population of eligible non–CCAMPIS institutions, we will use the Propensity Score Matching (PSM) method to select a sample of institutions that are comparable to, or “match,” the 352 CCAMPIS institutions. The PSM method estimates propensity scores from observed characteristics on which the two groups (CCAMPIS and eligible non–CCAMPIS institutions) will later be matched. We will estimate propensity score models by logistic regression, in which a binary variable indicating CCAMPIS or eligible non–CCAMPIS comparison group status is regressed on a set of predictors. For the PSM predictors, it will be important to include institutional, student, and community characteristics as well as state child care policies, since differences in these characteristics may affect both the supply of child care services in the community and state and the demand for these services at postsecondary institutions. The IPEDS database will provide institution- and student-level characteristics; data requested from the Marketing Systems Group will provide the key community-level matching variables at the telephone exchange level;3 and state child care policy data will be gathered from several other sources, including Child Care Bureau (U.S. Department of Health and Human Services) statistics available on the Web and state information compiled by Schulman and Blank (2005) and by the National Association of Child Care Resource and Referral Agencies. The following are examples of each of the four types of characteristics that will be considered as matching variables:
Institutional characteristics. Type (two- or four-year), control (public or private), and size of institution; whether the institution offers on-campus child care; and financial data, such as educational and general expenditures.
Student characteristics. Number of part- and full-time students; number of Pell Grant recipients (and their dependent status); and whether the campus is residential or commuter based on the number of students living on campus.
Community characteristics at the telephone exchange level. Percentage of population by age and race, percentage of households by income group, median household income, and percentage of college graduates.
Child care policy (at state level). Indicators of the availability of state child care assistance for low-income families (for example, percentage of eligible children receiving child care subsidies, percentage of subsidized children served by child care centers, and whether the state has a waiting list for child care assistance); whether child care subsidy eligibility covers education activities and under what circumstances (whether parents must also be working, hours of work required per week, and maximum number of years of education or highest level of degree allowed); and indicators of the cost of child care in the state (including average annual fees paid for full-time care for infants, preschool children, and school-age children and copayments for families receiving child care assistance). We also will consider the geographic location of the matched institutions (for example, states or regions). For this purpose, we may carry out matching within a region or include “region” as a predictor in the PSM model.
Note that we will first perform exploratory data analyses on the above variables to determine whether they are predictors of receipt of a CCAMPIS grant before including them in the PSM model.
The goal of PSM is to identify one non–CCAMPIS institution similar to each CCAMPIS institution so that 352 similar non–CCAMPIS institutions will be available for comparison. In deciding among alternative PSM methods, we will examine the extent of overlap between the estimated propensity scores for the CCAMPIS institutions and those for the eligible comparison institutions. Matching will initially pair each grantee with the nongrantee whose propensity score is closest (so-called “greedy” matching). In cases where no matched institution can be found for a grantee institution, we will use a broader range of matching techniques, such as the caliper technique (Rosenbaum and Rubin 1985); for each grantee institution, this method selects all potential comparison institutions whose propensity scores fall within a specified range, or “caliper,” of the grantee’s propensity score. Another possibility is a subclassification technique (Rosenbaum and Rubin 1984), in which subclasses (cells) based on propensity scores are constructed such that grantee and eligible nongrantee institutions within the same cell are considered matches for the comparison.
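To make the mechanics concrete, the sketch below estimates propensity scores with a logistic regression and performs one-to-one greedy matching within a caliper, as described above. It is illustrative only: the data frame layout, the ccampis flag, and the covariate names are hypothetical placeholders rather than the study’s actual variables, and the caliper of 0.05 is an assumed value, not one specified for the study.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_institutions(frame, covariates, caliper=0.05):
    """One-to-one greedy matching on estimated propensity scores.

    `frame` is assumed to hold one row per institution with a 0/1
    column 'ccampis' and the matching covariates; all names here
    are hypothetical placeholders.
    """
    # Step 1: estimate propensity scores with a logistic regression.
    model = LogisticRegression(max_iter=1000)
    model.fit(frame[covariates], frame["ccampis"])
    frame = frame.assign(pscore=model.predict_proba(frame[covariates])[:, 1])

    # Step 2: greedily pair each grantee with the closest-scoring
    # nongrantee, without replacement, subject to a caliper
    # (Rosenbaum and Rubin 1985).
    grantees = frame[frame["ccampis"] == 1]
    pool = frame[frame["ccampis"] == 0].copy()
    pairs = []
    for idx, row in grantees.iterrows():
        if pool.empty:
            break
        gaps = (pool["pscore"] - row["pscore"]).abs()
        best = gaps.idxmin()
        if gaps[best] <= caliper:
            pairs.append((idx, best))
            pool = pool.drop(best)  # match without replacement
    return frame, pairs
```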
To assess the covariate balance between the CCAMPIS and non–CCAMPIS institutions before and after matching, we will compute descriptive statistics (using means or proportions4) separately for each covariate for both groups. MPR will then perform statistical tests that assess whether the two groups are different or similar in terms of the distribution of the covariates.
In addition, we will compute standardized differences5 to measure the covariate balance and will present the values in a graphic such as Figure B-1, which demonstrates that it is possible to assess covariate balance for individual covariates between the CCAMPIS and eligible non–CCAMPIS institutions. We will use an absolute standardized difference greater than 10 to 15 percent as a cut-off point to indicate imbalance.
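For a single covariate, the standardized difference could be computed as in the sketch below. This uses one common formulation (difference in means divided by the pooled standard deviation), which is consistent with the 10-to-15-percent rule of thumb; footnote 5 states the study’s definition more generally, so the denominator here should be read as an assumption.

```python
import numpy as np

def standardized_difference(x_ccampis, x_comparison):
    """Standardized difference (in percent) for one covariate:
    difference in means divided by the pooled standard deviation
    sqrt((var1 + var2) / 2). Absolute values above roughly 10-15
    are taken to signal imbalance."""
    diff = np.mean(x_ccampis) - np.mean(x_comparison)
    pooled_sd = np.sqrt((np.var(x_ccampis, ddof=1) +
                         np.var(x_comparison, ddof=1)) / 2.0)
    return 100.0 * diff / pooled_sd
```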
Subsample for Phase I Data Collection. We will randomly select 36 of the matched pairs of CCAMPIS and non–CCAMPIS institutions from the full sample (about 10 percent of the sample, a total of 72 institutions) for the first phase of data collection (Section B4 provides justification for collecting data in two phases). The full sample will first be stratified by control of the institution (public, private nonprofit, private for-profit) and level of the institution (four or more years, at least two but less than four years), yielding six sampling strata. Six pairs of matched CCAMPIS grantee and nongrantee institutions will then be selected randomly within each stratum for the first-phase sample.
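A minimal sketch of the Phase I draw, assuming a data frame with one row per matched pair and hypothetical control and level columns defining the six strata:

```python
import pandas as pd

def draw_phase1_pairs(pairs, pairs_per_stratum=6, seed=2006):
    """Randomly draw six matched pairs per stratum (36 pairs, or
    72 institutions). `pairs` holds one row per matched pair; the
    'control' and 'level' column names are hypothetical."""
    return (pairs
            .groupby(["control", "level"], group_keys=False)
            .apply(lambda s: s.sample(n=pairs_per_stratum,
                                      random_state=seed)))
```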
FIGURE B-1

STANDARDIZED DIFFERENCE (IN PERCENT) BY IPEDS COVARIATE, BEFORE AND AFTER MATCHING

Source: U.S. Department of Education, National Center for Education Statistics, IPEDS, data for the 2001–2002 and 2002–2003 academic years.
Statistical Power and Expected Precision. The degree of accuracy of estimates is illustrated through a statistical power analysis under the assumption that the respondents are a random sample of the population. We performed a prospective power analysis based on a fixed sample size, confidence level, and power of the test in order to determine the level of precision of the resulting estimates and the magnitude of the CCAMPIS effect that is detectable.
We used the following assumptions in the power analysis:
The study is designed to detect effects with a confidence level of 90 percent (corresponding to a type-I error rate of 10 percent) and power of 80 percent.
To maintain a reasonable level of precision for statistical analyses, the sample design includes 352 institutions that received CCAMPIS grants in 2001 (228 institutions) or 2002 (124 institutions). We plan to select 352 comparison institutions that match the CCAMPIS institutions (one-to-one matches), since a comparison with balanced sample sizes has more power than one with unbalanced sizes.
Nonresponse will exist; an estimated 85 percent of the institutions will respond to the survey.
Nonrespondents may have different characteristics than respondents who complete the surveys. Therefore, analyzing the data based only on completed cases may introduce bias. To account for nonresponse and reduce the bias resulting from missing data, we must implement nonresponse compensation procedures and use analysis weights that account for survey nonparticipation. A design effect (DEFF) captures the variance inflation resulting from variation in weights from nonresponse adjustments. It is reasonable to assume a small design effect: DEFF = 1.10.
The characteristic being measured is quantified as a population proportion of 50 percent.
Table B-3 presents results of the power analysis and the resulting precision level based on the above assumptions.
It is important to note that even though no sampling is involved in selecting the CCAMPIS grantee group of institutions (all 352 grantees from 2001 and 2002 are included in the study), the table presents calculated standard errors based on the assumption that nonresponse exists.6 In Table B-3, we treat the respondents within each group as a random sample from the 352 institutions and consider the number of institutions responding to the survey as the sample size for computing the standard errors. Furthermore, nonresponse adjustments made through weighting will result in a DEFF larger than 1 owing to the unequal weights resulting from nonresponse adjustment.7
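The within-group precision figures in Table B-3 follow directly from these assumptions. The short calculation below is a sketch of that arithmetic; its output matches the table to within rounding (the table appears to round some intermediate values).

```python
import math

DEFF = 1.10  # design effect from nonresponse weighting adjustments
P = 0.50     # conservative (variance-maximizing) population proportion
Z90 = 1.65   # critical value for a 90 percent confidence level

def precision(n):
    """Standard error, CV, and margin of error, all in percent,
    for a proportion near 50 percent with n responding institutions."""
    se = math.sqrt(DEFF * P * (1 - P) / n)
    return 100 * se, 100 * se / P, 100 * Z90 * se

for label, n in [("Full group", 299), ("75 percent subgroup", 224),
                 ("50 percent subgroup", 150)]:
    se, cv, moe = precision(n)
    print(f"{label}: SE = {se:.2f}, CV = {cv:.2f}, margin = {moe:.2f}")
# Prints SE 3.03/3.50/4.28, CV 6.07/7.01/8.56, margin 5.00/5.78/7.06,
# agreeing with Table B-3 to within rounding.
```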
TABLE B-3
SAMPLE SIZES AND MDD BETWEEN CCAMPIS GRANTEE AND SIMILAR NONGRANTEE INSTITUTIONS
Sample | Initial Sample Size | Response Rate (percent) | Target Number of Completes | Approximate Design Effecta | Standard Error | Coefficient of Variation (percent) | Margin of Errorb | MDD at 80% Power and 90% Confidence for Comparisons

Within-Group Descriptive Analyses
Grantees | 352 | 85 | 299 | 1.1 | 3.03 | 6.06 | 5.00 |
Nongrantees | 352 | 85 | 299 | 1.1 | 3.03 | 6.06 | 5.00 |
75 percent subgroup of institutions | | 85 | 224 | 1.1 | 3.50 | 7.00 | 5.78 |
50 percent subgroup of institutions | | 85 | 150 | 1.1 | 4.29 | 8.57 | 7.07 |

Comparison: Grantees versus Nongrantees
Full sample of institutions | | 85 | 299 | 1.1 | | | | 10.62
75 percent subgroup of institutions | | 85 | 224 | 1.1 | | | | 12.24
50 percent subgroup of institutions | | 85 | 150 | 1.1 | | | | 14.94

Total Sample | 704 | 85 | 598 | | | | |
aA design effect of 1.1 is used to account for the increase in standard error due to the weighting adjustment for nonresponse. Sample sizes in the table correspond to the overall (100 percent) population and to 75 percent and 50 percent domains of the population.
bThe margin of error (i.e., the half-width of the 90 percent confidence interval) for a proportion (p) near 0.50 is based on the binomial distribution, with sampling variance projected as Var(p) = p(1 - p)/n and margin of error = 1.65 × sqrt[Var(p)]. The MDD for a one-sided test of p1 - p2 = 0 with alpha = 0.10 and power of 80 percent is MDD = (z(alpha) + z(beta)) × sqrt{DEFF × [Var(p1)/n1 + Var(p2)/n2]}, where z(alpha) = 1.65 and z(beta) = 0.84.
The minimum detectable difference (MDD) in Table B-3 is the smallest difference between the CCAMPIS and non–CCAMPIS institutions that the study design can detect with 80 percent power at a 90 percent confidence level. For example, an MDD of 11 percentage points means that if 50 percent of the low-income student-parents in non–CCAMPIS grantee institutions use the on-campus child care program, then at least 61 percent of the low-income student-parents in CCAMPIS grantee institutions would need to use the on-campus child care program for the analysis to detect a statistically significant difference between CCAMPIS and non–CCAMPIS institutions, based on the 299 responding institutions (85 percent) in each group.
The prospective MDDs in Table B-3 were computed under the assumption that the proportion of students using child care equals 50 percent, which yields a conservative estimate of the standard error and hence a conservative MDD. For characteristics with proportions other than 50 percent, the MDDs may be smaller. Table B-4 presents the magnitudes of the MDDs for a range of CCAMPIS population proportions, computed with a confidence level of 90 percent, power of 80 percent, 299 respondents in each group, and DEFF = 1 (assuming no variability in the weights).
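The MDDs in Table B-4 can be reproduced from the formula in footnote b. Matching the published values requires using Var(p1) in both terms, which appears to be the convention the table adopts; the sketch below makes that assumption explicit.

```python
from math import sqrt
from scipy.stats import norm

Z_ALPHA = norm.ppf(0.95)  # one-sided alpha = 0.10 -> about 1.645
Z_BETA = norm.ppf(0.80)   # 80 percent power -> about 0.842

def mdd(p1, n=299, deff=1.0):
    """Minimum detectable difference per footnote b, with Var(p1)
    used for both groups (the convention that reproduces Table B-4)."""
    return (Z_ALPHA + Z_BETA) * sqrt(deff * 2 * p1 * (1 - p1) / n)

for p1 in (0.50, 0.70, 0.90):
    d = mdd(p1)
    print(f"p1 = {100 * p1:.0f}: MDD = {100 * d:.2f}, "
          f"p2 = {100 * (p1 + d):.2f}")
# p1 = 50: MDD = 10.17, p2 = 60.17
# p1 = 70: MDD = 9.32,  p2 = 79.32
# p1 = 90: MDD = 6.10,  p2 = 96.10
```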
Estimation and Variance Computation. The data in our analysis will be weighted to account for institution nonresponse. We will create a weight for each institution to be computed by using a standard weighting class method or a response propensity modeling method (Kalton and Maligalig 1991; Holt and Smith 1979; Oh and Scheuren 1983; Vartivarian and Little 2003).
Along with the weighted survey estimates, we will compute the standard errors of the estimates. Variance/standard error estimation will take into account the weighting adjustment process as well as the assumption that respondents are a random sample of the CCAMPIS/non–CCAMPIS population.
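As an illustration of the weighting class approach, the sketch below computes a nonresponse-adjusted analysis weight within hypothetical adjustment cells; a response propensity method would instead divide each respondent’s base weight by an estimated response probability.

```python
import pandas as pd

def weighting_class_adjust(frame, cells=("control", "level")):
    """Weighting class nonresponse adjustment (sketch only).

    Within each adjustment cell, respondents' base weights are
    inflated by the ratio of the cell's total base weight to its
    respondents' base weight, so respondents also represent the
    cell's nonrespondents. `frame` needs 'base_weight' and a 0/1
    'responded' flag; the cell variables are hypothetical."""
    def adjust(cell):
        cell = cell.copy()
        factor = (cell["base_weight"].sum() /
                  cell.loc[cell["responded"] == 1, "base_weight"].sum())
        cell["analysis_weight"] = (cell["base_weight"] * factor *
                                   cell["responded"])
        return cell
    return frame.groupby(list(cells), group_keys=False).apply(adjust)
```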
TABLE B-4
POPULATION PROPORTION IN CCAMPIS GROUP, MDD, AND POPULATION PROPORTION IN THE COMPARISON GROUP
p1 (CCAMPIS) | MDD | p2 (Comparison Group)
50 | 10.17 | 60.17
55 | 10.12 | 65.12
60 | 9.96 | 69.96
65 | 9.70 | 74.70
70 | 9.32 | 79.32
75 | 8.81 | 83.81
80 | 8.13 | 88.13
85 | 7.26 | 92.26
90 | 6.10 | 96.10
3. Methods To Maximize Response Rates

Web-based data collection will help maximize response rates by allowing respondents to complete the survey at their convenience. Further, the survey’s integrated skips and automation features will allow respondents to move seamlessly from question to question without spending time reading and interpreting skip instructions as required on a standard mail survey. In addition, the Web-based survey will have a “save” option that permits respondents to start the survey and then complete it at a later time, minimizing the chance of mid-survey break-offs.
Not all respondents will have Internet access, and some with access may be uncomfortable responding to a Web-based survey. To maximize participation from these individuals and reduce nonresponse bias that may result from their nonparticipation, we will offer opportunities to complete a standard mail survey or telephone survey.
We will use standard techniques to reduce nonresponse by providing evidence of legitimacy in an advance letter, FAQs, and reminder prompts via emails, letters, and telephone calls as appropriate. We will also offer a project-specific MPR email address and toll-free telephone number so that participants with questions or concerns about participation may contact us.
Beyond the standard techniques described above, we will take additional steps to maximize response rates in this study. Because pretest respondents had difficulty completing certain sections of the survey, we created tools to facilitate respondents’ collection of these data items. We have also planned increased follow-up efforts to prompt study participants to complete the survey. These tools and follow-up efforts are described in detail in Section B4, Tests Of Procedures Or Methods.
Despite our best efforts at minimizing nonresponse, some institutions (both CCAMPIS grantees and comparison institutions) will inevitably fail to participate. We have planned a statistical approach to deal with nonresponse, as described above: the adjustment process will implement a standard weighting class method or a response propensity model method.
4. Tests Of Procedures Or Methods

After MPR and ED thoroughly tested all aspects of the Web-based survey, MPR pretested the survey with one respondent at each of nine institutions (including both CCAMPIS grantees and nongrantees). MPR asked the pretest respondents to comment on the following: access to requested data across years, survey length, ability of a single point of contact to complete the survey, clarity of instructions, relevance of questions, question wording and applicability of response categories, missing items, general survey flow and layout, and ease of accessing and moving through the Web-based survey. Based on pretest respondents’ recommendations or on their survey responses, we implemented minor wording, item, and screen changes (see Appendix F, Pretest Report, pages 6-7).
The pretest also shed light on the difficulty respondents had in determining which students using child care services were Pell Grant recipients. (This difficulty applies to survey sections C, E, and F; child care program directors were able to complete the other on-campus sections.) In the actual data collection, child care program directors at non–CCAMPIS institutions may need assistance from another institutional office to identify the Pell Grant recipients. Child care program directors at CCAMPIS institutions, however, are likely to be able to identify such students themselves, as they had to do so for the performance reports that CCAMPIS grantees in the 2001 and 2002 cohorts submitted. (The CCAMPIS pretest respondents, on the other hand, were drawn from the 2005 cohort of grantees; that cohort had not submitted a performance report before the pretest was conducted and thus had not yet identified Pell Grant recipients.)
In general, pretest respondents who did not know which students were Pell Grant recipients did know the institutional office from which to request the data. They said they did not request assistance from that office because its staff would not respond within the couple of weeks allotted for the pretest and because the summer, when the pretest was conducted, was a particularly bad time to submit the request. They contended that they would have made the request for the actual survey.
We will conduct the actual survey during the fall semester and will allow 10 weeks for data collection. We also propose several revisions, listed below, to encourage respondents to request the needed data from another office; these center on having respondents prepare a list of students using child care services and submit it to the appropriate institutional office.
Adding instructions explicitly asking respondents first to prepare a list of the students using their child care services. (See the “Steps for Obtaining Data” letter in Appendix G.)
Providing a form (in a version that can be completed electronically or printed and filled out manually) for the child care program directors to record the names of students using the institution’s child care services in each academic year from 2001–2002 through 2006–2007. (See the Student List Worksheet in Appendix G.)
Requesting explicitly that the child care program directors send the list to the appropriate office to determine which students are Pell Grant recipients and to obtain persistence and graduation data for those students. (See the “Steps for Obtaining Data” letter in Appendix G.)
Directing respondents to complete the survey after they obtain this information. (See the “Steps for Obtaining Data” letter in Appendix G.)
Providing respondents with a list of data items that they may need to request from another institutional office (for example, they may need to request demographic data from a financial aid office or an office of institutional research). This form will facilitate the process for child care program directors to gather data from outside their office. Child care program directors would forward the two-page form to that office, along with their list of students. (See the Request for Data Assistance from Another Institutional Office and the Data List Worksheet in Appendix G.)
Prompting staff in the research (or other) office to cooperate. In informal conversations with research office staff at a few institutions, staff indicated a greater willingness to provide data than the child care program directors had predicted. Clearly, though, some research office staff will be less cooperative. Although our experience suggests that research office staff will be more responsive to requests from other institutional staff than from an independent research firm, more persistence in requesting the data may be needed than the child care program director is willing or able to provide. If follow-up telephone prompts to child care program directors indicate that respondents are waiting for another office to provide the data, MPR will offer to telephone that office directly.
The above procedures should increase item response rates for the questions on the Pell Grant recipients. However, we recommend conducting the survey this fall with about 10 percent of the sample. That will allow us to more thoroughly assess the effectiveness of these revisions before going full-scale with the remainder of the sample in the spring of 2007.
5. Individuals Consulted On And Responsible For Statistical Design

Amang Sukasih
Mathematica Policy Research, Inc.
Washington, DC
202.484.3286
Sameena Salvucci
Mathematica Policy Research, Inc.
Washington, DC
202.484.4215
Jill Constantine
Mathematica Policy Research, Inc.
Washington, DC
609.716.4391
1 Although Table B-1 reports 222 grants in the 2001 cohort and 122 grants in the 2002 cohort, the sample will contain 228 institutions from the 2001 cohort and 124 institutions from the 2002 cohort, because three CCAMPIS grants were awarded to community college districts that encompassed from three to five individual institutions. In this study, each community college district grantee will be represented by its individual institutions: each institution covered by these three grants will be matched individually (the matching process is described below) and asked to complete a survey.
2 Selection bias refers to differences between the two groups (in this case, CCAMPIS and non–CCAMPIS institutions) due to unobserved covariates.
3 The telephone exchange level can be used to refer to geographic areas served by a particular telephone switch or, more narrowly, to the first three digits of the local number. Among characteristics available at the exchange level are percentage of population by race, percentage of population by age group, percentage of households by income group, median household income, median home value, median rent, median years of education, percentage of college graduates, percentage of owner-occupied households, percentage of renter-/other occupied households, total number of households, total population, Nielsen county size, and total number of listed households/banks.
4 For dichotomous or categorical variables, summary statistics can be computed as proportions. For continuous variables, summary statistics can be computed as means or medians.
5 The difference is defined as the statistic of the CCAMPIS group minus the statistic of the non–CCAMPIS group. The standardized difference is computed by dividing this difference by the square root of its variance, so that the value is scale-free and comparable across variables. A positive value means the CCAMPIS group statistic is larger than that of the non–CCAMPIS group; a negative value means it is smaller.
6 With the use of a census rather than a sample survey, no sampling/standard error is involved because no sampling takes place. In this case, an analysis usually compares outcomes directly across groups without performing statistical hypothesis testing.
7 A computation that assumed the design effect equals 1 (i.e., no weighting adjustment was made) resulted in an MDD of 10.13 percent based on a sample size of 299 respondents in both groups.