YouthBuild Evaluation Participant Follow-Up Survey, Extension
OMB Control No.: 1205-0503
October 2015
The U.S. Department of Labor (DOL) has contracted with MDRC [and subcontractors Mathematica Policy Research (Mathematica) and Social Policy Research Associates (SPR)] to conduct a rigorous evaluation of the 2011 YouthBuild program funded by DOL, with initial support from the Corporation for National and Community Service (CNCS). The evaluation includes an implementation component, an impact component, and a cost-effectiveness component. This data collection request seeks an extension of clearance for the 48-month participant follow-up survey, with minor revisions, which will be administered as part of the impact component of the evaluation. The research team enrolled approximately 3,930 youth into the study, with approximately 70 percent of the sample assigned to the treatment group, which is eligible to receive YouthBuild services, and 30 percent assigned to the control group, which is not. All study participants in 75 YouthBuild programs, selected from the broader group of grantees that received funding from either DOL or CNCS in FY 2011, are participating in the impact component of the evaluation. A series of follow-up surveys was cleared previously (see ICR Reference #201208-1205-007); however, data collection for the 48-month follow-up survey will continue beyond the date on which that approval expires. We are requesting an extension of that approval. A one-time sample of 3,436 youth was randomly selected from the full sample of 3,930 youth at the 75 grantees for the fielding of the follow-up surveys. This sample of 3,436 youth was asked to participate in each of the three follow-up surveys.
This section begins by describing the procedures that were used to select sites for the impact component of the study and individuals for the follow-up surveys. The procedures for the study entail selecting sites for the evaluation, enrolling eligible and interested youth at these sites into the study, randomly assigning these youth to either a treatment or control group, and randomly selecting a subset of the full study sample within these sites for the fielding of the surveys. Next, we describe the procedures that were used to collect the survey data, paying particular attention to specific methods to ensure high response rates. This section closes by identifying the individuals who are providing the project team with expertise on the statistical methods that will be used to conduct the empirical analyses.
The sample for the follow-up survey includes 3,436 study participants across 75 sites. Youth who consented to participate in the study were randomly assigned to one of two groups: the treatment or program group, whose members were invited to participate in YouthBuild as it currently exists, or the control group, whose members could not participate in YouthBuild but could access other services in the community on their own.1
In May 2011, DOL awarded grants to 74 programs. After dropping three programs from the selection universe (representing less than five percent of expected youth served), the evaluation team randomly selected 60 programs for participation in the impact component of the evaluation, using probability-proportional-to-size sampling. (Two programs were subsequently dropped from the sample after the team determined that they were not suitable for the study.) Under this method, each program had a probability of selection proportional to its expected enrollment over a given program year, giving each YouthBuild slot (or youth served) an equal chance of being selected for the evaluation; the resulting sample of youth who enrolled in the study should therefore be representative of youth served by these programs. Once a program was selected for the evaluation, all youth who enrolled at that program between August 2011 and January 2013 were included in the research sample.

In deciding on the total number of DOL-funded programs to include in the impact component of the evaluation, we attempted to balance the objectives of 1) maximizing the representativeness of the sample and the statistical power of the impact analysis and 2) ensuring high-quality implementation of program enrollment and random assignment procedures. Maximizing the representativeness of the sample and the sample size would call for including all grantees in the study. However, substantial resources are required to work with a study site to: a) secure staff buy-in and cooperation with the study; b) train staff on intake and random assignment procedures; and c) monitor random assignment. Given a fixed budget, the team determined that 60 DOL-funded programs should be selected for the evaluation; the quality of the enrollment process (and, potentially, the integrity of random assignment) would suffer if more than 60 sites were included. Given the expected number of youth served per program, 60 programs were deemed adequate to generate a sample size that would provide reasonable minimum detectable effects on key outcomes of interest.
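To illustrate the probability-proportional-to-size selection described above, the sketch below implements systematic PPS sampling with a random start. It is an illustrative example under stated assumptions, not the evaluation team's actual selection program; the program names and enrollment figures are hypothetical.

```python
import random

# Hypothetical frame: each program and its expected annual enrollment (slots).
frame = [("Program A", 60), ("Program B", 25), ("Program C", 40), ("Program D", 80)]

def pps_systematic_sample(frame, n_select, seed=None):
    """Select n_select programs with probability proportional to expected
    enrollment, using systematic PPS sampling with a random start."""
    rng = random.Random(seed)
    total_slots = sum(size for _, size in frame)
    interval = total_slots / n_select            # sampling interval, in "slots"
    start = rng.uniform(0, interval)             # random start in the first interval
    hits = iter(start + k * interval for k in range(n_select))

    selected, cumulative = [], 0.0
    hit = next(hits, None)
    for name, size in frame:
        cumulative += size
        while hit is not None and hit < cumulative:
            selected.append(name)
            hit = next(hits, None)
    return selected

print(pps_systematic_sample(frame, n_select=2, seed=7))
```

Because each program's selection probability is proportional to its expected enrollment, every slot in the frame has the same overall chance of falling into the sample, which is the property described in the paragraph above.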
The universe of programs that received CNCS funding but not DOL funding in 2011 comprises 40 programs, 23 of which received grants of $95,000 or more from CNCS. After further review of their enrollment plans for the subsequent year, 17 of these programs were selected for the impact component of the evaluation.
Recruitment and enrollment into the study began in August 2011 and was completed in January 2013. Programs recruited and screened youth using their regular procedures; however, the recruitment materials did not guarantee youth a spot in the program. Program staff showed applicants a video prepared by the evaluation team that explained the study. The video and site staff explained that applicants were required to participate in the study in order to have a chance to enroll in YouthBuild. Staff were provided with information to answer any questions youth had after viewing the video presentation and before providing their consent to participate.
After youth agreed to participate in the study, as discussed below, baseline data were collected from all youth before random assignment (see OMB Control Number #1205-0464).
Youth assigned to the program group were invited to start the program, which may have included a pre-orientation period that youth had to complete successfully in order to enroll in the formal YouthBuild program. Youth assigned to the control group were informed of their status and given a list of alternative services in the community.
In these 75 sites, we enrolled 3,930 youth into the study, with approximately 70 percent of applicants assigned to the treatment group and 30 percent of applicants assigned to the control group. From the full sample of youth, we randomly selected a subset of 3,436 youth as the survey sample.
Program impacts on a range of outcomes will be estimated using a basic impact model:
Y_i = α + β·P_i + δ·X_i + ε_i
where: Y_i = the outcome measure for sample member i; P_i = one for program group members and zero for control group members; X_i = a set of background characteristics for sample member i; ε_i = a random error term for sample member i; β = the estimate of the impact of the program on the average value of the outcome; α = the intercept of the regression; and δ = the set of regression coefficients for the background characteristics.
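A minimal sketch of how an impact model of this form could be estimated is shown below, using ordinary least squares with a treatment indicator and baseline covariates. The data file and column names (for example, `employed`, `program_group`, `age`, `female`) are hypothetical placeholders, not the evaluation's actual variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per survey respondent.
df = pd.read_csv("youthbuild_followup.csv")  # placeholder file name

# Y_i = alpha + beta*P_i + delta*X_i + e_i
# 'program_group' is 1 for program group members and 0 for control group members;
# 'age' and 'female' stand in for the vector of baseline characteristics X_i.
model = smf.ols("employed ~ program_group + age + female", data=df).fit(cov_type="HC1")

print(model.params["program_group"])   # beta: estimated program impact
print(model.bse["program_group"])      # robust standard error of the impact estimate
```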
Since the DOL-funded programs were selected using probability-proportional-to-size sampling, each youth served by these programs had an equal chance of being selected for the study. For this reason, the (unweighted) findings will provide an estimate of the effect of the program on the average youth served by 2011 DOL-funded sites. Similarly, findings from the CNCS-only-funded sites will be representative of the effects on youth served by sites that received at least $95,000 in CNCS funding in 2011 but no DOL funding. In addition, findings from all 75 sites will be representative of the larger universe from which they were drawn. However, we will not attempt to generalize the findings to youth served by programs outside our sampling universe, that is, to all YouthBuild programs.
Table B.1 presents the Minimum Detectable Effects (MDEs) for several key outcomes, that is, the smallest program impacts that could reliably be detected for a given set of outcomes and sample size. MDEs are shown for the expected respondent sample of 2,749, assuming an 80 percent response rate from a fielded sample of 3,436.
The MDE for having a high school diploma or GED is 4.6 percentage points. The MDE for a subgroup comparison, assuming the subgroup comprises half of the sample, is 6.5 percentage points. These effects are policy relevant and well below the effects on educational attainment found from several other evaluations of programs for youth. For example, impacts on GED or high school diploma attainment were 15 percentage points and 25 percentage points in the Job Corps and ChalleNGe programs, respectively.
MDEs for short-term employment rates are shown in the second column. These MDEs are similarly 4.6 percentage points for a full sample comparison and 6.5 percentage points for a subgroup comparison. Several evaluations of programs for disadvantaged youth have found substantial increases in short-term employment rates, although these gains tended to diminish in size over time. The Job Corps evaluation, for example, found impacts on annual employment rates of 10-to-16 percentage points early in the follow-up period using Social Security Administration records. However, Job Corps’ effects on employment when measured with survey data were substantially smaller, ranging from 3-to-5 percentage points per quarter. Other effects on employment include an impact of 4.9 percentage points after 21 months in ChalleNGe and an impact of 9.3 percentage points after 30 months for women in the high fidelity CET replication sites.
Table B.1. Minimum Detectable Effects for Key Survey Outcomes
| Survey sample | Has high school diploma or GED | Employed since random assignment | Earnings |
| --- | --- | --- | --- |
| Full sample (2,749) | 0.049 | 0.049 | $1,080 |
| Key subgroup (1,360) | 0.070 | 0.070 | $1,535 |
Notes: Assumes an 80 percent response rate to the follow-up survey and a random assignment ratio of 70 program to 30 control.
Average rates for high school diploma/GED receipt (45 percent) and employment (52 percent) are based on data from the YouthBuild Youth Offender Grantees. For annual earnings, the average ($11,000) and standard deviation ($11,000) are based on data for youth with a high school diploma or GED from the CET replication study. Calculations assume that the R-squared for each impact equation is .10.
MDEs for earnings are shown in the final column. With the survey sample size, we would be able to detect as statistically significant an impact of at least $1,080 on annual earnings during a given follow-up year. Assuming a control group average of $11,000 in annual earnings, for example, this impact would represent a 10 percent increase in earnings. MDEs for a subgroup comparison are larger, at $1,535, or a 14 percent increase. These effects are large, even relative to the significant effects generated by the Job Corps program. Career Academies, however, did lead to substantially larger earnings gains than these, particularly for the men in the sample.
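The figures in Table B.1 can be approximated with the standard minimum detectable effect formula. The sketch below is illustrative only; it assumes a one-tailed test at the 5 percent significance level with 80 percent power (a multiplier of roughly 2.49), the 70/30 random assignment ratio, and the means, standard deviations, and R-squared value listed in the table notes.

```python
from math import sqrt
from scipy.stats import norm

def mde(n, variance, p_treat=0.70, r_squared=0.10, alpha=0.05, power=0.80):
    """Approximate minimum detectable effect for individual random assignment,
    assuming a one-tailed test at level alpha with the stated power."""
    multiplier = norm.ppf(1 - alpha) + norm.ppf(power)   # about 1.645 + 0.84 = 2.49
    se = sqrt(variance * (1 - r_squared) / (n * p_treat * (1 - p_treat)))
    return multiplier * se

var_diploma = 0.45 * (1 - 0.45)    # binary outcome with a 45 percent mean
var_earnings = 11_000 ** 2         # annual earnings with a standard deviation of $11,000

for n in (2749, 1360):             # full survey sample and key subgroup
    print(n, round(mde(n, var_diploma), 3), round(mde(n, var_earnings)))
# Prints roughly 0.049 and $1,080 for the full sample and 0.069 and $1,535 for the
# subgroup, close to the values reported in Table B.1.
```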
The follow-up surveys are the primary source of outcome data to assess program impacts. These data will inform DOL and CNCS on how YouthBuild programs affect program participants’ educational attainment, postsecondary planning, employment, earnings, delinquency and involvement with the criminal justice system, and youth social and emotional development.
For the YouthBuild evaluation, we use a multi-mode survey that begins on the web and then moves to more intensive methods, Computer Assisted Telephone Interviewing (CATI) and in-person locating, as part of our non-response follow-up strategy.

To encourage youth to complete the survey early and online, we conducted an experiment during the 12-month follow-up survey that informed our data collection strategy for the subsequent rounds (see ICR Reference #201202-1205-002). Survey sample members were randomly assigned to one of two incentive conditions: 1) the “early bird” condition and 2) the incentive control condition. The “early bird” group was offered a higher incentive of $40 for completing the survey online within the first four weeks of the field period, or $25 for completing the survey thereafter regardless of mode of completion. The incentive control group (which, like the “early bird” group, included members of both the YouthBuild evaluation’s treatment and control groups) received $25 regardless of when they completed the survey. We found that the $40 incentive condition was associated with greater odds of completing early, reduced costs due to fewer cases being sent to the phones and the field, and potentially greater representativeness among respondents (although the latter findings were not statistically significant). Specifically, those who were offered the $40 incentive had 38 percent higher odds of completing their survey within the first four weeks than those who were offered the $25 incentive. This finding remained significant after controlling for a host of demographic characteristics associated with non-response, such as gender, age, and race (OR = 1.38, p < .01); an illustrative sketch of this analysis appears below. As a result, we continued to offer the “early bird special” during the 30-month follow-up survey data collection. We shared these findings, along with other findings from the 12-month follow-up survey, with DOL shortly after completion of the survey (Appendix J).

For those sample members who do not complete the survey on the web, evaluation team members attempt to complete the survey over the phone using a CATI survey. For those sample members who cannot be located by telephone, evaluation team members use custom database searches and telephone contacts to locate sample members. Field locators use the last known address of residence and field locating techniques to attempt to find sample members who cannot be found using electronic locating methods. Field locators are equipped with cell phones and are able to dial into Mathematica’s Survey Operations Center before giving the phone to the sample member to complete the CATI survey. We expect that the 48-month follow-up survey will take approximately 35 minutes to complete.
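The early-completion analysis described above could be carried out with a logistic regression along the following lines. This is an illustrative sketch, not the evaluation team's actual analysis code; the data file and column names (`completed_early`, `early_bird`, `female`, `age`, `race`) are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per survey sample member in the incentive experiment.
df = pd.read_csv("incentive_experiment.csv")  # placeholder file name

# 'completed_early' = 1 if the survey was completed within the first four weeks;
# 'early_bird' = 1 if the member was offered the $40 early completion incentive.
logit = smf.logit("completed_early ~ early_bird + female + age + C(race)", data=df).fit()

odds_ratio = np.exp(logit.params["early_bird"])
print(round(odds_ratio, 2))   # an odds ratio of about 1.38 would correspond to
                              # 38 percent higher odds of completing early
```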
Earlier Paperwork Reduction Act clearance packages requested Office of Management and Budget (OMB) clearance for a site selection questionnaire, the baseline forms and participant service data, the grantee survey, site visit protocols, the cost data collection, and the follow-up surveys administered to study participants. This OMB package pertains only to an extension of the existing data collection, which was approved on December 18, 2012 (see ICR Reference #201208-1205-007). While both the 12- and 30-month follow-up surveys will be completed under the existing approval, some of the 48-month data collection will extend beyond the end of 2015. Minor modifications were made to the 48-month instrument. Those changes were tested, and a summary of the pretest findings can be found in Appendix G1 and Appendix G2.
We expect to achieve an 80 percent response rate for the 48-month follow-up survey. Table B.2 shows the actual and expected response rates by mode for each follow-up survey. The higher 12-month incentive payment had a significant effect on response rates and costs, as the majority of completed surveys were obtained on the web and over the telephone. Since the “early bird” incentive proved effective, we expect to see a larger proportion of cases completed on the web over time. This table was updated as part of our report on the findings from the 12-month follow-up survey’s incentive experiment (Appendix J). We obtained an 81 percent response rate for the 12-month follow-up survey. In the current 30-month round of follow-up, within the oldest nine cohorts (n = 1,113), we have an overall completion rate of 80.0 percent as of April 22, 2015. The completion rate for the treatment cases in those cohorts is 80.7 percent, while the completion rate for the control group is 78.8 percent.
Table B.2. Actual and Anticipated Response Rates by Mode of Administration
| Survey Round | Web (%) | Telephone (%) | Field/Cell Phone (%) | Total Completion Rate (%) | Refusals (%) | Unlocatable (%) | Total Nonresponse (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 12-Month Survey (actual) | 27.9 | 36.7 | 16.4 | 81.02 | 5.97 | 13.01 | 19.98 |
| 30-Month Survey (anticipated) | 25 | 30 | 25 | 80 | 5 | 15 | 20 |
| 48-Month Survey (anticipated) | 25 | 25 | 30 | 80 | 5 | 15 | 20 |
Note: While our experience to date with the 30-month follow-up survey suggests slightly higher than anticipated completion rates on the web, our performance is based on a small number of cohorts that have been finalized. The majority of cases are still being worked in the field. In the 48-month survey, we expect locating to be the main source of non-response, which will result in a higher percentage of cases being sent to and completed in the field.
The YouthBuild data collection plan incorporates strategies that have proven successful in our other random assignment evaluations of disadvantaged youth. The key components of the plan include:
Selecting data collection modes that maximize data quality and are appropriate for the population under study;
Designing questionnaires that are easily understood by people with low-literacy levels and/or limited English-language proficiency;
Implementing innovative outreach and locating strategies through social media outlets such as Facebook and Twitter; and
Achieving high response rates efficiently and cost effectively by identifying and overcoming possible barriers to participation.2
As mentioned earlier, we use a multi-mode approach that begins on the web and then moves to CATI and in-person locating. The advantage of this multi-mode approach is that it allows participation by people who do not have listed telephone numbers, telephone land lines, or access to a computer or the internet; it also accommodates those who have a preference for self-administration. The use of field locators instead of in-person interviewing minimizes possible mode effects by limiting the number of modes to two: (1) self-administered via the web, and (2) interviewer-administered over the telephone. We gave careful consideration to offering a mail option; however, youth’s high mobility rates and increasing use of the internet suggest that a mail option would not be cost-effective. 3,4 We provide a hardcopy questionnaire to any sample member who requests one but, increasingly, studies of youth find a preference for completing surveys online.5,6,7
The questionnaire itself is designed to be accessible at the sixth-grade reading level. This ensures that the questionnaire is appropriate for respondents with low literacy levels and/or limited English proficiency.
Throughout the data collection process, we implement a variety of outreach strategies to encourage respondent participation and facilitate locating. A key component of this outreach strategy is the development of a YouthBuild Evaluation presence on social media outlets, such as Facebook and Twitter. Because youth are more likely to have a stable electronic presence than a stable physical address, we use social media platforms to communicate information about the study, including the timing of follow-up surveys, and to assist in locating efforts as necessary. For those participants for whom we have a mailing address, regardless of whether we can also message them through social media, we send a participant advance letter (Appendix B) to introduce each round of data collection and send a round of interim letters (Appendix C).
Mathematica begins making phone calls approximately four weeks after the launch of the web survey. The web survey remains active during this time. Mathematica’s sample management system allows for concurrent multi-mode administrations and prevents surveys from being completed in more than one mode. Bilingual interviewers conduct interviews in languages other than English.
Sample members are assigned to field locators if they do not have a published telephone number, cannot be contacted after multiple attempts, or are reluctant to complete the survey. Field locators are informed of previous locating efforts, including efforts conducted through social media outlets, as well as interviewing attempts. When field locators find a sample member, they work to gain cooperation, offer the sample member the use of a cell phone to call into Mathematica’s Survey Operations Center, and wait to ensure the completion of the interview.
We use three key steps to ensure the quality of the survey data: 1) a systematic approach to developing the instrument; 2) in-depth interviewer training; and 3) data reviews, interviewer monitoring, and field validations. The follow-up surveys include questions and scales with known psychometric properties used in other national studies with similar populations. We use Computer Assisted Interviewing (CAI) to: manage respondents’ interaction across modes so that respondents cannot complete the questionnaire on both the web and through CATI; ensure that respondents can exit the survey in either mode and return later to the last unanswered question, even if they decide to complete it in a different mode; integrate the information from both data collection modes into a single database; and build data validation checks into the survey to maintain consistency.
We conduct in-depth training with select telephone interviewers and locators who have the best track records on prior surveys in gaining cooperation and collecting data from youth or vulnerable populations. The training includes identifying reasons for non-response and strategies for gaining cooperation from those who have decided not to participate by web. We train interviewers, using role-playing scenarios, to address the most common reasons youth do not respond to surveys. A key component of our training focuses on gaining cooperation from control group members, who may feel less inclined to participate in a YouthBuild-related study because they were not invited to participate in the program itself.
Data reviews are conducted throughout the field period to assess the quality of the data. Mathematica uses several standard techniques to monitor the quality of data captured and the performance of both the web and CATI instruments. We review frequencies and cross-tabulations for inconsistencies a few days after the start of data collection, as well as throughout the data collection period. It is standard Mathematica practice to monitor ten percent of the hours each interviewer spends contacting sample members and administering interviews, ensuring that interviewer performance and survey administration are carefully monitored throughout data collection. Interviewers are trained to enter comments when respondents provide responses that do not match any of the response categories or when respondents insist on providing out-of-range responses. Ten percent of the field-generated completes are validated through a combination of mail and telephone validation techniques to ensure that the proper sample member was reached, the locator behaved professionally, and the date of the interview was reported correctly.
The follow-up questionnaire was pretested, as discussed in section 4 below.
Finally, we will conduct a response analysis for the survey prior to using the data to estimate program impacts. Though we anticipate reaching an 80 percent response rate using the methods noted above, we use paradata to monitor completion rates for subgroups defined by several observable characteristics, such as treatment status and gender. We can use this information to target our resources to minimize potential nonresponse among subgroups as needed. Following the conclusion of data collection, we will: 1) compare respondents to non-respondents on a range of background characteristics; 2) compare program group respondents with control group respondents on a range of background characteristics; and 3) compare impacts on selected outcomes that are available from administrative records data, such as quarterly employment and earnings, for respondents versus non-respondents. If these analyses suggest that the findings from the respondent sample cannot be generalized to the full sample, we will consider weighting (using the inverse predicted probability of response) or multiple imputation. However, these adjustment methods are not a complete fix, since both assume that respondents and non-respondents are similar on unobservable characteristics. For this reason, results from this adjustment will be presented with the appropriate caveats.
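If the response analysis suggests that weighting is needed, the inverse-probability-of-response adjustment mentioned above could take roughly the following form. The sketch assumes a 0/1 respondent flag and a handful of baseline covariates with hypothetical names; it is not the evaluation's actual weighting program.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: full fielded survey sample with baseline characteristics
# and a 0/1 flag indicating whether the member responded to the 48-month survey.
sample = pd.read_csv("fielded_sample.csv")  # placeholder file name

# Model the probability of response as a function of treatment status and
# baseline characteristics observed for respondents and nonrespondents alike.
response_model = smf.logit("responded ~ program_group + age + female + has_diploma",
                           data=sample).fit()
sample["p_response"] = response_model.predict(sample)

# Respondents are weighted by the inverse of their predicted response probability,
# so that the weighted respondent sample resembles the full fielded sample.
respondents = sample[sample["responded"] == 1].copy()
respondents["nr_weight"] = 1.0 / respondents["p_response"]

# These weights would then be applied when estimating impacts from survey outcomes.
print(respondents["nr_weight"].describe())
```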
Pretesting surveys is vital to the integrity of data collection. We reviewed previously used questions and developed new questions for the evaluation according to the following guidelines:
Questions will be worded simply, clearly, and briefly, as well as in an unbiased manner, so that respondents can readily understand key terms and concepts.
Questions will request information that can reasonably be expected of youth.
Question response categories will be appropriate, mutually exclusive, and reasonably exhaustive, given the intent of the questions.
Questions will be accompanied by clear, concise instructions and probes so that respondents will know exactly what is expected of them.
For the YouthBuild population, all questions will be easily understood by someone with a sixth-grade reading level.
Prior to formal pretesting, the evaluation team completed internal testing of the instrument to estimate the amount of time it will take respondents to complete each follow-up survey. We found that the survey took approximately 35 minutes when averaged across four internal tests.
During formal pretesting, we pretested the 48-month follow-up survey with nine respondents, some of whom were participating in YouthBuild programs that are not in the study and others who were similar to the YouthBuild target population but not participating in the program. Cognitive interviews, administered as either think-aloud protocols or retrospective protocols, were conducted as part of our pretest efforts. We pretested in each survey mode by having five participants complete a self-administered version of the questionnaire and four complete an interviewer-administered version. Each interview included a respondent debriefing, administered by senior project staff, that assessed respondent comprehension, clarity of instructions, question flow, and skip logic. We also conducted debriefings with interviewers to assess the survey timing and ease of administration. Detailed findings from the cognitive interview pretest are included in Appendix G.
In preparation for the 48-month survey for the YouthBuild Evaluation, Mathematica Policy Research conducted a pretest of a series of questions that aim to capture attributes generally associated with one’s character. The three main objectives of this round of pretesting were to (1) assess the consistency and accuracy with which participants interpreted the revised personal attributes, (2) determine whether an open-ended version of the question was a viable alternative to the closed-ended question, and (3) assess any differences in results when testing the series with non-YouthBuild participants. To address concerns that the original pretests did not capture responses typical of non-YouthBuild participants, we conducted this round of pretesting with students from two adult literacy programs in Chicago. These participants were in the same age range as respondents in our survey and self-identified as young adults. We had six female and three male participants. Based on the findings from the testing, we revised the questions in several ways. The final version is included in the 48-month survey. Detailed findings from this pretest are included in Appendix G2.
In addition, we pretested three new series of questions that we propose should be included in the 48-month survey for the YouthBuild Evaluation. The first series aims to capture attributes generally associated with one’s character. The other two series measure respondents’ activities surrounding civic engagement, leadership, and their sense of responsibility to others, such as their family or people in their community.
The pretest was conducted in two rounds utilizing a mixed-method approach. All pretest interviews were conducted in person. The first round included a card-sort activity with debrief for the personal attributes question and cognitive interviews for the activities questions. For the second round, youth completed either interviewer-administered or self-administered questionnaires and then participated in a cognitive interview using a retrospective protocol for the entire series of questions.
There were four main objectives of the pretest: (1) to assess the consistency and accuracy with which respondents interpreted abstract concepts such as personal attributes; (2) to identify questions that seemed redundant with items already included on the survey; (3) to assess the best placement for the new questions in the overall survey; and (4) to assess the additional burden the new questions would place on respondents. Pretests were conducted in person with a total of 18 former YouthBuild participants from YouthBuild Philadelphia and Great Falls YouthBuild in Paterson, NJ. Participants were given a $25 gift card after completion of the interviews. Trained Mathematica staff conducted all pretest interviews. The rationale for the inclusion of these new questions is included in Appendix K, and the pretest findings are included in Appendix G1.
There were no consultations on the statistical aspects of the design for the youth follow-up surveys.
Contact information for key individuals responsible for collecting and analyzing the data:
Lisa Schwartz, Task Leader for the Surveys
Mathematica Policy Research
P.O. Box 2393
Princeton, NJ 08543
lschwartz@mathematica-mpr.com
609-945-3386
Cynthia Miller, Project Director and Task Leader for the Impact Component
MDRC
16 East 34th Street
New York, NY 10016
Cynthia.miller@mdrc.org
212-532-3200
1 The embargo period for this study is two years from the random assignment date. After the embargo period, control group members can receive YouthBuild services if deemed eligible by the program.
2 These procedures have proven highly effective in other Mathematica studies. For example, in an evaluation of Early Head Start programs that serve low-income pregnant women and children aged 0-3, we achieved an 89 percent response rate using web surveys, with telephone and mail follow-up over a five-month period. Mathematica also recently conducted the College Student Attrition Project (CSAP) for the Mellon Foundation, on which over 80 percent of the completed surveys were done online. While we do not anticipate as high a web-completion rate for the YouthBuild evaluation (the YouthBuild sample is more disadvantaged than the CSAP population), our approach emphasizes communicating with youth through the media that they prefer. In addition, we are offering the survey in multiple modes, so while our web completion rate is expected to be lower than what we achieved on CSAP, our combined modes should result in an 80 percent response rate.
3 The Census Bureau estimates that about 14 percent of the overall population moves within a one-year period. However, the highest mobility rates are found among young adults in their twenties. Thirty percent of young adults between the ages of 20 and 24 and 28 percent of those between the ages of 25 and 29 moved between 2004 and 2005, the most recent years for which these data are available (http://www.census.gov/population/www/pop-profile/files/dynamic/Mobility.pdf). For the YouthBuild evaluation, this means that approximately one-third of our sample is expected to move between the time of random assignment and the first follow-up survey, and a higher percentage is likely to relocate prior to the third follow-up survey.
4 Lenhart, Amanda, Kristen Purcell, Aaron Smith, and Kathryn Zickuhr. “Social Media and Young Adults.” February 2010. http://www.pewinternet.org/Reports/2010/Social-Media-and-Young-Adults.aspx.
5 Lygidakis, Charilaos, Sara Rigon, Silvio Cambiaso, Elena Bottoli, Federica Cuozzo, Silvia Bonetti, Cristina Della Bella, and Carla Marzo. “A Web-Based Versus Paper Questionnaire on Alcohol and Tobacco in Adolescents.” Telemedicine Journal & E-Health, vol. 16, no. 9, 2010, pp. 925-930.
6 McCabe, Sean E. “Comparison of Web and Mail Surveys in Collecting Illicit Drug Use Data: A Randomized Experiment.” Journal of Drug Education, vol. 34, no. 1, 2004, pp. 61-72.
7 McCabe, Sean E., Carol J. Boyd, Mick P. Couper, Scott Crawford, and Hannah D’Arcy. “Mode Effects for Collecting Alcohol and Other Drug Use Data: Web and U.S. Mail.” Journal of Studies on Alcohol, vol. 63, no. 6, 2002, p. 755.