REQUEST FOR APPROVAL under the Generic Clearance for NASA Education Performance Measurement
and Evaluation, OMB Control Number 2700-0159, expiration 04/20/2018
_____________________________________________________________________________________
I. TITLE OF INFORMATION COLLECTION: NASA Office of Education Undergraduate Internship Impact Surveys - Baseline Instruments I

II. TYPE OF COLLECTION:
☐ Focus Group Protocol
☐ Usability Protocol
☐ Cognitive Interview Protocol
☒ Attitude & Behavior Scale
☐ Satisfaction Survey
☒ Baseline Survey
☐ Follow-up Survey

III. GENERAL OVERVIEW: The NASA Internship, Fellowship, and Scholarship (NIFS) line of business (LOB)
leverages NASA’s unique missions and programs to enhance and increase the capability, diversity, and size of
NASA’s and the Nation’s future STEM workforce. In so doing, NASA Education manages its undergraduate
internships through the NIFS LOB. NASA Internships are defined as competitive awards to support educational
work opportunities that provide unique NASA-related experiences for educators and high school,
undergraduate, and graduate students. Note, however, that the focus of this information collection is
undergraduate internships. These internships engage students with real-world experiences while contributing
to the operation of a NASA facility or the advancement of NASA’s missions. The internship process is supported
by the One Stop Shopping Initiative (OSSI), which provides a NASA-wide integrated application, selection, and
data collection/reporting system that is centrally located at https://intern.nasa.gov.
IV. INTRODUCTION AND PURPOSE: Internships are distinguished from other experiential learning opportunities
by a focus on mentor-directed, degree-related, workplace task completion within an authentic learning
environment (Linn, 2004; Herrington & Herrington, 2005). Our interest is in understanding why, how, and in what
ways students are impacted in the short-, intermediate-, and long-term by participation in NIFS internship
experiences. Thus, the purpose of pilot testing is to develop valid instruments that reliably explain the ways in
which participants' attitudes and behaviors are affected by the experiential learning opportunity of the
internship. Guided by current STEM education, research, and measurement methodologies, the goal of this
rigorous instrument development and testing procedure is to provide information that becomes part of the
iterative assessment and feedback process for this line of business. This information collection includes
instruments designed to assess intended outcomes associated with participation in a NASA internship
experience. Of the myriad undergraduate STEM-related educational outcomes of interest to NASA Education
(Crede & Borrego, 2013; Duckworth, Peterson, Matthews, & Kelly, 2007), this first pilot cycle includes two
descriptive surveys to collect predictor variable data and two surveys to assess psycho-social factors
hypothesized in the research literature as relevant to success in STEM disciplines (Xie, Fang, & Shauman, 2015).
General descriptions of the instruments are as follows:
o General expectations for the NASA internship experience
o Preparedness to undertake research in a laboratory and/or field setting (e.g., Gilmore, Vieyra,
Timmerman, Feldon, & Maher, 2015)
o Development related to students’ intention to complete their degrees and satisfaction with their
programs (Crede & Borrego, 2013)
o Grit or perseverance towards achieving long-term goals (Duckworth, Peterson, Matthews, & Kelly,
2007)


Hence, the goals of this cycle of pilot testing are as follows:
o Determine preliminary psychometric properties (e.g., validity, reliability) of the instruments, explore
individual item functioning, and make any necessary adjustments in preparation for large-scale testing
as the basis for more sophisticated statistical testing (an illustrative reliability check follows this list).
o Determine which of two testing designs, the traditional pretest-posttest or the retrospective pretest
method, obtains the most accurate responses with the highest response rate while minimizing burden
on respondents.
o Determine an accurate response burden for these instruments.
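As a concrete illustration of the first goal, the brief sketch below computes Cronbach's alpha, one common preliminary indicator of internal-consistency reliability. It is a hypothetical example using invented response data and the NumPy library; it is not part of the approved instruments or of NASA Education's specified analysis plan.

```python
# Hypothetical sketch: estimating internal-consistency reliability (Cronbach's
# alpha) for a pilot attitude/behavior scale. All data below are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert-type scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_score_variance)

rng = np.random.default_rng(1)
latent = rng.normal(3.5, 0.6, size=(200, 1))   # simulated respondent disposition
responses = np.clip(np.round(latent + rng.normal(0, 0.5, size=(200, 8))), 1, 5)
print(f"Cronbach's alpha ~ {cronbach_alpha(responses):.2f}")
```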
V. RESEARCH DESIGN OVERVIEW: NASA Education is using a one-group pretest-posttest quasi-experimental
design, with the addition of a retrospective pretest for one attitude and behavior survey to test the possibility of
reducing the burden imposed upon the public. Because NASA Education anticipates frequent use of attitude and
behavior surveys and knowledge surveys, the phenomenon of response shift bias is of particular concern. For this
reason, one retrospective survey has been inserted into this pilot testing rotation. Despite the absence of a control
group, this design can still support strong causal inferences when effort is made to satisfy the requirements of
quasi-experimentation, such as identifying and reducing the plausibility of alternative explanations for the
internship-as-treatment effect (Shadish, Cook, & Campbell, 2002), identifying conceivable threats to internal
validity, and statistically probing the likelihood of treatment-outcome covariation (Mark & Reichardt, 2009).
Empirical research (e.g., Howard, 1980; Drennan & Hyde, 2008; Nimon, 2014) suggests that a retrospective
pretest (then-test) may provide a more accurate pre-intervention measure than a traditional pretest if it happens
that respondents change their perceptions of their initial level of functioning as a consequence of the
intervention. In other words, respondents change their internal standards of measurement having gained in
experience or familiarity with the self-rating dimension(s) (Nimon, 2014). According to Norman (2003),
“[r]esponse shift theory presumes that [participants’] prior state is adjusted in retrospective judgment on the
basis of new information acquired in the interim, so that the retrospective judgment is more valid” (p. 243). The
statistical manifestation of rating oneself on a different dimension or metric at post-test results in a mismatch
between pre- and post-test scores known as response shift bias (Goedhart & Hoogstraten, 1992). The
retrospective pretest is considered to be a valid assessment tool when respondents cannot be expected to know
what they do not know at the onset of an intervention (Pelfrey & Pelfrey, 2009). Such may be the case with
respondents who are participating in a NASA opportunity and/or are completing an attitude and behavior or
knowledge survey for the very first time. Response shift bias is identified through administration of a traditional
pretest, posttest, and then-test, wherein some respondents are administered the traditional pretest and posttest
set and other respondents are administered the then-test.
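To make the design concrete, the sketch below illustrates how the traditional pretest and the retrospective then-test might be compared to probe for response shift bias and treatment-outcome covariation. It is a hypothetical analysis sketch using invented data and assumes the NumPy/SciPy stack; it does not represent NASA Education's specified analysis plan.

```python
# Hypothetical sketch: probing response shift bias by comparing a traditional
# pretest (Group A) against a retrospective "then-test" (Group B). Invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Group A completes a traditional pretest and a posttest.
pre_a = rng.normal(3.6, 0.5, size=100)
post_a = pre_a + rng.normal(0.3, 0.4, size=100)

# Group B completes a posttest plus a retrospective pretest (then-test).
then_b = rng.normal(3.2, 0.5, size=100)
post_b = then_b + rng.normal(0.7, 0.4, size=100)

# A then-test mean well below the traditional pretest mean is one indication
# that respondents recalibrated their internal standards (response shift).
shift = stats.ttest_ind(pre_a, then_b, equal_var=False)
print(f"pretest mean {pre_a.mean():.2f} vs then-test mean {then_b.mean():.2f}; "
      f"t = {shift.statistic:.2f}, p = {shift.pvalue:.3f}")

# Treatment-outcome covariation can then be examined within each design.
print(stats.ttest_rel(post_a, pre_a))   # traditional pre/post gain
print(stats.ttest_rel(post_b, then_b))  # retrospective then/post gain
```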
Following this pilot phase of testing and the subsequent determination of instrument psychometric properties,
NASA Education has tentative research questions and hypotheses to test regarding the impact of internship
experiences on NASA internship awardees. Thus, this work is integral to the iterative assessment and feedback
process for the NASA Internships, Fellowships, and Scholarships line of business.

VI. TIMELINE: Testing of surveys will take place approximately January 18, 2016 through August 1, 2016. These
dates coincide with the 2016 Spring and Summer internship sessions. Internship data for 2011 through 2015 show
a 20% annual increase in internship placements, with an average of 1,074 placements across the spring and
summer sessions in those years. In that light, we are confident that within this time frame we will acquire the
statistically relevant number of responses to the pilot pre- and post-internship surveys by using particular
strategies to increase response rates, countering historically low response rates and the challenges presented by
attrition (Barclay, Todd, Finlay, Grande, & Wyatt, 2002).

VII. SAMPLING STRATEGY: NASA Education employed an estimation procedure to determine the statistically
adjusted number of respondents for the final sample size that meets the minimum criterion for number of
respondents (N ≥ 200) necessary to determine preliminary item characteristics (Kromrey & Bacon, 1992;


Reckase, 2000). This estimation procedure accounts for the potential respondent universe, the estimated variance
in that universe, the desired precision, the confidence level, and the prior observed response rate for this category
of respondents (Watson, 2001). Watson's sample size formula, as applied to Spring and Summer 2015 data in
Table 1, indicates the number of respondents this pilot effort should reach in order to collect the base sample
size of respondents (an illustrative calculation follows Table 1). In brief, the formula suggests that this pilot effort
oversample by 200 respondents. NASA Education will randomly sample from the OSSI database of internship
placements at the conclusion of the selection process, which is completed by December 31, 2015.
Table 1. Calculation chart to determine the statistically relevant number of respondents

Data Collection Source: OSSI Internship Placements
(N) Population Estimate for FY 2015 Q4: 1,379
(A) Sampling Error +/- 5% (.05, squared): 0.0025
(Z) Confidence Level 95% / Alpha 0.05 (z = 1.96, squared): 3.84
(P) Variability (based on consistency of intervention administration), 50%: 0.50
Base Sample Size: 300
Response Rate: 0.60
(n) Number of Respondents: 500
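Assuming Table 1 follows Watson's (2001) finite-population formula, n = N*P*(1-P) / [(N-1)*(A^2/Z^2) + P*(1-P)], divided by the expected response rate, the short sketch below reproduces the base sample size of roughly 300 and the oversampled target of roughly 500 respondents. It is offered only as an illustration of the arithmetic, not as the official worksheet NASA Education used.

```python
# Illustrative reproduction of the Table 1 arithmetic using Watson's (2001)
# finite-population sample-size formula (an assumption about how the table
# was derived, not an official NASA Education calculation tool).

def base_sample_size(N: float, A: float, Z: float, P: float) -> float:
    """n = N*P*(1-P) / ((N-1)*(A**2/Z**2) + P*(1-P))"""
    return (N * P * (1 - P)) / ((N - 1) * (A**2 / Z**2) + P * (1 - P))

N = 1379           # population estimate, FY 2015 Q4 internship placements
A = 0.05           # sampling error of +/- 5% (A**2 = 0.0025, as in Table 1)
Z = 1.96           # z-score for 95% confidence (Z**2 = 3.84, as in Table 1)
P = 0.50           # assumed variability of 50%
response_rate = 0.60

base = base_sample_size(N, A, Z, P)      # about 300
target = base / response_rate            # about 500 respondents to contact
print(f"base sample size ~ {base:.0f}; respondents to sample ~ {target:.0f}")
```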

VIII. BURDEN HOURS: The burden calculation is based on a respondent pool of individuals who complete a set of
instruments before the internship begins (Table 2):
Table 2. Baseline Instruments

Data Collection Source             | Statistically Adjusted Number of Respondents | Frequency of Response | Total Minutes per Response | Total Response Burden in Hours
Internship Expectations Pre-Survey | 500 | 1 | 3 | 25
Research Preparation Pre-Survey    | 500 | 1 | 3 | 25
Grit Outcome Survey                | 500 | 1 | 3 | 25
Total                              |     |   |   | 75.0
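The totals above follow directly from the per-instrument figures; a minimal sketch of that arithmetic, assuming the column values shown in Table 2, is included for transparency.

```python
# Minimal sketch of the Table 2 burden arithmetic (values taken from the table;
# the helper itself is illustrative and not part of the collection instruments).

def burden_hours(respondents: int, responses_each: int, minutes_per_response: int) -> float:
    """Total response burden in hours for one instrument."""
    return respondents * responses_each * minutes_per_response / 60

instruments = [
    "Internship Expectations Pre-Survey",
    "Research Preparation Pre-Survey",
    "Grit Outcome Survey",
]
per_instrument = {name: burden_hours(500, 1, 3) for name in instruments}  # 25.0 hours each
print(per_instrument, sum(per_instrument.values()))                       # total: 75.0 hours
```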
IX. DATA CONFIDENTIALITY MEASURES: Any information collected under the purview of this clearance will be
maintained in accordance with the Privacy Act of 1974, the E-Government Act of 2002, the Federal Records Act,
and, as applicable, the Freedom of Information Act in order to protect respondents' privacy and the
confidentiality of the data collected.

X. PERSONALLY IDENTIFIABLE INFORMATION:
1. Is personally identifiable information (PII) collected? ☒ Yes ☐ No
2. If yes, will any information that is collected be included in records that are subject to the Privacy Act of
1974? ☒ Yes ☐ No
3. If yes, has an up-to-date System of Records Notice (SORN) been published? ☒ Yes ☐ No
Published in October 2007, the Applicable System of Records Notice is NASA 10EDUA, NASA Education
Program Evaluation System - http://www.nasa.gov/privacy/nasa_sorn_10EDUA.html.

APPLICABLE RECORDS: Completed surveys will be retained in accordance with NASA Records Retention
Schedule 1, Item 68D. Records will be destroyed or deleted when ten years old, or no longer needed,
whichever is longer.
XI. PARTICIPANT SELECTION APPROACH:
Does NASA Education have a respondent sampling plan? ☒ Yes ☐ No


If yes, please define the universe of potential respondents. If a sampling plan exists, please describe it.
The universe of potential respondents includes undergraduate students participating in a NASA
internship.

If no, how will NASA Education identify the potential group of respondents and how will they
be selected? Not applicable.
XII. INSTRUMENT ADMINISTRATION STRATEGY
Describe the type of consent: ☐ Active ☐ Passive
4. How will the information be collected:
☒ Web-based or other forms of Social Media (Survey Monkey)
☐ Telephone
☐ In-person
☐ Mail
☐ Other
5. Will interviewers or facilitators be used? ☐ Yes ☐ No

XIII. DOCUMENTS/INSTRUMENTS ACCOMPANYING THIS REQUEST:
☐ Consent form
☒ Instrument (attitude & behavior scales, and surveys)
☐ Protocol script (Specify type ________________)
☐ Instructions
☐ Other (Specify ________________)

XIV. GIFTS OR PAYMENT: ☐ Yes ☐ No

XV. ANNUAL FEDERAL COST: The estimated annual cost to the Federal government is $280. The cost is based on an
annualized effort of 8.5 person-hours at the evaluator's rate of $33/hour (8.5 x $33 = $280.50, rounded to $280) for
administering the survey instruments, collecting and analyzing responses, and editing the survey instruments for
ultimate approval through the methodological testing generic clearance with OMB Control Number 2700-0159,
exp. 04/30/2018.

XVI. CERTIFICATION STATEMENT:

I certify the following to be true:
1. The collection is voluntary.
2. The collection is low burden for respondents and low cost for the Federal Government.
3. The collection is non-controversial and does not raise issues of concern to other federal agencies.
4. The results will be made available to other federal agencies upon request, while maintaining
confidentiality of the respondents.
5. The collection is targeted to the solicitation of information from respondents who have experience
with the program or may have experience with the program in the future.
Sponsor: Carolyn Knowles
Title: Director, NASA Internships, Fellowships, and Scholarships, Office of Education
Email address or Phone number: carolyn.knowles-1@nasa.gov
Date: 12/15/2015


Works Cited¹
Barclay, S., Todd, C., Finlay, I., Grande, G., & Wyatt, P. (2002). Not another questionnaire! Maximizing
the response rate, predicting non-response and assessing non-response bias in postal
questionnaire studies of GPs. Family Practice, 19(1), 105-111.
Besterfield-Sacre, M., Shuman, L. J., Wolfe, H., Atman, C. J., McGourty, J., Miller, R. L., . . . Rogers, G. M.
(2000). Defining the outcomes: A framework for EC-2000. IEEE Transactions on Education, 43(2),
100-110.
Crede, E., & Borrego, M. (2013). From ethnography to items: A mixed methods approach to developing a
survey to examine graduate engineering student retention. Journal of Mixed Methods Research,
7(1), 62-80.
Drennan, J., & Hyde, A. (2008). Controlling response shift bias: The use of the retrospective pre-test
design in the evaluation of a master's programme. Assessment & Evaluation in Higher Education,
33(6), 699-709.
Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion
for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087-1101.
Feldon, D. F., Maher, M. A., & Timmerman, B. E. (2010). Performance-based data in the study of STEM
Ph.D. Education. Science, 329, 282-283.
Gilmore, J., Vieyra, M., Timmerman, B., Feldon, D., & Maher, M. (2015). The relationship between
undergraduate research participation and subsequent research performance of early career
STEM graduate students. Journal of Higher Education, 86(6), 834-863.
Goedhart, H., & Hoogstraten, J. (1992). The retrospective pretest and the role of pretest information in
valuative studies. Psychological Reports, 70(3), 699-704.
Herrington, A., & Herrington, J. (2005). What is an authentic learning environment? In A. Herrington, & J.
Herrington (Eds.), Authentic learning environments in higher education (pp. 1-14). Hershey, PA:
IGI Global. doi:10.4018/978-1-59140-594-8
Howard, G. S. (1980). Response-shift bias: A problem in evaluating interventions with pre/post self-reports.
Evaluation Review, 4(1), 93-106.
Kromrey, J. D., & Bacon, T. P. (1992). Item analysis of achievement tests based on small numbers of
examinees. Paper presented at the annual meeting of the American Educational Research
Association. San Francisco.
Linn, P. (2004). Theories about learning and development in cooperative education and internships. In P.
L. Linn, A. Howard, & E. Miller, Handbook for research in cooperative education and internships
(pp. 11-28). Mahwah, NJ: Erlbaum Publishers.
Mark, M. M., & Reichardt, C. S. (2009). Quasi-experimentation. In L. Bickman, & D. J. Rog (Eds.), The
SAGE handbook of applied social research methods (2nd ed., pp. 182-214). Thousand Oaks, CA:
SAGE Publications, Inc.
Nimon, K. (2014). Explaining differences between retrospective and traditional pretest self-assessments:
Competing theories and empirical evidence. International Journal of Research & Method in
Education, 37(3), 256-269.
Norman, G. (2003). Hi! How are you? Response shift, implicit theories and differing epistemologies.
Quality of Life Research, 12, 239-249.
Pelfrey, Sr., W. V., & Pelfrey, Jr., W. V. (2009). Curriculum evaluation and revision in a nascent field: The
utility of the retrospective pretest-posttest model in a Homeland Security program of study.
Evaluation Review, 33(1), 54-82.

¹ All works cited are available to share via PDF upon request. Please request directly from lisa.e.wills@nasa.gov.


Reckase, M. D. (2000). The minimum sample size needed to calibrate items using the three-parameter
logistic model. Paper presented at the annual meeting of the American Educational Research
Association. New Orleans.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for
generalized causal inference. (2nd ed.). Boston, MA: Houghton Mifflin Company.
Watson, J. (2001). How to Determine a Sample Size: Tipsheet #60. Retrieved from Penn State
Cooperative Extension: http://www.extension.psu.edu/evaluation/pdf/TS60.pdf
Xie, Y., Fang, M., & Shauman, K. (2015). STEM Education. Annual Review of Sociology, 41, 331-357.


