Supt. Stmt. Part B - NASA ED Performance Measurement and Evaluation Testing (4-27-18)

Generic Clearance for the NASA Office of Education Performance Measurement and Evaluation (Testing)

OMB: 2700-0159
Contents

B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS
1. Respondent Universe and Sampling Methods
2. Procedures for Collecting Information
3. Methods to Maximize Response
4. Testing of Procedures
5. Contacts for Statistical Aspects of Data Collection
References
APPENDIX A: Descriptions of Methodological Testing Techniques
APPENDIX B: Privacy Policies and Procedures
List of Tables

GENERIC CLEARANCE FOR THE NASA
OFFICE OF EDUCATION/PERFORMANCE MEASUREMENT AND EVALUATION
(TESTING) SUPPORTING STATEMENT

B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

1. RESPONDENT UNIVERSE AND SAMPLING METHODS
Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent
selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units,
households, or persons) in the universe covered by the collection and in the corresponding sample are to be
provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate
expected response rates for the collection as a whole. If the collection had been conducted previously, include the
actual response rate achieved during the last collection.

Respondent Universe
The respondent universe for NASA Education methodological testing consists of individuals
who either participate in NASA Education activities or are staff managing educational activities
(both at NASA and funded through NASA grants, cooperative agreements, and contracts). It is
difficult to anticipate and define every type of potential respondent beyond the most immediate
needs of this generic clearance, but the following describes the individuals who could represent
the respondent universe in this generic submission:
• Undergraduate and graduate students participating in NASA-funded internships and
  fellowships;
• P-12 and informal educators and higher education faculty participating in NASA-funded
  educator professional development;
• Precollege students participating in NASA-funded education projects and activities;
• NASA civil servants who manage projects and activities; and
• Principal investigators and managers of NASA-funded grants, cooperative agreements,
  and contracts.

Respondent categories, each with an estimate of the Potential Respondent Universe (N)
anticipated for this generic clearance, can be found below (see Table 1). The Expected
Response Rate is defined as the rate of response previously observed in NASA Education for
that particular Respondent Category. For instance, precollege, undergraduate, graduate, and
postgraduate students are especially responsive to NASA Education requests for information
because, at the onset of establishing a relationship with NASA, they cannot apply for NASA
internships and fellowships (NIF) with incomplete information. Because these respondents are
aware of the potential to obtain additional opportunities after having been awarded a first one,
they willingly partner with NASA Education to keep their contact information current in order
to access current information pertaining to NIF opportunities. The Office of Education IT
(OEIT) Systems Team has IT applications that allow participants to update contact information
in a less burdensome way: automated delivery of links to OEIT's databases, wherein opening
the link takes the participant directly to a log-in screen appropriate to her or his project,
activity, or program.
Further, these categories of respondents are highly motivated individuals who understand the
value of submitting feedback to optimize future NIF opportunities they may be awarded.
Therefore, they tend to cooperate with NASA Education requests for information at a rate of
about 60%. The same can be said for educator participants, who must complete information in
our systems in order to partake of professional development opportunities or to provide
retrospective feedback on the NASA Education activity they facilitate or instruct.
External program managers are required to submit information to our online data collection
systems, so it is not difficult to leverage Center points of contact to obtain submitted data in a
timely fashion. Therefore, 100% compliance with a request for information, even in the form of
participation in data collection instrumentation, is a reasonable expectation. Note that some
testing methods (e.g., focus groups, cognitive interviews) require nine participants or fewer;
these numbers are not reflected below. Data collected through focus groups and cognitive
interviews for testing purposes will not be used to generalize results, but rather for preliminary
item and instrument development and for piloting only.¹ Table 1 below reflects the potential
respondent universe, expected response rate, and statistically adjusted number of respondents
for each respondent category:
Table 1: Respondent Universe and Relevant Numbers
(N = Potential Respondent Universe; R = Expected Response Rate; n = Statistically Adjusted
Number of Respondents)

Office of Education Performance Measurement System
• Undergraduate and graduate student profiles: N = 22,435; R = 0.6; n = 629
• Educator participant surveys: N = 183,040; R = 0.6; n = 639
• External program manager data collection screens: N = 844; R = 1.0; n = 264

One Stop Shopping Initiative
• Pre-College surveys: N = 1,615; R = 0.6; n = 517
• Undergraduate surveys: N = 10,486; R = 0.6; n = 618
• Graduate surveys: N = 870; R = 0.6; n = 444
• Post-Graduate surveys: N = 241; R = 0.6; n = 247

Total statistically adjusted number of respondents: n = 3,358

¹ Further description of methodological testing techniques can be found in Appendix A.

Sampling Methods
Systematic Random Sampling
For each Respondent Category, for the purposes of piloting instruments, technology support to
the Office of Education will systematically generate a random list whose length corresponds to
n, the Statistically Adjusted Number of Respondents, wherein every kth element from the
population list is selected, with the sampling interval k equal to the population size divided by
the desired sample size (Hesse-Biber, 2010, p. 50). This process attempts to create a sampling
frame that closely approximates, in pertinent characteristics, the Respondent Universe for each
data collection instrument.
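As an illustration of this selection procedure, the following is a minimal sketch in Python of a
systematic random draw under the assumptions described above; the function name, the example
population list, and the use of a shuffled frame are illustrative choices, not part of the approved
procedure.

    import random

    def systematic_random_sample(population, n):
        """Select n elements by stepping through a randomly ordered population
        list at a fixed interval k, starting from a random offset."""
        if n <= 0 or n > len(population):
            raise ValueError("n must be between 1 and the population size")
        frame = list(population)
        random.shuffle(frame)          # randomize order before stepping through the frame
        k = len(frame) // n            # sampling interval
        start = random.randrange(k)    # random starting point within the first interval
        return [frame[start + i * k] for i in range(n)]

    # Illustrative use: draw the 629 undergraduate/graduate profile respondents
    # from a hypothetical list of 22,435 contact records.
    population = [f"participant_{i}" for i in range(22435)]
    pilot_sample = systematic_random_sample(population, 629)
    print(len(pilot_sample))  # 629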
Nonprobability Purposive Sampling
For the purposes of focus groups and cognitive interviews, nonprobability purposive sampling
will be used, wherein the research purpose determines the type of elements or respondents
selected for the sample. This sampling strategy gathers specific informants deemed likely to
exemplify patterns of behavior or characteristics reflective of the Respondent Universe from
which they are drawn, as needed for the purposes of the particular data collection instrument
under development (Hesse-Biber, 2010, p. 126). Even if a focus group or cognitive interview
fails to yield persuasive results, the PAEIM Team will not interview a participant more than
once; instead, the PAEIM Team will recruit an entirely new focus group or set of cognitive
interview participants. Obtaining statistical rigor later in the process begins with avoiding the
introduction of confounding variables in the preliminary stages of instrument design.
Interviewing a participant twice in a cognitive interview, or including her or him in a new focus
group, could be a source of confounding variables and will be avoided entirely.

2. PROCEDURES FOR COLLECTING INFORMATION
Describe the procedures for the collection of information including:

* Statistical methodology for stratification and sample selection:
Not applicable. For the purposes of this data collection instrument development, NASA
Education has no need for instrumentation specific to subgroups within any of the Respondent
Universe categories of interest.
* Estimation procedure:
Because NASA Education has experienced poor survey response rates in some Respondent
Categories pertinent to this clearance package, the number of respondents to reach in order to
obtain a statistically meaningful response is based on the following criteria:
Where:
n = statistically adjusted number of respondents, or number of respondents required in the
final sample size
N = potential respondent universe (number of people in the population)
P = estimated variability in the respondent universe (population), as a decimal (0.5 for
50-50, 0.3 for 70-30)
A = precision desired, expressed as a decimal (e.g., 0.03, 0.05, 0.1 for 3%, 5%, 10%)
Z = based on confidence level: 1.96 for 95% confidence, 1.6449 for 90%, and 2.5758 for 99%
R = expected (estimated) response rate, as a decimal
Thus, utilizing those criteria, Yamane (1973) and Blalock (1972) offer this equation for
determining the statistically adjusted number of respondents for the final sample size:

n = [ P(1 - P) / ( A²/Z² + P(1 - P)/N ) ] / R

Steps in Selecting a Sample Size:
1. Estimating N, potential respondent universe (population size): In the case of
NASA Education project activity participants, we will use prior trends of participation to
estimate N.
2. Determining A, the desired precision of results: The level of precision is the closeness
with which the sample predicts where the true values in the population lie. The difference
between the sample and the real population is called the sampling error or margin of
error. The level of precision accepted depends on balancing accuracy and resources. High
levels of precision require larger sample sizes and thus higher costs to achieve those
samples, but a high margin of error can produce meaningless results. For social science
application in general, an acceptable margin of error or precision level is between 3% and
10%. For the purpose of this first phase of field testing, 5% is an acceptable margin of
error. In the future, given greater availability of funds for data collection instrument
development, it would be ideal to integrate a more stringent 3% margin of error into
determining sample size for the next phase of statistical testing as OEID continues to
monitor and maintain the psychometric properties of NASA Education instruments.
3. Determining Z, confidence level: The confidence level reflects the risk associated with
accepting that the sample is within the normal distribution of the population. A higher
confidence level requires a larger sample size but avoids the statistically insignificant results
associated with a low confidence level. For this social science application, a 95%
confidence level has been adopted.
4. Estimating P, the degree of variability: Variability is the degree to which the attributes
or concepts being measured in the questions are distributed throughout the population
sampled. The higher the degree of variability the larger the sample size must be to
represent the concept or attribute within the sample. For the instances of this social
science application located within the context of STEM education activities, we will
assume moderate heterogeneity and estimate variability at 50%.
5. Estimating R, expected response rate: Base sample size is the smallest number of
responses required for statistically meaningful results. Calculation of sample size must
overcome non-response and should also consider a guesstimate at what a response rate
might be or it can consider response rates experienced with the population of interest.
NASA Education response rates to survey and data collection instrumentation have been
very low in some instances. Response rates vary between 20% and 60% with the
exception of program managers who are required to enter data and thus have a response
rate of 100%. Regardless of this difference in response rates within our community,
characteristics of respondents may differ significantly from non-responders. For this
reason, follow-up samples of the corresponding non-respondent population may be
undertaken to determine differences, if any exist.
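To make the calculation above concrete, the following is a minimal sketch in Python of how the
statistically adjusted number of respondents could be computed from these criteria; the function
name and default argument values are illustrative assumptions, not part of the approved
collection.

    def adjusted_sample_size(N, A=0.05, Z=1.96, P=0.5, R=0.6):
        """Statistically adjusted number of respondents (n), following the
        Yamane (1973) and Blalock (1972) formulation described above.

        N: potential respondent universe (population size)
        A: desired precision (margin of error), as a decimal
        Z: z-score for the chosen confidence level (1.96 for 95%)
        P: estimated variability in the population, as a decimal
        R: expected response rate, as a decimal
        """
        # Base sample size, including the finite-population adjustment.
        base = (P * (1 - P)) / ((A ** 2) / (Z ** 2) + (P * (1 - P)) / N)
        # Inflate the base by the expected response rate to get the number to contact.
        return base, round(base / R)

    # Example: the undergraduate/graduate student profile row of Table 2 below.
    base, n = adjusted_sample_size(N=22435)
    print(round(base), n)  # roughly 378 and 629, matching that row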
For the purposes of large-scale statistical testing, applying the aforementioned variables within
the context of this methodological testing package, so that the collection of responses
statistically resembles each Respondent Universe (data collection source), yields the
calculations in Table 2 below.

Table 2: Respondent Universe and Sampling Calculations
(For every data collection source, A² = 0.0025, Z² = 3.8416, and P = 0.5. N = Potential
Respondent Universe; R = Expected Response Rate; n = Statistically Adjusted Number of
Respondents.)

Office of Education Performance Measurement System
• Undergraduate and graduate student profile: N = 22,435; base sample size = 378; R = 0.6;
  n = 629
• Educator participant surveys: N = 183,040; base sample size = 384; R = 0.6; n = 639
• External program manager data collection screens: N = 844; base sample size = 267;
  R = 1.0; n = 264

One Stop Shopping Initiative
• Pre-College surveys²: N = 1,615; base sample size = 310; R = 0.6; n = 517
• Undergraduate surveys: N = 10,486; base sample size = 371; R = 0.6; n = 618
• Graduate surveys: N = 870; base sample size = 267; R = 0.6; n = 444
• Post-Graduate surveys: N = 241; base sample size = 148; R = 0.6; n = 247

Total statistically adjusted number of respondents: n = 3,358

• Information collected under the purview of this clearance will be maintained in
  accordance with the Privacy Act of 1974, the E-Government Act of 2002, the Federal
  Records Act, and, as applicable, the Freedom of Information Act, in order to protect
  respondents' privacy and the confidentiality of the data collected.³ Further information on
  data security is provided in Appendix B.

* Degree of accuracy needed for the purpose described in the justification,
NASA Education project activities target STEM-related activities. Hence, instrumentation and
the sample with which data collection instrumentation is tested must correspond to a high
degree of accuracy. Moreover, because data from these instruments are used to inform policy, a
high degree of accuracy must be maintained throughout the entire data collection instrument
development process.
* Unusual problems requiring specialized sampling procedures,
Not applicable. The NASA PAEIM Team does not foresee any unusual problems with
executing pilot or large-scale statistical testing via the procedures described.
* Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

² Again, in this instance, the category "pre-college" refers to students who are over the age of
consent, but have not formally enrolled in a college or university. As such, this group of
students applies for opportunities associated with college preparation as a means to become
more competitive for enrollment in college or as a means to explore potential STEM majors
prior to enrolling in a college or university.

³ http://www.nasa.gov/privacy/nasa_sorn_10EDUA.html

• Since this information collection request applies to methodological testing activities, data
  collection activities will occur as needed to gather statistically significant data to
  appropriately determine the validity and reliability characteristics and the psychometric
  properties of instrumentation, where applicable.
• Rigorously tested data collection instrumentation is a requirement for accurate
  performance reporting. If these testing activities are not conducted, NASA will not
  be able to conduct basic program office functions such as strategic planning and
  management.
• Without a timely and complete set of planning, execution, and outcome (survey)
  data collected by valid and reliable instruments, NASA Education will be unable to
  assess program effectiveness, meet federal and agency reporting requirements, or
  make data-informed management decisions.
• Less timely and complete information will adversely affect the quality and reliability
  of the above-mentioned endeavors. The degradation of any single component of our
  data collection would jeopardize the integrity and value of the entire suite of
  applications and the integrity of our databases.

3. METHODS TO MAXIMIZE RESPONSE
Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability
of information collected must be shown to be adequate for intended uses. For collections based on sampling, a
special justification must be provided for any collection that will not yield "reliable" data that can be generalized to
the universe studied.

Maximizing response rates and managing issues of non-response are equally relevant concerns
in recruiting participants for pilot testing and in routine data collection instrument administration.
In that regard, the PAEIM Team, in collaboration with the OEIT Systems Team and Center
Education Offices, intends to utilize the methods described below to reach each targeted
population and yield statistically significant data from a random sample of at least 200
respondents in order to determine initial reliability coefficients and validity (Komrey and Bacon,
1992; Reckase, 2000). The same procedures will be employed during regular data collection
through OMB-approved instruments: the effectiveness of participant recruitment strategies and
response rates are inextricably linked, and any procedures for maximizing response rates, as
complex as they may be, are interdependent (Barclay, Todd, Finlay, Grande, & Wyatt, 2002).
Therefore, despite the wide range of data sources being recruited for study participation
(undergraduate student, graduate student, or educator, for instance), the same strategies for
maximizing response apply.
Study Participant Recruiting
The PAEIM Team will work in collaboration with the OEIT Systems Team and Center
Education Offices to use a combination of recruitment by Center Education Directors and
automatic email reminders, adapted from Swail and Russo (2010), to maximize participant
response rates for data collection instrument testing. Participant contact lists will be solicited
from the appropriate Center Point of Contact (POC) for the respondent population sampled.
Center POCs will have one month to identify respondents who agree to participate and to
submit their contact information to the PAEIM Team. Bi-weekly reminders will be sent, and
follow-up phone calls will be made to POCs as needed.
Participant Assignment to Study
Using random assignment, respondents will be assigned to an instrument for which their
responses are appropriate, with the goal of having equal numbers of participants completing
instruments across testing sites and of avoiding Center effects, meaning responses to survey
instruments related to a participant's Center culture.
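The following is a minimal sketch of one way such a balanced random assignment could be
carried out; the respondent fields, instrument names, and Center abbreviations are hypothetical,
and the round-robin-within-Center approach is an illustrative design choice rather than the
prescribed procedure.

    import random
    from collections import defaultdict

    def assign_to_instruments(respondents, instruments, seed=42):
        """Randomly assign each respondent to one instrument while keeping the
        counts per (Center, instrument) pair as even as possible, so that no
        instrument is dominated by responses from a single Center."""
        rng = random.Random(seed)
        by_center = defaultdict(list)
        for person in respondents:
            by_center[person["center"]].append(person)

        assignments = {}
        for center, group in by_center.items():
            rng.shuffle(group)  # random order within each Center
            for i, person in enumerate(group):
                # Rotate through instruments so each Center contributes evenly to each one.
                assignments[person["id"]] = instruments[i % len(instruments)]
        return assignments

    # Illustrative use with hypothetical respondents and instrument names.
    respondents = [{"id": f"r{i}", "center": c}
                   for i, c in enumerate(["ARC", "GSFC", "JSC"] * 4)]
    print(assign_to_instruments(respondents, ["Survey A", "Survey B"]))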

4. TESTING OF PROCEDURES
Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of
refining collections of information to minimize burden and improve utility. Tests must be approved if they call for
answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for
approval separately or in combination with the main collection of information.

This submission is in itself a request for authorization to conduct tests of data collection
instruments that are in development and/or require OMB approval. The purpose of cognitive and
other forms of intensive interviewing, and of the testing methods in general covered by this
request, is not to obtain data, but rather to obtain information about the processes people use to
answer questions as well as to identify any potential problems in the question items or
instruments prior to piloting with a statistically relevant sample of respondents. In some cases,
focus group and/or cognitive interview protocols will be submitted for OMB approval. In other
cases where the evidence base provided by the educational measurement research literature has
provided a basis for a reasonable instrument draft consistent with a program activity, the
instrument draft will be submitted to OMB for approval for pilot testing. The testing procedures
and methodologies to be used by the NASA Office of Education and its contractors are, overall,
consistent with the evidence base of the educational measurement research literature and with
the practices of other Federal agencies engaged in STEM program performance data collection.

5. CONTACTS FOR STATISTICAL ASPECTS OF DATA COLLECTION
Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of
the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the
information for the agency.

NASA Office of Education leadership, along with Richard L. Gilmore Jr. (NASA Office of
Education, Educational Programs Specialist/Evaluation Manager), has consulted its
contractor support workforce: subject matter experts in performance measure development;
data collection instrument design; quantitative, qualitative, and mixed-methods research;
inferential and descriptive statistics; user-generated content; big data analytics; and education
research and analysis.


References
Barclay, S., Todd, C., Finlay, I., Grande, G., & Wyatt, P. (2002). Not another questionnaire!
Maximizing the response rate, predicting non-response and assessing non-response bias
in postal questionnaire studies of GPs. Family Practice, 19(1), 105-111.
Blalock, H. M. (1972). Social statistics. New York, NY: McGraw-Hill.
Colton, D., & Covert, R. W. (2007). Designing and constructing instruments for social research
and evaluation. San Francisco: John Wiley and Sons, Inc.
Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four
recommendations for getting the most from your analysis. Practical Assessment,
Research & Evaluation, 10(7), 1-9.
Davidshofer, K. R., & Murphy, C. O. (2005). Psychological testing: Principles and applications.
(6th ed.). Upper Saddle River, NJ: Pearson/Prentice Hall.
DeMars, C. (2010). Item response theory. New York: Oxford University Press.
Fabrigar, L. R., & Wegener, D. T. (2011). Exploratory factor analysis. New York, NY: Oxford
University Press.
Haladyna, T. M. (2004). Developing and validating multiple-choice test items (3rd ed.).
Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Hesse-Biber, S. N. (2010). Mixed methods research: Merging theory with practice. New York:
Guilford Press.
Jääskeläinen, R. (2010). Think-aloud protocol. In Y. Gambier & L. Van Doorslaer (Eds.),
Handbook of translation studies (pp. 371-373). Philadelphia, PA: John Benjamins.
Komrey, J. D., & Bacon, T. P. (1992). Item analysis of achievement tests based on small
numbers of examinees. Paper presented at the annual meeting of the American
Educational Research Association. San Francisco.
Kota, K. (n.d.). Testing your web application: A quick 10-step guide. Retrieved from
http://www.adminstrack.com/articles/testing_web_apps.pdf.
Reckase, M. D. (2000). The minimum sample size needed to calibrate items using the
three-parameter logistic model. Paper presented at the annual meeting of the American
Educational Research Association. New Orleans.
Swail, W. S., & Russo, R. (2010). Instrument field test: Quantitative summary. Library of
Congress- Teaching with Primary Sources: Educational Policy Institute.


Wilson, M. (2005). Constructing measures: An item response modeling approach. New York:
Psychology Press.
Yamane, T. (1973). Statistics: An introductory analysis. New York: Harper & Row.


APPENDIX A: Descriptions of Methodological Testing Techniques
• Usability testing: Pertinent here are the aspects of the web user interface (UI) that impact
  the User's experience and the accuracy and reliability of the information Users submit. The
  ease with which Users navigate the data collection screens and the ease with which the User
  accesses the actions and functionality available during the data input process are equally
  important. User experience is also affected by the look and feel of the web UI and the
  consistency of aesthetics from page to page, including the font type, size, and color scheme
  utilized and the ways in which screen real estate is used (Kota, n.d.). The foundation for
  usability testing will be a think-aloud protocol analysis, as described by Jääskeläinen
  (2010), that exposes distractions to accurate input of data, while a short Likert-scale
  survey with qualitative questions will determine the extent and nature of the distractions
  that impede accurate data input.
• Think-aloud protocols (commonly referred to as cognitive interviewing): This data
elicitation method is also called ‘concurrent verbalization’, meaning subjects are asked to
perform a task and to verbalize whatever comes to mind during task performance. The
written transcripts of the verbalizations are referred to as think-aloud protocols (TAPs)
(Jääskeläinen, 2010, p 371) and constitute the data on the cognitive processes involved in
a task (Ericsson & Simon, 1984/1993). When elicited with proper care and instruction,
think-aloud does not alter the course or structure of thought processes, except with a
slight slowing down of the process. Although high cognitive load can hinder
verbalization by occupying all available cognitive resources, that property is of no
concern regarding the tasks under analysis that are restricted to information actively
processed in working memory (Jääskeläinen, 2010, p. 371). For the purposes of NASA
Education, think-aloud protocols will be especially useful for improving existing data
collection screens and developing new ones, which are different in purpose
from online applications. Whereas an online application is an electronic collection of
fields that one either scrolls through or submits, completed page by completed page, data
collection screens represent hierarchical layers of interconnected information for which
user training is required. Since user training is required for proper navigation, think-aloud
protocols capture the user experience to incorporate it into a more user-friendly design
and implementation of this kind of technology. Lastly, data from think-aloud protocols are
used to ensure that user experiences are reliable and consistent, supporting the collection of
robust data.
• Focus group interviews: With groups of nine or fewer per instrument, this qualitative
approach to data collection is a matter of brainstorming to creatively solve remaining
problems identified after early usability testing of data collection screen and program
application form instruments (Colton & Covert, 2007, p. 37). Data from this type of
research will include audiotapes obtained with participant consent, meeting minutes taken
by a subject matter expert in administration assistance, and reflective comments
submitted by participants after conclusion of the focus group. Focus group interviews
may be used to refine items that failed initial reliability testing for the purposes of
retesting. Lastly, focus group interviews may be used with participants as a basis for a
grounded theory approach to instrument development or for refining an already existing
instrument to be appropriate to a specific audience.
• Comprehensibility testing: Comprehensibility testing of program activity survey
  instrumentation will determine whether items and instructions make sense, are free of
  ambiguity, and are understandable by those who will complete them. For example, comprehensibility
testing will determine if items are complex, wordy, or incorporate discipline- or
culturally-inappropriate language (Colton & Covert, 2007, p. 129).
• Pilot testing: After program activity survey instruments have performed satisfactorily in
readability and comprehensibility testing, the next phase is pilot testing with a sample of
the target population that will yield statistically significant data, a random sample of at
least 200 respondents (Komrey and Bacon, 1992; Reckase, 2000). The goal of pilot
testing is to yield preliminary validity and reliability data to determine if items and the
instrument are functioning properly (Haladyna, 2004; Wilson, 2005). Data gleaned from
pilot testing will be used to fine-tune items and the instrument in preparation for more
complex statistical analysis upon large-scale statistical testing.
• Large-scale statistical testing: Instrument testing conducted with a statistically
representative sample of responses from a population of interest. In the case of
developing scales, large-scale statistical testing provides sufficient data points for
exploratory factor analysis (EFA), a multivariate statistical method used to uncover the
underlying structure of a relatively large set of variables and is commonly used when
developing a scale, a collection of questions used to measure a particular research topic
(Fabrigar & Wegener, 2011). EFA is a “large-sample” procedure in which generalizable
and/or replicable results are a desired outcome (Costello & Osborne, 2005, p. 5). This
technique is particularly relevant to examining relationships between participant traits
and the desired outcomes of NASA Education project activities.
• Item response approach to constructing measures: Foundations for testing that address the
importance of item development for validity purposes, address item content to align with
cognitive processes of instrument respondents, and that acknowledge guidelines for
proper instrument development will be utilized in a systematic and rigorous process
(DeMars, 2010). Validity will be determined as arising from item development, from
statistical study of item responses, and from exploring item response patterns via methods
prescribed by Haladyna (2004) and Wilson (2005).
• Split-half method: This method for determining test reliability is an efficient alternative to
  the parallel-forms and test/retest methods. The split-half method does not require developing
  alternate forms of a survey, and it places a reduced burden on respondents in comparison
  to other methods, requiring participation in a single test scenario rather than
  retesting at a later date. This method involves administering a test to a group of
individuals, dividing the test in half along odd and even item numbers, and then
correlating scores on one half of the test with scores on the other half of the test
(Davidshofer & Murphy, 2005).
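Because the split-half computation lends itself to a compact illustration, the sketch below shows
one way it could be carried out in Python on pilot data; the example scores are hypothetical, and
the Spearman-Brown correction shown is the standard adjustment for half-length correlations
rather than a step specified in this document.

    from statistics import correlation  # available in Python 3.10+

    def split_half_reliability(score_matrix):
        """Split-half reliability: correlate respondents' totals on odd- and
        even-numbered items, then apply the Spearman-Brown correction to
        estimate reliability for the full-length instrument."""
        odd_totals = [sum(row[0::2]) for row in score_matrix]
        even_totals = [sum(row[1::2]) for row in score_matrix]
        r_half = correlation(odd_totals, even_totals)  # Pearson r between the two halves
        return (2 * r_half) / (1 + r_half)             # Spearman-Brown corrected estimate

    # Illustrative use: rows are respondents, columns are item scores from a pilot run.
    scores = [
        [4, 3, 4, 5, 3, 4],
        [2, 2, 3, 2, 3, 2],
        [5, 4, 5, 5, 4, 5],
        [3, 3, 2, 3, 3, 3],
    ]
    print(round(split_half_reliability(scores), 2))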


APPENDIX B: Privacy Policies and Procedures
• Information collected under the purview of this clearance will be maintained in
  accordance with the Privacy Act of 1974, the E-Government Act of 2002, the Federal
  Records Act, NPR 7100.1, and, as applicable, the Freedom of Information Act, in order to
  protect respondents' privacy and the confidentiality of the data collected.⁵
• Data is maintained on secure NASA servers and protected in accordance with NASA
regulations at 14 CFR 1212.605.
• Approved security plans are in place for the Office of Education Performance
Measurement (OEPM) system in accordance with the Federal Information Security
Management Act of 2002 and Office of Management and Budget, Circular A-130,
Management of Federal Information Resources.
• Only authorized personnel requiring information in the official discharge of their duties
are authorized access to records from workstations within the NASA Intranet or via a
secure Virtual Private Network (VPN) connection that requires two-factor hardware
token authentication.
• OEPM resides in a certified NASA data center and has met strict requirements relating to
application security, network security, and backup/recovery of the NASA Office of the
Chief Information Officer’s security plan.
• Data will be secured and removed from this server and location according to guidelines set
  out by the NRRS/1392, 68-69. Specific guidelines relevant to the OEPM system include the
  following:
o Project management records documenting basic information about projects and/or
opportunities, including basic project descriptions, funding amounts and sources,
project managers, and NASA Centers, will be destroyed when 10 years old or
when no longer needed, whichever is longer.
o Records of participants (in any format), maintained either as individual files
identified by individual name or number, or in aggregated files of multiple
participants identified by name or number, including but not limited to application
forms, personal information supplied by the individuals, will be destroyed 5 years
after the last activity with the file.
o Survey responses and other feedback (in any format) from project participants and
the general public concerning NASA educational programs, including interest
area preferences, participant feedback, and reports of experiences in projects, will
be destroyed when 10 years old or when no longer needed, whichever is longer.

⁵ http://www.nasa.gov/privacy/nasa_sorn_10EDUA.html

The following Confidentiality Statement and Paperwork Reduction Act (PRA) statement, edited
per data collection source, will be posted on all data collection screens and instruments, and will
be provided to participants in methodological testing activities per NPR 7100.1:
Privacy Act Statement: In accordance with the Privacy Act of 1974, as amended (5 U.S.C. 552a), you are
hereby notified that this study is sponsored by the National Aeronautics and Space Administration (NASA)
Office of Education, under authority of the Government Performance and Results Modernization Act
(GPRMA) of 2010 that requires quarterly performance assessment of Government programs for purposes
of assessing agency performance and improvement. Your participation is important to the success of this
study. The information we collect will help us improve the nature of NASA education project activities and
the accuracy with which NASA Office of Education can report to the stakeholders about the project
activities offered.
Paperwork Reduction Act Statement: This information collection meets the requirements of 44 U.S.C.
§3507, as amended by section 2 of the Paperwork Reduction Act of 1995. You do not need to answer these
questions unless we display a valid Office of Management and Budget (OMB) control number. The OMB
control number for this collection is 2700-0159 and expires 04/30/2018. Send comments to:
richard.l.gilmore@nasa.gov.

List of Tables
Table 1: Respondent Universe and Relevant Numbers
Table 2: Respondent Universe and Sampling Calculations

