Justification Memo

SRS-Generic Clearance of Survey Improvement Projects for the Division of Science Resources Statistics

OMB: 3145-0174
MEMORANDUM

Date: March 11, 2016

To: Shelly Wilkie Martinez, Desk Officer
Office of Management and Budget

From: John Gawalt, Director
National Center for Science and Engineering Statistics

Via: Suzanne Plimpton, Clearance Officer
National Science Foundation (NSF)

Subject: Notification of data collection under generic clearance (Revised)

The purpose of this memorandum is to inform you of NSF’s plan to conduct usability
testing under the generic clearance for survey improvement projects (OMB #3145-0174).
This study is part of a larger set of activities assessing how well our current dissemination
tools and data products meet the needs of our users. We have completed a user analysis
and will follow this proposed study with a formal user needs analysis. This request
describes our plan to conduct usability tests on NCSES data tools and products by
observing users as they interact with these components.
Background
The National Science Foundation’s (NSF) National Center for Science and Engineering
Statistics (NCSES) is the principal source of analytical and statistical reports, data, and
related information that describe and provide insight into the nation’s science and
engineering resources. All of the Center’s data are released in electronic format on NSF’s
centrally administered web server (http://www.nsf.gov/statistics/). NCSES also manages
additional external and internal servers that supplement the central server’s content.
To support the analysis, dissemination, and archiving of the Center's survey data, NCSES maintains a data system composed of several major components:

• The SESTAT database and SESTAT Data Tool, custom-built web-based data tabulation and information applications that respectively store and provide access to both restricted-use and public-use microdata for the SESTAT surveys (http://www.nsf.gov/statistics/sestat/)
• The data repository, an Oracle database with a SAS software application layer for increased user access, analytical capabilities, and data dissemination activities
• The WebCASPAR data system, an online data tool application maintained under a separate contract vehicle (https://ncsesdata.nsf.gov/webcaspar/)
• eTables, a newly developed data table product that is currently generated and publicly disseminated on the web using SAS stored processes (http://ncsesdata.nsf.gov/gradpostdoc/2013/)


In light of technological advances and the changing needs and preferences of its stakeholders, NCSES strives to improve its data dissemination activities by identifying those processes, products, and tools that may require reassessment and
refinement. NCSES faces the same challenges that other federal statistical agencies
experience in understanding and meeting the needs of its data users. We participate in
and benefit from knowledge and experiences shared during discussions among members
of the Federal Interagency Dissemination Group, which brings together the data
dissemination leaders within the federal statistical system. The group is charged with
identifying the ongoing challenges in dissemination and sharing of best practices,
including usability testing, to help support the missions of their respective agencies.
The goal of this project is to develop insight into the functionality of NCSES web-based
tools, applications and data products, including SESTAT, WebCASPAR, eTables, the
data repository, and other web-based data products currently generated from the NCSES
data system. The information gathered will establish a baseline understanding of
NCSES’s current data products and how they facilitate or inhibit access to, and analysis
of, data collected by NCSES. The results of the usability testing will guide NCSES in
prioritizing resources for refining these data tools and products. Results will also inform
NCSES decisions on the future development of new products that better meet the needs of
NCSES's target audience.
Recruitment
NCSES plans to conduct a series of tests to evaluate the usability of these components by
a sample of data users. The sampling method for the interviews is nonprobabilistic,
purposive, and heterogeneous. Potential participants will be selected to be broadly
representative of a participant group and to have knowledge and/or interest in science and
engineering statistics to represent the current user or potential user base, as opposed to
the general population. We plan to recruit participants from the U.S. through targeted
email and telephone messages without any regard to geography. (See Attachment B for
contact scripts.) To maximize response rates as well as reduce the burden of filtering out
potential participants that do not have an interest in science and engineering statistics,
participants will be targeted based on their past interactions with information on the
science and engineering enterprise. Sources for recruitment include individuals who have
contacted an NSF survey manager for assistance, signed up for the National Center for
Science and Engineering Statistics RSS feed, or published papers or reports that incorporate data
on the science and engineering enterprise. We expect testing to start in spring of 2016 and
continue for 12 weeks.


Project Description
Usability testing will be conducted to observe users while they use the tools and
systems described below.
• WebCASPAR [1]: The WebCASPAR Data System is a custom NCSES data tool that provides access to a number of data sources. WebCASPAR is primarily focused on providing institutional data.
• SESTAT [2]: The SESTAT Data Tables and Metadata Explorer is a custom NCSES data tool that reports anonymous data about individual survey respondents.
• SED Tabulation Engine [3]: The Survey of Earned Doctorates (SED) Tabulation Engine is a custom NCSES data tool that provides alternate access to a subset of the data available in WebCASPAR.
• Public Use Files [4]: The GSS, HERD, and FFRDC Public Use Files provide access to NCSES data as raw data that may be opened in Excel or other statistical tools.
• Academic Institutional Profiles [5]: Academic Institutional Profiles show selected NCSES information for U.S. academic institutions.
• State Profiles [6]: Science and Engineering State Profiles is an interactive tool that presents STEM workforce and R&D data for U.S. states.
• eTables [7]: The NCSES eTables present preformatted information rather than the user-generated tables that appear in data tools like WebCASPAR, SESTAT, and the SED Tabulation Engine.

[1] WebCASPAR: https://ncsesdata.nsf.gov/webcaspar/
[2] SESTAT: http://www.nsf.gov/statistics/sestat/
[3] SED Tabulation Engine: https://ncses.norc.org/NSFTabEngine/
[4] Public Use Files: GSS http://www.nsf.gov/statistics/srvygradpostdoc/pub_data.cfm, HERD http://www.nsf.gov/statistics/herd/pub_data.cfm, FFRDC http://www.nsf.gov/statistics/ffrdc/pub_data.cfm
[5] Academic Institution Profiles: http://ncsesdata.nsf.gov/profiles/
[6] Science and Engineering State Profiles: http://www.nsf.gov/statistics/states/
[7] eTables: HERD http://ncsesdata.nsf.gov/herd/2012/, SDR http://ncsesdata.nsf.gov/doctoratework/2010/ and http://ncsesdata.nsf.gov/doctoratework/2013/, NSRCG http://ncsesdata.nsf.gov/recentgrads/2010/, GSS http://ncsesdata.nsf.gov/gradpostdoc/2012/ and http://ncsesdata.nsf.gov/gradpostdoc/2013/, FFRDC http://ncsesdata.nsf.gov/ffrdc/2013/

The testing is designed to generate data on how these tools perform during usage, what
features and capabilities work well for people, and where the tools may create challenges
for users. As outlined in Attachment A (Usability Testing Draft Testing Plan/Script), we
will ask participants to perform a set of tasks with NCSES data tool(s) and ask them to
explain what they are doing while performing the task. With regard to identifying tasks to
be tested, we inventoried all of the functionalities present in each of the data tools (e.g.
sorting, filtering, linking, downloading), and then identified tasks that would test an
individual’s ability to effectively use one or more of those functionalities. Through direct
observation and test subject answers, each test will provide insight into the user
experience for each tool, and simultaneously elicit direct feedback on each tool.
Participants will initially be assigned to one of the following participant groups based on
their affiliation:
• Policy Analysts
• Media
• Academia
• Industry
• Nonprofit Organizations
• Casual Information Seekers
The next table displays the mapping of how participants in the study will be directed to
different products according to their participant group. At the beginning of the test,
participants will be asked about their general job responsibilities, their skill level with
data analysis, and their knowledge of NCSES tools. Specific tasks (displayed in
Attachment A) will be assigned based on the user skill level reported at the beginning of
the interview or the functionality the facilitator is seeking to test to assure good coverage
on all tools. In the table, “primary” indicates that this is the primary tool that will be
tested with the indicated participant group, while the tools labeled as secondary will be
tested as time allows in that participant group’s test sessions.
	
  
Product Selections by Participant Group

Product                           Analysts    Media      Academia   Industry   Nonprofit      Casual Information
                                                                               Organizations  Seekers
--------------------------------  ----------  ---------  ---------  ---------  -------------  ------------------
Academic Institutional Profiles   Secondary   Primary    Primary    Primary    Primary        Primary
State Profiles                    Secondary   Primary    Secondary  Primary    Primary        Primary
WebCASPAR                         Primary     --         Secondary  --         --             --
SESTAT                            Primary     --         Secondary  --         --             --
SED Tabulation Engine             Primary     --         Secondary  Secondary  Secondary      --
eTables                           Secondary   Secondary  Secondary  --         --             Secondary
Public Use Files                  Primary     --         Secondary  --         --             --
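For facilitators scripting session assignments, the mapping above can also be held as a simple data structure. The following Python sketch is illustrative only: the variable and key names are hypothetical, and only the primary/secondary assignments are taken from the table.

    # Illustrative encoding of the product-selection table above. Names are
    # hypothetical; only the primary/secondary assignments come from the memo.
    PRODUCT_SELECTIONS = {
        "Analysts": {
            "primary": ["WebCASPAR", "SESTAT", "SED Tabulation Engine",
                        "Public Use Files"],
            "secondary": ["Academic Institutional Profiles", "State Profiles",
                          "eTables"],
        },
        "Media": {
            "primary": ["Academic Institutional Profiles", "State Profiles"],
            "secondary": ["eTables"],
        },
        "Academia": {
            "primary": ["Academic Institutional Profiles"],
            "secondary": ["State Profiles", "WebCASPAR", "SESTAT",
                          "SED Tabulation Engine", "eTables", "Public Use Files"],
        },
        "Industry": {
            "primary": ["Academic Institutional Profiles", "State Profiles"],
            "secondary": ["SED Tabulation Engine"],
        },
        "Nonprofit Organizations": {
            "primary": ["Academic Institutional Profiles", "State Profiles"],
            "secondary": ["SED Tabulation Engine"],
        },
        "Casual Information Seekers": {
            "primary": ["Academic Institutional Profiles", "State Profiles"],
            "secondary": ["eTables"],
        },
    }

    # Example: the tools a facilitator would start with for a media participant.
    print(PRODUCT_SELECTIONS["Media"]["primary"])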
Test sessions will be conducted virtually over the Internet using the GoToMeeting
collaboration service. Each participant will be asked to give consent for participation and
to be recorded for note-taking purposes. (The consent form is contained in Attachment
C.) Each test session will be 45 to 60 minutes in length. Individuals joining the test
session will be asked to share their screen using the GoToMeeting software and perform
common tasks using the data products and tools, while “talking us through” what they are
doing at the time. Sessions will be recorded through the GoToMeeting tool. While the
test subject is narrating his or her actions, we expect to have an informative dialog with
that user regarding specific tasks, goals, anticipated outcomes, and areas for
improvement.
While observing the users and probing them about their experiences, the following topics
will be examined:
• Methods of navigation and problems encountered (e.g., do users make use of the instructions or do they proceed directly to the tool or survey page; do they go backwards to look at previous screens; and do they express frustration in not being able to go where they want?)
• Types of errors made (any error message generated or problems that prevent completion of the task, i.e., critical errors and non-critical errors). An error is designated as "critical" when the user is unable to complete the task successfully or encounters a system-level error message.
• Frequency of and response to error messages (e.g., do users frequently make such errors; do they read through and understand the error messages; do they understand the process for correcting an error; and do they attempt to correct their errors or give up?)
• Use of help screens and other features (e.g., how much do they make use of the special features versus proceeding directly through the tool or NCSES websites?)
• Problems or issues with specific features and capabilities of the tool
• User perceptions of the strengths and weaknesses of the web design (e.g., are there any features that they found particularly useful or frustrating, and for what reasons?)

Burden Information
We expect to invite up to 100 participants with the goal of obtaining participation from
10 to 20 people. We expect the public burden of the recruiting process to average
five minutes per person, resulting in approximately 8.3 hours of burden (100 invitees x
5 minutes = 500 minutes, or about 8.3 hours). The estimated time for the testing is 1 hour. At most,
20 people will participate; therefore, the maximum total burden for the testing
activity would be 20 hours (20 responses x 1 hour = 20 hours). Thus, we estimate a total
burden of 28.3 hours for this research.
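As a cross-check, the burden arithmetic can be worked through in a minimal Python sketch; the figures come directly from the estimates above, and the variable names are illustrative only.

    # Minimal sketch of the burden arithmetic above. Figures come from the
    # memo; variable names are illustrative only.
    MAX_INVITEES = 100       # upper bound on recruitment contacts
    RECRUIT_MINUTES = 5      # average recruiting burden per person, in minutes
    MAX_PARTICIPANTS = 20    # upper bound on completed test sessions
    SESSION_HOURS = 1        # estimated burden per test session, in hours

    recruit_hours = MAX_INVITEES * RECRUIT_MINUTES / 60  # 500 min ~ 8.3 hours
    testing_hours = MAX_PARTICIPANTS * SESSION_HOURS     # 20 hours
    total_hours = recruit_hours + testing_hours          # ~ 28.3 hours

    print(f"Recruiting: {recruit_hours:.1f} h, testing: {testing_hours} h, "
          f"total: {total_hours:.1f} h")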


Incentive Payments
There are no incentive payments.
Contact Information
The contact person for questions regarding this data collection is:
May Aydin
Supervisory Program Director
National Science Foundation
(703) 292-4977
maydin@nsf.gov
Attachments
A – Test plan
B – Draft contact scripts
C – Consent form
cc: Joydip Kundu
May Aydin
Rebecca L. Morrison


