
MEMORANDUM


Date: July 29, 2014


To: Shelly Wilkie Martinez, Desk Officer

Office of Management and Budget


From: John Gawalt, Division Director

National Center for Science and Engineering Statistics


Via: Suzanne Plimpton, Reports Clearance Officer

National Science Foundation


Subject: Request to Conduct Usability Test Interviews with Incentives under Generic Clearance


This memorandum is to inform you of NSF’s plans to conduct usability testing of the web version of the Microbusiness Innovation Science and Technology (MIST) survey under the generic clearance for survey improvement projects (OMB number 3145-0174). These interviews will test respondents’ ability to navigate the survey successfully and easily, and will identify any problems with data entry or automatic data calculations. By testing the web survey instrument, we will better understand how easy or difficult the survey is for respondents to complete, how they react to it, and how they navigate it. In addition, this testing will allow us to address any possible errors before going into the field.


Background


The National Center for Science and Engineering Statistics (NCSES) of NSF is broadly tasked with measuring the role of science and technology (S&T) in the United States’ economy and abroad. A major component of this activity is its sponsorship of the Business Research and Development (R&D) and Innovation Survey (BRDIS), which collects information annually on research and development and related activities performed within the United States by industrial firms. However, businesses with fewer than five employees are excluded from this survey.


The National Academy of Sciences’ Committee on National Statistics (CNSTAT) reviewed NSF’s portfolio of R&D surveys and, in 2004, recommended that NSF explore ways to measure firm innovation and investigate the incidence of R&D activities in growing sectors, such as small business enterprises, not currently covered by BRDIS.


As a result, NSF is planning to undertake a survey of R&D and other innovation-related items among very small (i.e., micro), independent U.S. businesses with fewer than five employees. In addition to general business information -- primary business activity (NAICS code), the year the business was formed, and number of employees -- this survey proposes to collect data on R&D, innovation funding, employment, related activities (such as sales of significantly improved goods and services; operating agreements and licensing activities; technology transfer; patents and intellectual property; and sources of technical knowledge), measures of these small firms’ entrepreneurial effectiveness, and demographic characteristics of the entrepreneur.

NSF has completed significant work on the MIST survey: it has conducted cognitive interviews, convened both a data user workshop and an expert panel of academics and other stakeholders, pretested the questionnaire, and conducted debriefing interviews. Under a separate OMB clearance request, NSF is seeking approval to conduct additional data collection testing, including methodological experiments with 4,000 microbusinesses followed by debriefing interviews of 50 respondents and non-respondents. Because a web version of the questionnaire will be used during that additional testing, NSF plans to test the web survey beforehand.


Research Plan

To select microbusinesses for the usability testing, a Small Business Administration (SBA) database (http://dsbs.sba.gov/dsbs/search/dsp_dsbs.cfm) will be used. This site allows users to search for businesses with no more than four employees and with specified NAICS codes (other criteria, such as the state, can also be added).

As NSF’s primary interest in MIST is measuring participation in innovation and R&D, testing will focus on the industries most likely to participate in R&D. These include a variety of manufacturing industries (e.g., Pharmaceutical and Medicine Manufacturing; Paint, Coating, and Adhesive Manufacturing; and Engine, Turbine, and Power Transmission Equipment Manufacturing), Software Publishers, and service industries such as Architectural, Engineering, and Related Services and Computer Systems Design and Related Services.

The businesses will first be asked to complete the entire questionnaire while being observed. The intention is to let respondents simulate, as closely as possible, the normal experience of completing the questionnaire; for this reason, questions and exercises will generally be deferred until the end. For example, if the respondent were asked to discuss the instruction page in extensive detail at the start, the respondent might pay more attention to the instructions than would normally be the case (people often skip past the instructions), changing the rest of the experience. If respondents do not normally read the instructions, we would like to know how well the questionnaire works for them.

Respondents will be encouraged at the start to offer comments as they progress through the questionnaire. Because of the presence of an interviewer, a respondent may sometimes ask questions about the web questionnaire while completing it; if so, the interviewer will take notes on the issues involved and cover them in the debriefing. A checklist will be used to keep track of the types of issues the respondent encounters.

After completing the questionnaire, the respondent will be asked to return to it and work through several scenarios designed to expand the range of experiences, and then to comment on what was encountered. This will help to identify any issues with the error messaging and troubleshooting steps built into the survey. For example, if the respondent has never encountered an error message, he or she will be asked to change a response, and the interviewer will observe the reaction to the resulting error message. Note that this process is somewhat artificial because the respondent is responding to hypothetical situations; it is preferable to let the respondent encounter error messages or other issues naturally. However, this technique allows us to observe the participant’s responses to such situations.

For roughly half of the microbusinesses, we will seek to visit the participants at their business locations so that we can observe and talk to them in person. The remaining participants will be asked to complete the survey online while we share the screen using WebEx.

Each approach has advantages and disadvantages. Both approaches allow for direct observation of the users’ experiences in understanding which questions take more time and where a respondent pauses and appears to have difficulty. The in-person approach allows observation of the user’s facial expressions and hand and eye movements, which is unfortunately not possible over WebEx. (WebEx is capable of transmitting information from webcams, but we cannot count on businesses having webcams set up at their computers.) The WebEx approach allows direct observation of the user’s screen and responses, which may or may not be visible with in-person testing, depending on the size and closeness of the screen and the physical positions of the user, observer, and computer.

While observing the users and probing them about their experiences, the interviewers will examine the following topics:


  • Methods of navigation and problems encountered (e.g., do users make use of the menu or do they proceed directly through the questionnaire; do they go backwards to look at previous responses; and do they express frustration in not being able to go where they want)

  • Types of errors made (this might include skipped questions, errors that result in error messages, and errors in completing matrices)

  • Frequency of and response to error messages (e.g., do users frequently make such errors; do they read through and understand the error messages; do they understand the process for correcting an error; and do they attempt to correct their errors or proceed through the questionnaire without making corrections)

  • Frequency of skipping questions, either temporarily (i.e., with the intention of returning) or permanently, and the reasons (e.g., errors, lack of data, and unwillingness to provide data)

  • Use of help screens and other features (e.g., how much do they make use of the special features versus proceeding directly through the questionnaire)

  • Problems or issues at specific segments of the questionnaire, such as logging in, working with matrices, and submitting final responses

  • User perceptions of the strengths and weaknesses of the web design (e.g., are there any features that they found particularly useful or frustrating, and for what reasons)


To help facilitate the interviews, NSF requests OMB approval to offer respondents a $40 incentive in order to minimize recruiting costs and maximize the time that these busy respondents will spend with us. Extensive research indicates that incentives make a difference in increasing cooperation rates. In particular, incentives are helpful for interviews such as these because of the extra demands they place on the participant: the interviews often consume more time than completing the questionnaire itself, the time must be used as a block (rather than completing the questionnaire in pieces at the respondent’s convenience), and the time must be scheduled, giving the participant less flexibility to meet any needs that arise. Given the special nature of the study and the respondents, we believe an incentive is justified and beneficial. These are very small businesses, and in nearly all cases the respondent will be the owner of the business. We believe the incentive payment will increase respondents’ motivation to help us evaluate the questionnaire. Incentives will be given as a check, delivered in person at the conclusion of the interview or sent through the U.S. Postal Service when the interview is conducted via WebEx.


Anticipated Burden


During FY 2014, we plan to conduct a maximum of 20 usability testing interviews. We aim to minimize respondent burden by asking each company only a subset of questions. The estimated time to complete each interview is about 60 minutes, for a total of 20 hours (20 interviews x 60 minutes = 20 hours). We also expect to contact 80 companies for recruiting purposes, with the recruiting process taking an average of 5 minutes per company, for approximately 7 hours (80 company contacts x 5 minutes = 400 minutes, or about 6.7 hours). Thus, we estimate a total burden of no more than 27 hours for this phase of our research.


The contact person for questions regarding this research is:


Audrey Kindlon, Survey Statistician

National Center for Science and Engineering Statistics

National Science Foundation

4201 Wilson Boulevard, Suite 965

Arlington, VA 22230

Tel: 703-292-2332

e-mail: akindlon@nsf.gov



