Supporting Statement

NIST Generic Clearance for Usability Data Collections

OMB Control #0693-0043

Expiration Date: 06/30/2025



Stakeholder Analysis: Emerging Technology in Public Safety Dispatch Use-Cases



FOUR STANDARD SURVEY QUESTIONS



  1. Explain who will be surveyed and why the group is appropriate to survey.


The User Interface/User Experience (UI/UX) portfolio in the Public Safety Communications Research (PSCR) Division of the Communications Technology Laboratory (CTL) intends to survey and interview public safety personnel and professionals responsible for public safety dispatch, including 911 call center operations.


This study has two central aims: 1) to describe the incident response experiences of public safety dispatch workers and officials; and 2) to guide the technical and computational development of virtual environments used in public safety dispatch, user interface and user experience, and decision science research.


The Public Safety (PS) community works continuously to protect lives and property, striving to maintain standards of performance and minimize losses across a wide range of incidents, from ordinary events to large-scale disasters. Effective responses therefore require accurate and efficient communication across disciplines, jurisdictions, and incident command structures. Extended Reality (XR) technologies, such as augmented and virtual reality, have the potential to improve these communications by sorting, integrating, and displaying information more effectively than standard approaches. XR also offers a low-cost, high-fidelity approach to communications and incident response training.

However, XR is a rapidly developing field in which novel technologies reach users at an increasing pace, and as a result, research on user experience, particularly in the specialized fields of public safety, is limited. This study aims to describe the incident response experiences of public safety dispatch stakeholders, that is, individuals working within or contributing to the public safety dispatch community (including 911 call center operations), by interviewing these users about their experiences.

Results from this study will be used for three purposes: 1) to inform the science on the experiences, beliefs, and behavior of the public safety dispatch community; 2) to develop stakeholder-informed technical requirements; and 3) to use those requirements to design more effective research and development programs. The goal of this work is to better capture the technical and non-technical aspects of both routine and challenging training scenarios that first responders may face in the field. This work supports research and development programs for public safety technology by academia, industry, and government agencies.


Additional background if needed:


References

Choong, Y. Y., Dawkins, S. T., Furman, S. M., Greene, K., Prettyman, S. S., & Theofanos, M. F. (2018). Voices of First Responders–Identifying Public Safety Communication Problems: Findings from User-Centered Interviews, Phase 1, Volume 1.


Fischhoff, B., & Broomell, S. B. (2020). Judgment and decision making. Annual Review of Psychology, 71(1), 331-355.


Hammond, K. R. (1988). Judgment and decision making in dynamic tasks. Information and Decision Technologies, 14, 3-14.


Hastie, R., & Dawes, R. M. (2009). Rational choice in an uncertain world: The psychology of judgment and decision making. Sage Publications.


Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.


Whitlock, M., Chelen, J., & Ledgerwood, S. (2022). Toward VR-based Training Modules for Public Safety Scenarios. In ACM CHI ’22 - VR [we are] Training Workshop, April 25, 2022.


Wickens, C. D., Helton, W. S., Hollands, J. G., & Banbury, S. (2021). Engineering psychology and human performance. Routledge.



  2. Explain how the survey was developed including consultation with interested parties, pre-testing, and responses to suggestions for improvement.


The survey portion of this study is an adaptation of Chelen et al. (2021). The interview questions were first developed by Chelen through support from Drs. Baruch Fischhoff and Gabrielle Wong-Parodi in the Department of Engineering and Public Policy at Carnegie Mellon University. The questions were refined through collaborative research supported by Dr. Amber Barnato at The Dartmouth Institute at the Geisel School of Medicine at Dartmouth College. The questions are informed by two pilot studies conducted by the User Interface/User Experience (UI/UX) portfolio and the Public Safety Communications Research (PSCR) Division in Spring and Fall of 2022. These pilot efforts included revisions of questions designed by the PSCR Usability Team (Gaithersburg Campus). The survey and interview questions were reviewed and amended with input from public safety, dispatch, and 911 call center experts with the First Responder Network Authority.


Reference:

Chelen JSC, White DB, Zaza S, Perry AN, Feifer DS, Crawford ML, Barnato AE. US Ventilator Allocation and Patient Triage Policies in Anticipation of the COVID-19 Surge. Health Secur. 2021 Sep-Oct;19(5):459-467. doi: 10.1089/hs.2020.0166. Epub 2021 Jun 9. PMID: 34107775.



  3. Explain how the survey will be conducted, how customers will be sampled if fewer than all customers will be surveyed, expected response rate, and actions your agency plans to take to improve the response rate.


The study procedure has two phases: a recruitment survey, followed by semi-structured interviews with purposively sampled public safety personnel and professionals responsible for public safety dispatch.

1. Survey: delivered to a broad, potentially overlapping sampling frame via preexisting public safety and professional society distribution lists, as well as participant referrals. Survey items will include: (1) consent for interview; (2) demographic and experience-related questions; (3) training status and upload; (4) extended reality knowledge and background; (5) a free-text field for additional input; and (6) respondent name and contact information for recontact for a future interview (optional).

2. Interviews: semi-structured, recorded interviews with purposively sampled public safety personnel and professionals responsible for public safety dispatch.


This study includes 18 participants recruited through a previous study (OMB Control #0693-0043) who consented to participation, received and completed the survey, consented to an interview, and provided contact information, but did not meet all inclusion criteria. We will complement this sample with additional recruitment to fulfill our objective of at least 20 participants.


The study uses an online Qualtrics survey that will be distributed as a URL emailed to a broad, potentially overlapping sampling frame using pre-existing public safety and professional society distribution lists, as well as participant referrals. Recipients will have 2-4 weeks to consider participation before the survey closes and will receive two reminder emails during that window.


The recruitment survey is intended to be brief, and respondents are not required to answer every question. Qualtrics estimates that the survey takes approximately 12 minutes to complete. However, this is a high estimate inflated by the text entry options we provide to accommodate respondents who may want to provide more input.


We intend to recruit at least 20 interview participants to support reasonable qualitative interpretation. Given the response rates discussed below, we plan to use distribution lists totaling over 1,500 recipients combined and therefore expect approximately 150 survey responses. The burden, including time to navigate to the survey (15 minutes per respondent), is calculated as 150 (participants) x 15 minutes / 60 = 37.5 burden hours.
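The burden estimate above can be reproduced with a short calculation. This is an illustrative sketch only; the recipient count, open rate, and per-response time are the figures stated in this section, not new data:

```python
# Burden-hour estimate for the recruitment survey (figures from this section).
recipients = 1500          # combined size of the distribution lists
open_rate = 0.10           # share expected to open the email and reach the survey
minutes_per_response = 15  # 12-minute survey plus time to navigate to the URL

responses = recipients * open_rate                      # expected survey responses
burden_hours = responses * minutes_per_response / 60    # total respondent burden

print(responses, burden_hours)
```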


Past pilots suggest that approximately 10% of email recipients will open the email and navigate to the Qualtrics URL. With distribution lists totaling over 1,500 recipients combined, we therefore expect approximately 150 people to reach the survey. Past pilots indicated high completion rates after navigating to the URL (>90%); however, the literature on similar studies suggests a completion rate of 30% or lower. Using the more conservative rate, we expect approximately 45 completed surveys. Past studies suggest that 27 (60%) of these survey respondents will be willing to participate in an interview, and we expect that 5% of those respondents will be unavailable due to scheduling or other incidental limitations.
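The recruitment funnel implied by these rates can be traced step by step. The rates below are the conservative figures stated above; the calculation is illustrative, not a projection beyond what the text already claims:

```python
# Recruitment funnel using the conservative rates stated in this section.
recipients = 1500
navigate = recipients * 0.10   # ~10% open the email and reach the survey
completed = navigate * 0.30    # conservative (literature-based) completion rate
willing = completed * 0.60     # ~60% consent to a follow-up interview
available = willing * 0.95     # ~5% lost to scheduling or other limitations

print(round(navigate), round(completed), round(willing), round(available))
# 150 45 27 26
```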


If more than 40 respondents indicate a willingness to be interviewed, we will group the respondents by Public Safety Organization (PSO), exclude PSOs with only a single respondent, and randomly select 2-3 respondents from each remaining PSO. We will communicate the possibility of non-selection to the respondents in the recruitment materials.
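The grouped selection procedure can be sketched as follows. This is a hypothetical illustration of the logic described above, not the study's actual selection code; the function name and data shapes are assumptions:

```python
import random
from collections import defaultdict

def select_interviewees(respondents, per_pso=(2, 3), seed=None):
    """Group willing respondents by Public Safety Organization (PSO),
    exclude PSOs with a single respondent, and randomly pick 2-3 from each.

    `respondents` is a list of (name, pso) tuples. Illustrative sketch only.
    """
    rng = random.Random(seed)
    by_pso = defaultdict(list)
    for name, pso in respondents:
        by_pso[pso].append(name)

    selected = []
    for pso, names in by_pso.items():
        if len(names) < 2:  # exclude PSOs with only a single respondent
            continue
        k = min(rng.randint(*per_pso), len(names))
        selected.extend(rng.sample(names, k))
    return selected
```

Grouping before sampling keeps any single organization from dominating the interview pool while still leaving the within-group choice to chance.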


Following the survey, the research staff will contact the respondents who provided consent to schedule interviews. The interviews will take place remotely using a password-protected, NIST-approved online conferencing platform, such as Microsoft Teams, to record and auto-transcribe the content. The recordings will be used to verify the transcripts and subsequently deleted.




We are collecting contact information for interview scheduling purposes only. We will discard this information immediately following the interview and all documentation will be deleted.



  4. Describe how the results of the survey will be analyzed and used to generalize the results to the entire customer population.



This study relies on a purposive convenience sample. Those who participate are likely to be systematically different from those who do not participate. Therefore, the data produced in this study will be used to describe this sample and will not be generalized to the broader population.


Analyses will include exploratory data analysis and summary and descriptive statistics (for example, means and standard deviations or medians and interquartile ranges, cross-tabulations, and graphical displays). Results of the survey will be analyzed to produce descriptive statistics summarizing the users who consent to participate (including those willing and unwilling to participate in an interview). These descriptive results will be interpreted and disseminated in the context of this cross-sectional purposive sample and its limitations. Results of the interviews will include qualitative analysis of interview transcripts.


Qualitative content analysis of the interviews will use a combination of two methods. We will compare the results of the two methods and assess the reliability of interviewers and coders.

Method 1: The multidisciplinary team of investigators will use an adapted form of constant comparative analysis. The investigators will use iterative readings and discussion of the transcript data to identify initial themes. These themes will be operationalized in a codebook that will be applied to subsequent data. During primary data collection, investigators will meet regularly to debrief interviews and to document potential sources of bias and limitations.

Method 2: We will use Python-based natural language processing of the texts in three steps: 1) development and validation of a word and phrase dictionary; 2) development of code to extract text-based measures; and 3) analysis of the text-based measures.
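The dictionary-and-extraction steps of Method 2 can be illustrated with a minimal sketch. The dictionary terms and patterns below are hypothetical examples; the actual dictionary would be developed and validated by the research team in step 1:

```python
import re
from collections import Counter

# Hypothetical mini-dictionary of dispatch-related terms (illustrative only;
# the study's validated dictionary would replace this).
DICTIONARY = {
    "dispatch": r"\bdispatch(?:er|ers|ing)?\b",
    "training": r"\btrain(?:ing|ed)?\b",
    "xr": r"\b(?:xr|virtual reality|augmented reality)\b",
}

def text_measures(transcript: str) -> Counter:
    """Count dictionary term matches in one interview transcript (step 2)."""
    counts = Counter()
    text = transcript.lower()
    for label, pattern in DICTIONARY.items():
        counts[label] = len(re.findall(pattern, text))
    return counts
```

Per-transcript counts produced this way can then be aggregated and compared against the themes identified in Method 1 (step 3).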




Author: Darla Yonder

File Created: 2024-07-20