
DEPARTMENT OF TRANSPORTATION

INFORMATION COLLECTION SUPPORTING STATEMENT: PART A

TITLE OF INFORMATION COLLECTION: Human Interaction with Driving Automation Systems

OMB CONTROL NUMBER: 2127-0771


ABSTRACT

The National Highway Traffic Safety Administration (NHTSA) has proposed to perform research involving the collection of information from the public as part of a multi-year effort to learn about how humans interact with driving automation systems (DAS). This research will support NHTSA in understanding the potential safety challenges associated with human-DAS interactions, particularly in the context of mixed traffic interactions where some vehicles have DAS and others do not. Within mixed traffic environments, vehicles may also have DAS that perform more or less of the driving task (i.e., different levels of automation) and come with their own sets of expectations and limitations. This research will add to the state of knowledge and is not immediately intended to inform regulations or policy.

The data collections will be performed once to obtain the target number of valid test participants. Study participants will be members of the general public, and participation will be voluntary with monetary compensation provided. Participants are generally healthy individuals aged 18 and older, recruited using the University of Iowa (UI) Driving Safety Research Institute (DSRI) registry and through email blasts to the University of Iowa community. The research takes place in a motion-based simulator, so participants must meet specific safety and practicality requirements. Efforts will be made to enroll a diverse age sample that broadly represents the age of the U.S. driving population and includes those at greater risk of crashing (e.g., less than 25 years of age and greater than 65 years of age). Given the sample size, targeted recruitment efforts are focused on age and sex; addressing other groups would require a much larger effort. No participants will be excluded on the basis of characteristics such as race or ethnicity, skin color, or English proficiency as long as they are able to provide informed consent to participate, though recruitment will not specifically target these characteristics.

The research will be conducted in three parts, referred to as Study 1, Study 2, and Study 3. All study procedures will be approved by the University of Iowa Institutional Review Board (IRB). Data collection will begin upon receipt of Paperwork Reduction Act (PRA) clearance and will involve human-subjects data collection using the driving simulators at the University of Iowa Driving Safety Research Institute (DSRI).

The objective of the first study is to understand how humans interact with DAS in mixed traffic environments. In the first study, participants will take part in pairs, with each participant driving a separate driving simulator but interacting in the same driving environment. Participants will experience one of two driving automation systems. Both members of the participant pair will provide informed consent and complete a pre-drive questionnaire, a training presentation, a familiarization drive, wellness questionnaires to screen for simulator sickness, a study drive, in-drive ratings of trust, a post-drive questionnaire, and a risk-propensity assessment. During the simulator drives, one member of the pair will perform a continuous drive along a specified route. The other member of the pair will complete three short drives in which they interact with the other participant at specific points throughout the drive. The simulator will collect vehicle data (e.g., brake inputs, steering wheel angle) and data about the surrounding environment (e.g., distance to surrounding vehicles and lane markings). After the drives, participants will complete a questionnaire to assess their understanding of the DAS and their trust in and acceptance of the DAS. Data will be analyzed to understand how human drivers interact with DAS in mixed traffic situations and how they perceive and understand automation in different situations.

In the second study, participants will complete a drive in a driving simulator with a driving automation system. The study drive will contain situations to which the DAS must respond. Participants will be randomly assigned to one of three systems that differ in capability, defined by how well the automation navigates the set of test situations (high, medium, or low ability, defined by accuracy and reliability). The simulator will collect vehicle data (e.g., brake inputs, steering wheel angle) and data about the surrounding environment (e.g., distance to surrounding vehicles and lane markings). After the drives, participants will complete a questionnaire to assess their understanding of the DAS and their trust in and acceptance of the DAS, as well as a risk-propensity assessment. Data will be analyzed to understand how human drivers interact with DAS in mixed traffic situations and how they perceive and understand automation in different situations.

In the third study, participants will complete a drive in a driving simulator with a driving automation system. The study drive will contain situations to which the DAS must respond. Participants will be randomly assigned to one of three systems that differ in decision-making, defined by the automation's risk tolerance when navigating the set of test situations (risky, moderate, or conservative). Outside of this, study procedures are the same as those for the second study.

Part A. Justification

1. Circumstances That Make The Collection Of Information Necessary. Explain the circumstances that make the collection of information necessary. Identify any legal or administrative requirements that necessitate the collection. Attach a copy of the appropriate section of each statute and regulation mandating or authorizing the collection of information.

Subchapter V of Chapter 301 of Title 49 of the United States Code (U.S.C.) authorizes the Secretary of Transportation to conduct “motor vehicle safety research, development, and testing programs and activities, including activities related to new and emerging technologies that impact or may impact motor vehicle safety.” 49 U.S.C. 30182. Pursuant to Section 1.95 of Title 49 of the Code of Federal Regulations (CFR), the Secretary has delegated this authority to the National Highway Traffic Safety Administration (NHTSA).

NHTSA’s mission is to save lives, prevent injuries, and reduce the economic costs of road traffic crashes through education, research, safety standards, and enforcement activity. As automated vehicle technologies advance, they have the potential to dramatically reduce the daily loss of life in roadway crashes. Alternatively, the systems may not reach this potential, or could decrease safety when drivers do not understand how to safely interact with the systems or do not understand their capabilities and limitations. This research supports NHTSA’s mission by examining how drivers interact with driving automation systems, particularly in mixed traffic situations that include a combination of automated vehicles and manually driven vehicles.

The information collection components for initial research and the information desired are listed below. Information collection tools for all three studies in the project are listed. Note that participants will only complete one of the three studies. Driving behavior, pre-drive questionnaire, and post-drive questionnaire responses will be combined for analysis.

  1. Eligibility Questionnaire (NHTSA Form 1742) – Necessary for determining individuals’ suitability for study participation based on driving experience, history, general health, and ability to safely drive in the simulator without health concerns. The Eligibility Questionnaire will solely be used to determine individuals’ suitability for study participation and will not be analyzed in any way.

  2. Informed Consent Document Study 1 (NHTSA Form 1743) – Necessary for obtaining informed written consent from the participant to participate in the study. The form describes all study procedures, data storage and use, and potential risks from the study.

  3. Informed Consent Document Study 2 (NHTSA Form 1744) – Necessary for obtaining informed written consent from the participant to participate in the study. The form describes all study procedures, data storage and use, and potential risks from the study.

  4. Informed Consent Document Study 3 (NHTSA Form 1745) – Necessary for obtaining informed written consent from the participant to participate in the study. The form describes all study procedures, data storage and use, and potential risks from the study.

  5. Pre-Drive Questionnaire (NHTSA Form 1746) – Necessary for collecting data used to measure participants’ understanding (i.e., mental model) of DAS and their pre-drive trust in the DAS. Collecting these data before and after the drives will let us measure how exposure to the DAS impacts understanding and trust. Demographic information (e.g., age, sex, gender, race, ethnicity) will also be collected. This pre-drive questionnaire will remain the same across all three studies.

  6. Wellness Questionnaire (NHTSA Form 1747) – Necessary for evaluating simulator sickness symptoms to determine individuals’ ability to complete the study drive in the driving simulator. This questionnaire will be administered pre-drive (to obtain baseline ratings), after the familiarization drive, and after the study drive. This wellness questionnaire will remain the same across all three studies.

  7. Driving Behavior Assessment – This information collection includes a pre-drive PowerPoint training, a familiarization drive, an in-drive questionnaire (NHTSA Form 1748), and the study drive. Before the study drive, participants will complete training via a PowerPoint presentation on a computer in a private study room. The presentation will introduce the simulator, the familiarization and study drive procedures, the DAS, and the non-driving email task. The familiarization drive is necessary to acclimate the participant to the driving simulator and perform a real-time determination for simulator sickness while training the participant on how to use the driving automation system. The study drive is necessary for gathering driving performance information for the purpose of assessing how drivers interact with automated systems and the impact of these interactions on safety. During the study drive, an in-drive questionnaire will be administered. This is necessary for understanding drivers’ trust in the DAS at various points during the study drive. In Study 1, this information is collected after the events where the pair of research participants interact with one another. In Studies 2 & 3, this information is collected after the four events where the behavior of the automation varies across the different conditions. The information will be used to measure trust in the DAS following specific events. These questions will remain the same across all three studies.

  8. Post-Drive Questionnaire (NHTSA Form 1749) – Necessary for collecting data used to measure participants’ understanding (i.e., mental model) of DAS and their post-drive trust in the DAS, as well as general risk-taking behavior while driving. This post-drive questionnaire will remain the same across all three studies.

  9. Balloon Analogue Risk Task (BART) – Necessary for measuring objective risk-taking propensity. For this computerized task, participants are presented with 20 different balloons (20 trials) and told that “the actual number of pumps for any particular balloon will vary.” Participants are instructed to attempt to earn as many points as possible. At the beginning of each trial, the participant decides how many pumps they think the balloon will hold and inputs this number. Each balloon inflates for 3 seconds and then either pops or stays intact depending on whether the participant’s wager was above or below the predetermined explosion point for that balloon. If the balloon is pumped past its explosion point, it pops and the participant earns no points for that balloon. If the balloon is not pumped past the explosion point, the participant keeps the number of pumps as points. After each outcome, a new deflated balloon appears on the screen and the points earned are added to the total. Each balloon can earn a maximum of 128 points, with an explosion point equally likely to occur on any given pump, subject to the constraint that within each sequence of 10 balloons the average explosion point is on pump 64. The task will remain the same across the three studies and is a standardized online tool.
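For illustration only, the scoring logic described in item 9 can be sketched as follows. This is not the standardized online tool itself; the function names, the fixed wager, and the unconstrained random explosion points are simplifying assumptions made for the sketch.

```python
import random

MAX_PUMPS = 128  # maximum possible points per balloon
N_BALLOONS = 20  # 20 trials


def run_bart(explosion_points, get_wager):
    """Score one pass of the wager-style BART described above.

    explosion_points: predetermined explosion point (1..128) for each balloon.
    get_wager: callable returning the participant's wager (number of pumps)
               for a given trial number.
    Returns the total points earned across all balloons.
    """
    total = 0
    for trial, explosion_point in enumerate(explosion_points, start=1):
        wager = get_wager(trial)
        if wager > explosion_point:
            earned = 0      # balloon pops: no points for this balloon
        else:
            earned = wager  # balloon stays intact: wagered pumps become points
        total += earned
    return total


if __name__ == "__main__":
    rng = random.Random(0)
    # Illustrative explosion points only; the actual task constrains each
    # sequence of 10 balloons so the average explosion point is pump 64.
    explosion_points = [rng.randint(1, MAX_PUMPS) for _ in range(N_BALLOONS)]
    print(run_bart(explosion_points, get_wager=lambda trial: 40))
```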


2. How, By Whom, And For What Purpose Is The Information To Be Used. Indicate how, by whom, and for what purpose the information is to be used. Except for a new collection, indicate the actual use the agency has made of the information received from the current collection.

BY WHOM: The University of Iowa DSRI will conduct this new data collection effort under a task order on an Indefinite Delivery/Indefinite Quantity (IDIQ) contract with NHTSA. This information collection will support the effort to learn how humans interact with driving automation systems (DAS), particularly in the context of mixed traffic interactions where some vehicles have DAS and others do not. Within mixed traffic environments, vehicles may also have DAS that perform more or less of the driving task (i.e., different levels of automation) and come with their own sets of expectations and limitations. Such mixed traffic environments will soon be common on roads, as manufacturers deploy new vehicles with a range of automated capabilities. To date, research has focused on understanding how drivers interact with technology in their vehicle, but we know little about how drivers interact with other vehicles and how technology impacts those interactions. This project will generate data to inform NHTSA’s understanding of driver behavior and potential safety considerations in these mixed traffic environments and is not immediately intended to inform regulations or policy. Study participation will be voluntary and solicited via email advertisements. Data will be collected and analyzed by the research team at the University of Iowa DSRI.

This research will be conducted over three studies and comprises nine components: three variations of the informed consent (one per study), four questionnaires, a driving behavior assessment (training and practice, an in-drive questionnaire, and collection of driving performance), and one computerized assessment.

Eligibility Questionnaire (NHTSA Form 1742)

PURPOSE: For determining individuals’ willingness to participate in the study and individuals’ suitability for study participation based on driving experience and history, demographics (age, sex), general health, and ability to safely drive in the simulator without health concerns. The same criteria apply across all three studies.

HOW: Interested individuals can complete the Eligibility Questionnaire online via a REDCap link (Research Electronic Data Capture; https://www.project-redcap.org/). This link is included on all recruitment materials (emails or advertisements) and websites. Form display and branching logic will ensure respondents see the minimal number of questions required to determine eligibility by ending the questionnaire early at the point where a criterion is not met. Respondents will provide contact information at the end of the form only if they appear to meet study criteria. After an email address is provided, the respondent will receive an automatic reply thanking them for their response, with a copy of the appropriate informed consent document attached for their review. Prospective respondents will be identified using the DSRI registry. An IRB-approved email will be sent to potential participants based on a registry query of age. Potential participants also will have access to websites that can be used to show their interest: drivingstudies.com and dsri.uiowa.edu. Both sites will have an IRB-approved website script that matches the language of the recruitment email and will include the link to the online REDCap eligibility questionnaire. No information is collected on the websites. The approved email also will be sent to UI faculty, staff, and students via the UI mass email system. Past experience has demonstrated that as recruitment begins, those contacted via the DSRI registry email or UI mass email tend to forward this email to other individuals or tell others about the study through word of mouth. Members of the DSRI registry primarily reside in Iowa and Illinois, though there are individuals from as far away as California. Being affiliated with a university also gives DSRI access to international students, staff, and faculty, which will help provide diversity in race, ethnicity, and cultural perspectives in the study sample.

Response data will be reviewed by the research team, and a final determination will be made as to whether the individual meets the initial study participation criteria. Those meeting the criteria will be scheduled for a study appointment.
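To illustrate the early-termination branching described above (a hypothetical sketch only: the actual screening uses REDCap's built-in display and branching logic, and the criteria, question wording, and function names below are placeholders rather than the study's actual screening items):

```python
def screen_respondent(ask):
    """Walk through screening blocks, ending early at the first failed criterion.

    `ask` poses one question and returns the response as a string.
    Returns True if the respondent appears eligible (and should then be asked
    for contact information), or False if the questionnaire ends early.
    """
    # Block 1: basic eligibility (placeholder criteria).
    if int(ask("What is your age?")) < 18:
        return False
    if ask("Do you have a valid driver's license? (yes/no)") != "yes":
        return False

    # Block 2: health and simulator-safety screening (placeholder criteria).
    if ask("Are you prone to severe motion sickness? (yes/no)") == "yes":
        return False

    # Only respondents who pass every block see the contact-information page.
    return True


if __name__ == "__main__":
    responses = iter(["25", "yes", "no"])
    eligible = screen_respondent(lambda question: next(responses))
    print("Collect contact information" if eligible else "Questionnaire ends early")
```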

Informed Consent (NHTSA Forms 1743, 1744, 1745)

PURPOSE: Ensures individuals have an informed choice about whether to participate in each of the three studies. The form differs slightly for each study.

HOW: After submitting a response to the Eligibility Questionnaire, participants are emailed a copy of the informed consent for review before enrollment. At the study visit, participants will be escorted to a private room upon arrival at DSRI for the consent process. Each version of the consent is set up electronically via the REDCap e-consent platform. Participants will undergo the consent process with a member of the research team using a desktop computer or tablet. After electronically signing the consent document, the participant will be able to download or be emailed a copy of the signed form.

Pre-Drive Questionnaire (NHTSA Form 1746)

PURPOSE: To measure participants’ understanding (i.e., mental model) of DAS and their pre-drive trust in the DAS. Collecting these data before and after the drives will let us measure how exposure to the DAS impacts understanding and trust. Demographic information (e.g., age, sex, gender, race, ethnicity) will also be collected. This pre-drive questionnaire will remain the same across all three studies.

HOW: The pre-drive questionnaire will be administered via the REDCap platform using a tablet or desktop computer in a private room prior to entering the simulator. Participants’ responses to scale-based questions will be collated for analysis, with descriptive summaries and inferential statistics for pre-drive and post-drive questionnaires.

Wellness Questionnaire (NHTSA Form 1747)

PURPOSE: To ensure subjects are feeling well enough to drive by screening out those with a propensity for simulator sickness. This questionnaire will be administered pre-drive (to obtain baseline ratings), after the familiarization drive, and after the study drive. This questionnaire will remain the same across all three studies.

HOW: The wellness questionnaire will be administered via the REDCap platform using a tablet or desktop computer in a private room prior to entering the simulator and on a tablet in the simulator after the familiarization and study drives. Pre-defined score thresholds must be met for participants to continue to the study drive after the familiarization drive.

Driving Behavior Assessment (Pre-Drive PowerPoint Training, Familiarization Drive, Study Drive with In-Drive Questionnaire (NHTSA Form 1748))

PURPOSE: Before the study drive, participants will complete training via PowerPoint presentation to introduce them to the simulator, the familiarization and study drive procedures, the DAS, and the non-driving email task. Participants are also reminded that the system being tested is our interpretation of a system that may or may not be available on roadways currently. The familiarization drive is used to acclimate the participant to the driving simulator and screen for simulator sickness while training the participant on how to use the driving automation system. The study drive is used to gather driving performance information for the purpose of assessing how drivers interact with automated systems and the impact of these interactions on safety. The in-drive questionnaire is used to understand drivers’ trust in the DAS at various points during the study drive. These questions will remain the same across all three studies.

HOW: Study participants will view the presentation on a desktop computer in a private room. A research team member will be available to answer questions and demonstrate use of the email task. Study participants will complete this familiarization drive in either the NADS-1 or NADS-2 simulator, as appropriate for study and assigned condition. The drive is largely the same for all participants, though the training on the driving automation system will differ by study and condition. Study participants will complete the 35-minute study drive in either the NADS-1 or NADS-2 simulator, as appropriate for study and assigned condition. The drive will differ based on study and assigned condition, though overall length and general experience will be the same across participants. The in-drive questionnaire will either be administered on a tablet via the REDCap platform or built into the simulator’s in-vehicle touch console. If this proves infeasible, the questionnaire will be administered verbally by a researcher. In Study 1, this information is collected after the events where the pair of research participants interact with one another. In Studies 2 & 3, this information is collected after the four events where the behavior of the automation varies across the different conditions. The information will be used to measure trust in the DAS following specific events. Participants’ responses to scale-based questions will be collated for analysis, with descriptive summaries and inferential statistics.

Post-Drive Questionnaire (NHTSA Form 1749)

PURPOSE: To measure participants’ understanding (i.e., mental model) of DAS and their post-drive trust in the DAS. Collecting these data before and after the drives will let us measure how exposure to the DAS impacts understanding and trust. General risk-taking behavior while driving will also be collected. This post-drive questionnaire will remain the same across all three studies. After completion of this questionnaire, participants are debriefed about their specific system and capabilities and reminded this may not be reflective of currently available systems.

HOW: The post-drive questionnaire will be administered via the REDCap platform using a tablet or desktop computer in a private room after exiting the simulator. Participants’ responses to scale-based questions will be combined for analysis. Responses to open-ended questions will be qualitatively summarized and described in the technical report without reference to individual participants. The debrief component is a verbal conversation between the participant and a research team member.


3. Extent of Automated Information Collection. Describe whether, and to what extent, the collection of information involves the use of automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g. permitting electronic submission of responses, and the basis for the decision for adopting this means of collection. Also, describe any consideration of using information technology to reduce burden.

The DSRI subject registry, an electronic database of approximately 7,000 individuals who have participated in previous studies or expressed interest in participating, will be used to recruit participants. Participants will be recruited by email but can also call a research team member if preferred.

Questionnaire data will be collected electronically via REDCap (https://redcap.icts.uiowa.edu/redcap/). The eligibility questionnaire is completed online via a REDCap link emailed to potential participants and can be completed at the respondent’s convenience; the respondent can leave and return to the questionnaire later if something interrupts them. Additionally, branching and display logic used in the questionnaire reduce the need for respondents to skip questions on their own when those questions do not apply. Online eligibility screening was adopted to allow flexibility for respondents and has been very well received in previous studies conducted at DSRI. The BART will be completed electronically on a computer.

Video and engineering data from the simulator are recorded and backed up automatically. Computer programs (MATLAB and R) will be used to reduce simulator data to summary measures (e.g., means, standard deviations), perform statistical analyses, and generate results figures.
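As a minimal illustration of this kind of data reduction (the actual reduction will be performed in MATLAB and R; the column names and file layout below are assumptions for the sketch, not the simulator's actual output format):

```python
import pandas as pd

# Hypothetical column names standing in for the simulator's per-frame output.
COLUMNS_OF_INTEREST = [
    "speed_mps",
    "brake_input",
    "steering_wheel_angle_deg",
    "lane_offset_m",
    "headway_distance_m",
]


def reduce_drive(log_path):
    """Reduce one participant's per-frame simulator log to summary measures."""
    frames = pd.read_csv(log_path)
    # Per-variable mean and standard deviation across the drive.
    return frames[COLUMNS_OF_INTEREST].agg(["mean", "std"])


if __name__ == "__main__":
    # Hypothetical file name for one participant's study drive.
    print(reduce_drive("participant_001_study_drive.csv"))
```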

4. Describe Efforts to Identify Duplication. Describe efforts to identify duplication. Show specifically why any similar information already available cannot be used or modified for use for the purposes described in item 2 above.

NHTSA has not conducted or sponsored a similar study of drivers’ interactions with driving automation systems in mixed traffic environments. We are not aware of any research of this nature having been conducted with respect to connected simulation and mixed traffic environments. This research will add to the state of knowledge and is not immediately intended to inform regulations or policy.



The experiments will provide information that does not currently exist regarding how drivers interact with driving automation systems in mixed traffic environments. Data collection using a connected simulator environment, where two research participants interact in the same virtual space, is necessary to study these interactions in a safe and controlled manner, and the information collected in this project cannot be obtained through other methods.

5. Efforts to Minimize the Burden on Small Businesses. If the collection of information impacts small businesses or other small entities, describe any methods used to minimize burden.

This collection of information will not affect small businesses or other small entities. Respondents are individuals who meet certain criteria and who volunteer for the study.

6. Impact of Less Frequent Collection Of Information. Describe the consequence to federal program or policy activities if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles to reducing burden.

The information collection covered herein will be collected once only.

As the agency responsible for prescribing and maintaining the standards for vehicle safety in the United States, NHTSA constantly seeks objective data on which to base decisions about how best to protect the road-traveling public and minimize deaths and injuries associated with car crashes. Timely, accurate information on driver behavior and performance, considering modern-day vehicle equipment and driver habits, is essential for NHTSA to determine the most appropriate recommendations and requirements for vehicle equipment and driving safety. With regard to driving automation systems, their rapid deployment across passenger vehicle fleets makes it critical for NHTSA to understand the safety impacts of driver interactions with the systems in mixed traffic environments.

7. Special Circumstances. Explain any special circumstances that would cause an information collection to be conducted in a manner:

  • Requiring respondents to report information to the agency more often than quarterly;

  • requiring respondents to prepare a written response to a collection of information in fewer than 30 days after receipt of it;

  • requiring respondents to submit more than an original and two copies of any document;

  • requiring respondents to retain records, other than health, medical, government contract, grant-in-aid, or tax records for more than three years;

  • in connection with a statistical survey, that is not designed to produce valid and reliable results that can be generalized to the universe of study;

  • requiring the use of a statistical data classification that has not been reviewed and approved by OMB;

  • that includes a pledge of confidentiality that is not supported by authority established in statute or regulation, that is not supported by disclosure and data security policies that are consistent with the pledge, or which unnecessarily impedes sharing of data with other agencies for compatible confidential use; or

  • requiring respondents to submit proprietary trade secrets, or other confidential information, unless the agency can demonstrate that it has instituted procedures to protect the information's confidentiality to the extent permitted by law.

If one or more of the above applies, please explain in complete detail.

There are no special circumstances that require the collection to be conducted in a manner inconsistent with 5 CFR 1320.5(d)(2).

8. Compliance With 5 CFR 1320.8(d). If applicable, provide a copy and identify the date and page number of publication in the federal register of the agency's notice, required by 5 CFR 1320.8(d), soliciting comments on the information collection prior to submission to OMB. Summarize public comments received in response to that notice and describe actions taken by the agency in response to those comments. Specifically address comments received on cost and hour burden. Describe efforts to consult with persons outside the agency to obtain their views.

In compliance with 5 CFR 1320.8(d), NHTSA published a notice in the Federal Register with a 60-day public comment period to announce this proposed information collection on December 12, 2023 (88 FR 86202).

During the public comment period for the 60-day notice, NHTSA received detailed comments from the Insurance Institute for Highway Safety (IIHS). NHTSA is grateful for the thorough and detailed review of the 60-day notice and the time and attention IIHS has given to ensure appropriate study parameters have been and will be taken into account. IIHS did not disagree with the sampling methodology or size, the design of the studies or the order in which they will be carried out, or the time or costs associated with the collection; therefore, no changes will be made to the study design or sampling methodology. The comments do not affect the burden estimates, and therefore no changes will be made to the burden calculations.

Given the thorough review from IIHS, we have addressed each comment in turn:

Comment: The Insurance Institute for Highway Safety (IIHS) thanks the National Highway Traffic Safety Administration (NHTSA) for the opportunity to comment on this study series proposal, Human Interaction With Driving Automation Systems. Machines behave differently than human drivers, even when the driving automation still requires a human to be involved. With most automakers offering at least partially automated systems in many models of their vehicle lineups, the relevance of these research questions is only growing. It is important to understand how people interact not only with the technologies in their own vehicles, but also with other vehicles equipped with driving automation systems. Response: We agree with IIHS about the importance of understanding human-automation interactions in mixed traffic contexts. We are glad for confirmation that this project targets an important emerging topic, and we thank IIHS for their thoughtful and constructive feedback on this project design.

Comment: We commend NHTSA’s sampling approach to target licensed members of the public from a wide age range in order to make generalizable conclusions from the data. While it is reasonable to expect age and overall driving-experience (e.g., in terms of years licensed) effects, there is an additional participant factor missing from the sample characteristics listed. It is unclear what levels of driving automation the study series will investigate; however, IIHS research has shown that having experience with partial driving automation (Level 2) affects driving behavior while using the technology (Mueller et al., 2022). Although Level 3 driving automation is exceedingly rare, some vehicles for sale today are nevertheless equipped with it in, for example, the United States and Japan. Level 4 driving automation is not available for private consumer purchase, but it is available in ride-hailing fleets (e.g., Waymo ride-hailing services in Phoenix, AZ, and San Francisco, CA). Therefore, we recommend factoring experience, both as a driver and as a passenger where applicable, with each level of driving automation tested in the sampling approach and/or data analysis. Response: We thank IIHS for this suggestion and agree about the importance of considering experience with automation in this project. We plan to collect information regarding participants’ experience with automation, their understanding (i.e., mental model) of automation, and their trust in vehicle automation technologies. While we do not plan to include experience as a variable in our study design, we will be able to use the information collected to gain insight into differences in human-automation interaction based on prior experience and understanding.

Comment: The research proposal indicates that vehicle kinematics will be measured, which are fundamental for gauging participants’ ability to control the vehicle in these scenarios; however, there are other behaviors that reflect higher levels of cognition that ought to be considered too. Reactive and proactive changes in behavior around object detection, trip planning, and navigation updating are important safety-related indicators of how people interact with their vehicles. Experimental manipulation of the simulated driving scenarios could be used to objectively evaluate different levels of situational awareness of the surrounding traffic and wayfinding ability and accuracy. Response: We thank IIHS for this comment. We agree that variables other than vehicle kinematics should be considered in measuring driver behavior in the test scenarios. To that end, we will collect information about driver glance behavior and visual attention from eye tracking in the simulator. We will also collect video data of both the driver and driving environment, such that we can code and understand how drivers respond to events involving vehicle automation. We will incorporate this feedback to also consider proactive changes in behavior, such as environmental scanning and latent hazard detection. We agree that trip planning and navigation may also yield valuable information from human-automation interactions, but these tasks are more difficult to replicate in the simulator and fall outside the scope of this project.

Comment: We recommend that NHTSA also measure behind-the-wheel behavior, such as gaze and hand activity, because where the driver is looking and what their hands are doing will affect other behavior related to vehicle control. Moreover, secondary activity, both driving-related and non-driving-related, is a normal phenomenon in driving with and without automation support. Even if these studies exclude non-driving-related activities, participants will still have to interact with the interfaces to operate and understand the driving automation. These driving-related secondary activities include glances to and physical interaction with interior displays (e.g., instrument panels) and operating steering wheel controls to activate different types of system support. These activities are considered secondary because they are not involved in the immediate physical control of the vehicle, such as steering and accelerator/brake pedal use. In some cases, driving-related secondary activities may affect a driver’s ability to control the vehicle if they occupy the driver’s attention for too long. Their inclusion in the set of dependent variables will be important for understanding differences between participants and any changes in vehicle-kinematic behavior in the different driving scenarios. Response: We thank IIHS for this feedback and we completely agree with the suggestion. Our plan is to examine gaze, hand, and foot behavior during the study events. Previous work shows the importance of understanding (dis)engagement beyond looking at system status or takeover time. In this project, we plan to include different combinations of driving-related and non-driving-related secondary tasks (NDRTs). As IIHS suggests, we plan to examine driver interactions with automated vehicle interfaces, particularly in windows where automation encounters edge case or challenging situations in the study drives. The second and third studies will also include NDRTs and our analyses will consider outcomes such as attentional shifts between NDRTs and driving (or monitoring) as an outcome of different automated vehicle behaviors.

Comment: Related to this, we recommend paying close attention to the driver management strategies incorporated in the design of the simulated vehicle, if they apply depending on the types of driving automation that will be simulated in these studies. For systems that require the driver to be involved in the operation of the vehicle (Levels 0 to 3), design factors around driver monitoring, attention reminders, and last-resort countermeasures should be considered as they will shape the observable behind-the-wheel behavior, physical vehicle control, and interactions with the simulated vehicle’s interfaces. Response: We completely agree with this comment about the importance of considering driver management strategies in the design of the studies. To the extent possible, we will include management strategies that are representative of production or near-production systems. We will also include methods to set appropriate levels of expectation in our sample of drivers about the management strategy being used and the expectations for both the driver and the automated system.

Comment: Furthermore, the design philosophies currently behind Level 0 to 3 systems in production vary considerably among manufacturers so as to produce unique relationships between their customers and the technologies in their vehicles. As such, no two systems of a given level of driving automation should be considered the same. Even when their lane keeping and speed and distance management designs are similar, systems under the same driving automation domain can still be designed with fundamentally different driver-vehicle interactions. For instance, some systems require that the driver’s hands be on the wheel at all times whereas others permit periods of hands-free operation, and some systems are designed with cooperative lane-centering support that requires the driver to stay regularly involved in the steering of the vehicle. These factors may produce confounds in the data if they are not considered in the design of the simulated systems under test. Response: We agree with IIHS that the design of currently deployed automated systems varies considerably, and these design differences almost certainly have an impact on driver interactions. Our approach for the project will be to create systems in the simulator that are strong representations of some of the available technologies, understanding that other system designs could yield different driver-system interactions. Throughout our reporting on the project, we will clearly specify what design(s) our simulated system intends to replicate, what differences may exist, and the differences that exist from other systems not included in the simulator studies that are currently classified as within the same levels of automation. We will make clear what conclusions can and cannot be drawn about system design characteristics and be careful to avoid making general conclusions about a level of automation or type of automated system when there is variability in design that cannot be fully captured within the scope of the project.

Comment: Programming how the simulated vehicle responds to different traffic conflicts or ambiguous driving scenarios in the study series will have ramifications on participant behavior. The realism of disruptions in system performance matters, both in terms of a sudden cessation of support as well as inappropriate system behavior—the specifics around which depend on the simulated level of driving automation and the driving scenarios tested. Assuming these studies will simulate driving automation that has been implemented in the registered vehicle fleet or the commercially available ride-hailing fleet, if care is not taken to ensure those disruptions are realistic and conform with what is technically possible and likely, using what is known based on current implementations, they may affect participant behavior in ways that are outside the scope of the research and thus limit the generalizability of the findings. Response: We very much agree with the point that the situations and automated vehicle behaviors studied in this project need to match real-world situations and systems as closely as possible. We will use all available information to design the study scenarios to be representative of situations automation might encounter and might reasonably fall within a system’s operational design domain. We will review information about available automated vehicle systems and make sure that the design of our study is consistent with the design of the systems.

NHTSA published a notice in the Federal Register on June 11, 2024 (89 FR 49268) with a 30-day public comment period to announce that NHTSA intended to forward the request for the proposed information collection to OMB.


9. Payment Or Gifts To Respondents. Explain any decision to provide any payment or gift to respondents, other than remuneration of contractors or grantees.

For the Eligibility Questionnaire, no payment or gift will be provided to respondents.

For the remaining collections and procedures, NHTSA plans to provide monetary payment at a rate of $36 per hour for study participation. Such compensation is consistent with normal experimental practice to compensate participants for their time and encourage participation in research studies. According to the Midwest Information Office of the Bureau of Labor Statistics (https://www.bls.gov/regions/midwest/news-release/occupationalemploymentandwages_iowacity.htm), the local average hourly rate across occupations is roughly $27, and the highest hourly rate is roughly $46.50. We determined that $36 per hour is appropriate to capture a wider range of participants: it is competitive with hourly wages found in a college town, which should help recruit non-college-aged individuals as well as professionals who earn more than the average wage, without being so high as to be coercive. This rate of pay is comparable to the current national average hourly rate and is also consistent with our experience recruiting study participants. The amount of compensation offered covers typical costs incurred, such as travel. Our facility does not charge for parking, and the nearest bus route is fare-free.

The compensation rate was reviewed and approved by an independent Institutional Review Board.

10. Assurance of Confidentiality. Describe any assurance of confidentiality provided to respondents and the basis for the assurance in statute, regulation, or agency policy. If the collection requires a system of records notice (SORN) or privacy impact assessment (PIA), those should be cited and described here.

All data will be treated with sensitivity and security considerations commensurate with its level of confidential content. The University of Iowa Institutional Review Board (IRB) will review all instruments, informed consent materials, and procedures to ensure that the rights of individuals participating in the study are safeguarded before recruitment for the study can begin. The IRB is a specially constituted review body established to protect the welfare of human subjects recruited to participate in research.

As stated in the informed consent forms (NHTSA Forms 1743, 1744, 1745), no individual results or personal information will be published. Published documents will provide only summary statistics that cannot be used to identify an individual. Links between individual names and study numbers will be securely stored according to the provisions of the University of Iowa Institutional Review Board. This identifying information will not be transmitted to NHTSA.

There is an adjudicated Privacy Threshold Assessment (PTA) for this project.

11. Justification for Collection of Sensitive Information. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private. This justification should include the reasons why the agency considers the questions necessary, the specific uses to be made of the information, the explanation to be given to persons from whom the information is requested, and any steps to be taken to obtain their consent.

All recruitment screening questions are prefaced with a statement clarifying that responding is voluntary and that information will only be used for the purposes of study recruitment. We inquire about general health history as a safety precaution to screen for conditions that could prove dangerous to the individual or a research staff member. The responses to these health history questions are not recorded or reported on in the final dataset but are used as a screening tool.

The Eligibility Questionnaire involves questions that some individuals may deem sensitive but that are medically necessary. These questions are used to ensure that individuals meet study eligibility requirements prior to their enrollment. Branching and display logic used in the questionnaire screen out individuals in blocks so that the fewest possible sensitive questions are asked to determine eligibility. Age, sex, and gender information will be collected to assign participants to the experimental conditions. While efforts will be made to balance the assignment of participants by age and sex, there may be some imbalances in order to be inclusive of those who do not identify on the binary and as a result of differences in how sex may be identified on drivers’ licenses across States. Some screening questions address topics that are commonly considered private, such as general health information that may affect driving ability or the ability to operate the simulator.

Health-related questions are posed in the Eligibility Questionnaire to ensure that the drivers can be considered of average driving ability, are healthy enough to safely participate in an experimental protocol, are not impaired in any way, and have no episodic health conditions that could manifest during their participation (e.g., seizure). Individuals will be asked whether they are taking any medications that may cause drowsiness or have been taken for less than 6 months (since the effect on the individual may not yet be known). Health information will be collected with yes/no questions. For responses of “yes”, follow-up questions may be asked if we are only screening out for a certain time frame or set of complications. The information for all respondents will be kept through recruitment until data collection is complete. The information is deleted once data collection is confirmed as complete. For individuals who do not meet criteria, name and contact information is not collected, so sensitive information is not tied to an individual. Data are collected via REDCap and securely stored on servers hosted by the University of Iowa Institute for Clinical & Translational Science (ICTS), which hosts the platform for the University of Iowa.

12. Estimate of Burden Hours for Information Requested. Provide estimates of the hour burden of the collection of information. The statement should:

  • Indicate the number of respondents, frequency of response, annual hour burden, and an explanation of how the burden was estimated. Unless directed to do so, agencies should not conduct special surveys to obtain information on which to base hour burden estimates. Consultation with a sample (fewer than 10) of potential respondents is desirable. If the hour burden on respondents is expected to vary widely because of differences in activity, size, or complexity, show the range of estimated hour burden, and explain the reasons for the variance. Generally, estimates should not include burden hours for customary and usual business practices.

  • If this request for approval covers more than one form, provide separate hour burden estimates for each form and aggregate the hour burdens.

  • Provide estimates of annualized cost to respondents for the hour burdens for collections of information, identifying and using appropriate wage rate categories. The cost of contracting out or paying outside parties for information collection activities should not be included here. Instead, this cost should be included in item 13.

The burden estimate includes time for driving sessions, including time for simulator loading and positioning, participant training, and the time for reviewing instructions. The anticipated number of individuals initiating the Eligibility Questionnaire is 700; the anticipated number completing the Pre-Drive Questionnaire, Wellness Questionnaire, Post-Drive Questionnaire, and Driving Behavior Assessment is 300. For the calculations, all figures are rounded up to yield maximum burden estimates.

Completion of the Eligibility Questionnaire is estimated to take approximately 15 minutes, with 700 individuals expected to start the questionnaire and 400 respondents expected to reach the end and thus be eligible for participation. We estimate that the 300 respondents who do not complete the Eligibility Questionnaire will spend approximately 5 minutes before they stop. Averaging across the 700 respondents, the burden for the Eligibility Questionnaire is 10.7 minutes, rounded to 11 minutes. The 400 who complete the questionnaire will be further screened by researchers to select the 300 respondents for the study.
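The average follows directly from these figures:

\[
\frac{(400 \times 15\ \text{min}) + (300 \times 5\ \text{min})}{700} = \frac{7{,}500\ \text{min}}{700} \approx 10.7\ \text{min}
\]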

Three hundred respondents should be sufficient to obtain the necessary number of participants within each of the three studies to detect differences between study conditions and account for attrition due to issues such as simulator sickness, while still achieving the statistical power necessary (a minimum of 244 respondents is necessary to achieve the power). In Study 1, because participants complete the study in pairs, if one member of the pair is lost due to simulator sickness or a technical issue, both members of the pair are lost. Study 1 will recruit 180 participants (90 pairs) so that 128 participants (64 pairs) complete the study. Study 2 will recruit 60 participants so that 48 complete the study, and Study 3 will recruit 60 participants so that 48 complete the study. Burden calculations assume no attrition throughout the visit to allow for the highest possible burden, since we cannot know when participants may discontinue the protocol.



Participation in Study 1, Study 2, and Study 3 each involves 1 study appointment. Each study visit includes:

  • The Informed Consent Document for the respective study. While each study has a separate informed consent document, each one is expected to take 20 minutes for completion. One hundred eighty respondents will complete the consent form for Study 1, 60 for Study 2, and 60 for Study 3.

  • The Pre-Drive Questionnaire is expected to take 15 minutes. All 300 respondents across the three studies will take this pre-drive questionnaire once.

  • The Wellness Questionnaire is expected to take 5 minutes to complete. Each of the 300 respondents across the three studies will complete this questionnaire three times.

  • The Driving Behavior Assessment includes the pre-drive PowerPoint training, familiarization drive, study drive, and an in-drive questionnaire. The pre-drive PowerPoint training, familiarization drive, and study drive require no collection of information from the respondent but are included in the time burden. The PowerPoint training, familiarization drive, and study drive are expected to take the same amount of time across the three studies. The respondents in each of the three studies will complete an In-Drive Questionnaire, which is a collection of information and is included as a form (NHTSA Form 1748). The In-Drive Questionnaire is consistent across the three studies and thus takes the same amount of time to complete. For each of the 300 respondents, the Driving Behavior Assessment is expected to take 80 minutes to complete.

  • The Post-Drive Questionnaire is estimated to take 20 minutes and will be administered once to each of the 300 respondents across the three studies.

  • The Balloon Analogue Risk Task (BART) is expected to take five minutes and all 300 respondents will complete this task once.

To calculate the opportunity cost associated with the forms and other relevant activities necessary for this collection of new information, NHTSA used the average hourly earnings for all employees on private nonfarm payrolls. The Bureau of Labor Statistics (BLS) estimates that the average hourly wage for this group is $33.82, which serves as the opportunity cost per hour. NHTSA estimates the total opportunity cost associated with the 903 burden hours to be $30,551. Annual burden cost is estimated to be $10,181 and annual burden hours are estimated to be 301. There may be a slight variation in the comparison of total to annual burden over the three years due to rounding. The annual burden figures will be those represented in ROCIS.
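Each per-response cost in the table below is the $33.82 hourly rate prorated by the time per response, and the annual figures are approximately one third of the totals because the collection spans three years. For example:

\[
\$33.82 \times \tfrac{11}{60} \approx \$6.20, \qquad \$33.82 \times \tfrac{80}{60} \approx \$45.09, \qquad \tfrac{903\ \text{hours}}{3} = 301\ \text{hours per year}
\]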


NHTSA Form No. | Information Collection | Number of Respondents (Total/Annual) | Time per Response (min) | Cost per Response* | Frequency of Response | Burden Hours (Total/Annual) | Burden Cost (dollars) (Total/Annual)
1742 | Eligibility Questionnaire | 700/233 | 11 | $6.20 | 1 | 128/43 | $4,340/$1,445
1743 | Informed Consent, Study 1 | 180/60 | 20 | $11.27 | 1 | 60/20 | $2,029/$676
1744 | Informed Consent, Study 2 | 60/20 | 20 | $11.27 | 1 | 20/7 | $676/$225
1745 | Informed Consent, Study 3 | 60/20 | 20 | $11.27 | 1 | 20/7 | $676/$225
1746 | Pre-Drive Questionnaire | 300/100 | 15 | $8.46 | 1 | 75/25 | $2,538/$846
1747 | Wellness Questionnaire | 300/100 | 5 | $2.82 | 3 | 75/25 | $2,538/$846
1748 | Driving Behavior Assessment (Pre-Drive PowerPoint Training, Familiarization Drive, Study Drive with In-Drive Questionnaire) | 300/100 | 80 | $45.09 | 1 | 400/133 | $13,527/$4,509
1749 | Post-Drive Questionnaire | 300/100 | 20 | $11.27 | 1 | 100/33 | $3,381/$1,127
– | Balloon Analogue Risk Task | 300/100 | 5 | $2.82 | 1 | 25/8 | $846/$282
 | Total Burden/Annual Burden | | | | | 903/301 | $30,551/$10,181

* See Table B-3 Average hourly and weekly earnings of all employees on private nonfarm payrolls by industry sector, seasonally adjusted, for August 2023, available at https://www.bls.gov/news.release/empsit.t19.htm (accessed October 3, 2023). See Table 1. Employer Costs for Employee Compensation by ownership (June 2023), available at https://www.bls.gov/news.release/ecec.t01.htm (accessed October 3, 2023).

13. Estimate Of The Total Annual Cost Burden. Provide an estimate of the total annual cost burden to respondents or record keepers resulting from the collection of information.

  • The cost estimates should be split into two components: (a) a total capital and start-up cost component (annualized over its expected useful life); and (b) a total operation and maintenance and purchase of services component. The estimates should take into account costs associated with generating, maintaining, and disclosing or providing the information. Include descriptions of methods used to estimate major costs factors including system and technology acquisition, expected useful life of capital equipment, the discount rate(s), and the time period over which costs will be incurred. Capital and start-up costs include, among other items, preparations for collecting information such as purchasing computers and software; monitoring, sampling, drilling and testing equipment; and record storage facilities.

  • If cost estimates are expected to vary widely, agencies should present ranges of cost burdens and explain the reasons for the variance. The cost of purchasing or contracting out information collection services should be a part of this cost burden estimate. In developing cost burden estimates, agencies may consult with a sample of respondents (fewer than 10), utilize the 60-day pre-OMB submission public comment process and use existing economic or regulatory impact analysis associated with the rulemaking containing the information collection, as appropriate.

Generally, estimates should not include purchases of equipment or services, or portions thereof, made (1) prior to October 1, 1995, (2) to achieve regulatory compliance with requirements not associated with the information collection, (3) for reasons other than to provide information or keep records for the government, or (4) as part of customary and usual business or private practices.

There will be no start-up or record-keeping costs to subjects to obtain these data. Participation is voluntary, and no one will be required to participate. Participants will be compensated for their time and effort in completing the study procedures, so they are not expected to incur any net cost for participating.

The only cost burden respondents may incur is the cost of travel to and from the study location, which is estimated not to exceed approximately $39.30 (based on the 2023 standard mileage rate for business-related driving of $0.655 per mile and a round-trip distance of 60 miles). These costs are minimal and are expected to be offset by the monetary compensation provided to all research participants who enroll.
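
The mileage figure can be reproduced as follows (a minimal illustrative check; the 60-mile round trip is the assumed distance stated above):

    # Travel-cost check: 2023 standard business mileage rate times the assumed 60-mile round trip.
    mileage_rate_2023 = 0.655  # dollars per mile
    round_trip_miles = 60
    print(f"${mileage_rate_2023 * round_trip_miles:.2f}")  # $39.30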

14. Estimates Of Costs To The Federal Government. Provide estimates of annualized cost to the federal government. Also, provide a description of the method used to estimate costs, which should include quantification of hours, operational expenses such as equipment, overhead, printing, and support staff, and any other expense that would not have been incurred without this collection of information.

The total estimated cost to the federal government is $1,650,521.63, and the annualized cost to the federal government is $550,173.88. This cost comprises the following:

The estimated cost in terms of government time is approximately 480 hours for the Contracting Officer's Representative (COR) and 24 hours for the supervisor. Using an example COR pay rate of GS-14, Step 1 and an example supervisor pay rate of GS-15, Step 1 (Washington-Baltimore-Arlington locality, 2024 pay rates), NHTSA estimates the cost associated with those hours to be $33,944.64 ($66.79 x 480 hours = $32,059.20; $78.56 x 24 hours = $1,885.44; $32,059.20 + $1,885.44 = $33,944.64).

The estimated costs incurred by the Federal Government for the administration and technical support of this information collection are based on the number of minutes needed to administer and process each question set and the number of respondents. These costs total $1,616,576.99, which includes the value of participant compensation. For each study visit, NHTSA plans to provide monetary payment at a rate of $36 per hour of study participation. Participants are expected to spend approximately 2.5 hours at DSRI; with an anticipated N of 300 participants, this comes to a total participant compensation cost of $27,000 (300 x 2.5 hours x $36/hr).

Direct Labor | Rate ($) | Hours | Amount ($)
Associate Researchers | 85.61 | 1352 | 115,744.72
Assistant Researchers | 48.51 | 9596 | 465,501.96
Simulator Operators | 55.83 | 2496 | 139,351.68
Program Manager | 127.54 | 64 | 8,162.56
Total Direct Labor | | 13508 | 728,760.92
Participant Compensation | | | 27,000.00
Computer Charges | | | 46,208.95
Project Specific Supplies | | | 7,287.70
Total Direct Costs | | | 80,496.65
Indirect Costs | | | 437,235.26
Simulator Usage Fees | | | 370,084.16
TOTAL COSTS | | | 1,616,576.99
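
As an illustrative cross-check of the federal cost estimates in this section (a sketch using only the figures reported above; variable names are illustrative, and the snippet is not part of the information collection), the following Python sketch reproduces the government staff time cost, participant compensation, the contract cost table, and the total and annualized costs.

    # Government staff time: GS-14 Step 1 COR and GS-15 Step 1 supervisor hourly rates (2024, DC-area locality).
    gov_time_cost = 66.79 * 480 + 78.56 * 24                  # 32,059.20 + 1,885.44 = 33,944.64

    # Participant compensation: 300 participants x 2.5 hours x $36 per hour.
    participant_compensation = 300 * 2.5 * 36                 # 27,000.00

    # Direct labor from the table above: (hourly rate, hours).
    direct_labor = [(85.61, 1352), (48.51, 9596), (55.83, 2496), (127.54, 64)]
    total_direct_labor = sum(rate * hours for rate, hours in direct_labor)   # 728,760.92

    # Remaining contract cost lines from the table above.
    other_direct_costs = participant_compensation + 46_208.95 + 7_287.70     # 80,496.65 ("Total Direct Costs" row)
    indirect_costs = 437_235.26
    simulator_usage_fees = 370_084.16

    contract_total = total_direct_labor + other_direct_costs + indirect_costs + simulator_usage_fees
    total_cost_to_government = gov_time_cost + contract_total

    print(round(contract_total, 2))                # 1616576.99 -- the TOTAL COSTS row
    print(round(total_cost_to_government, 2))      # 1650521.63 -- total cost to the federal government
    print(round(total_cost_to_government / 3, 2))  # 550173.88 -- annualized over three years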





15. Explanation Of The Program Change Or Adjustments. Explain the reasons for any program changes or adjustments reported on the burden worksheet. If this is a new collection, the program change will be entire burden cost and number of burden hours reported in response to questions 12 and 13. If this is a renewal or reinstatement, the change is the difference between the new burden estimates and the burden estimates from the last OMB approval.

The collection of this information is associated with a new project. As such, it requires a program change to add the estimated 903 burden hours for the new collection to NHTSA's existing burden.

16. Publication Of Results Of Data Collection. For collections of information whose results will be published, outline plans for tabulation, and publication. Address any complex analytical techniques that will be used. Provide the time schedule for the entire project, including beginning and ending dates of the collection of information, completion of report, publication dates, and other actions.

Study results will be published in the form of one or more technical reports. NHTSA may publish the results of these data in aggregate (not separated according to age or sex) as part of a research report and/or future documents published in the Federal Register. Results may be tabulated by information collected in the Pre-Drive Questionnaire (e.g., age, sex, gender, race, ethnicity).

Only descriptive and inferential statistical analysis methods will be used. Personal information will not be published in the technical reports.

The project development and regulatory approval phases began in 2022. Data collection is planned to begin immediately upon receipt of PRA clearance. The overall duration of the project is 66 months. Data collection for the first study is expected to take three months; data collections for Study 2 and Study 3 are expected to last two months each. Data reduction, the process by which time-series (continuous) data from the simulator are converted into discrete measures for analysis, and data analysis will follow each data collection. Completion of technical reports is anticipated within six months of the end of data collection.

17. Approval For Not Displaying The Expiration Date Of OMB Approval. If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons that display would be inappropriate.

NHTSA is not seeking such approval. NHTSA will display the expiration date for OMB approval.

18. Exceptions To The Certification Statement. Explain each exception to the certification statement "Certification for Paperwork Reduction Act Submissions." The required certifications can be found at 5 CFR 1320.9.

No exceptions to the certification are required for this research plan.



1 The Abstract must include the following information: (1) whether responding to the collection is mandatory, voluntary, or required to obtain or retain a benefit; (2) a description of the entities who must respond; (3) whether the collection is reporting (indicate if a survey), recordkeeping, and/or disclosure; (4) the frequency of the collection (e.g., bi-annual, annual, monthly, weekly, as needed); (5) a description of the information that would be reported, maintained in records, or disclosed; (6) a description of who would receive the information; (7) the purpose of the collection; and (8) if a revision, a description of the revision and the change in burden.

2 49 U.S.C. 30101(1).

3 49 U.S.C. 30101(2).


