Interactions around the vehicle

Paper session

Conference Name
CHI 2020
Non-Verbal Auditory Input for Controlling Binary, Discrete, and Continuous Input in Automotive User Interfaces
Abstract

Auditory input is becoming increasingly popular for making distraction-free inputs while driving. However, we argue that auditory input is more than just speech. Thus, in this work, we explore using Non-Verbal Auditory Input (NVAI) for interacting with smart assistants while driving. Through an online study with 100 participants, we initially investigated users' input preferences for binary, discrete, and continuous data types. After identifying the top three modalities for NVAI, we subsequently conducted an in-person study with 16 participants, who tested these input modalities for the three input data types regarding accuracy, driver distraction, and social acceptability while operating a driving simulator. The results reveal that, although clapping hands was initially preferred in our online survey, snapping fingers is the preferred NVAI modality for binary and discrete input while driving, and humming is preferred for continuous input.

Keywords
Voice-User Interface
Non-Verbal Auditory Interaction
Automotive User Interfaces
Speech Input
Authors
Markus Funk
Cerence GmbH, Ulm, Germany
Vanessa Tobisch
Cerence GmbH, Ulm, Germany
Adam Emfield
Cerence Inc., Burlington, MA, USA
DOI

10.1145/3313831.3376816

Paper URL

https://doi.org/10.1145/3313831.3376816

Voice+Tactile: Augmenting In-vehicle Voice User Interface with Tactile Touchpad Interaction
Abstract

Promisingly, in-vehicle interaction is shifting toward the Voice User Interface (VUI), which lets drivers use diverse applications with little effort. However, the VUI has innate usability issues, such as turn-taking problems, short-term memory workload, inefficient controls, and difficulty correcting errors. To overcome these weaknesses, we explored supplementing the VUI with tactile interaction. As an early result, we present Voice+Tactile interactions that augment the VUI via multi-touch inputs and high-resolution tactile outputs. We designed various Voice+Tactile interactions to support different VUI interaction stages and derived four Voice+Tactile interaction themes: Status Feedback, Input Adjustment, Output Control, and Finger Feedforward. A user study showed that the Voice+Tactile interactions improved VUI efficiency and user experience without incurring significant additional distraction on driving. We hope these early results open new research questions on improving the in-vehicle VUI with a tactile channel.

Keywords
Voice user interface
tactile feedback
touchpad
in-vehicle user interface
Authors
Jingun Jung
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Sangyoon Lee
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Jiwoo Hong
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Eunhye Youn
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Geehyuk Lee
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
DOI

10.1145/3313831.3376863

Paper URL

https://doi.org/10.1145/3313831.3376863

Video
A Longitudinal Video Study on Communicating Status and Intent for Self-Driving Vehicle – Pedestrian Interaction
Abstract

With self-driving vehicles (SDVs), pedestrians cannot rely on communication with the driver anymore. Industry experts and policymakers are proposing an external Human-Machine Interface (eHMI) that communicates the vehicle's automated driving status. We investigated whether additionally communicating SDVs' intent to give right of way further improves pedestrians' street crossing. To evaluate the stability of these eHMI effects, we conducted a three-session video study with N=34 pedestrians in which we assessed subjective evaluations and crossing onset times. This is the first work capturing long-term effects of eHMIs. Our findings add credibility to prior studies by showing that eHMI effects last (acceptance, user experience) or even increase (crossing onset, perceived safety, trust, learnability, reliance) with time. We found that pedestrians benefit from an eHMI communicating SDVs' status, and that additionally communicating SDVs' intent adds further value. We conclude that SDVs should be equipped with an eHMI communicating both status and intent.

Keywords
Self-driving vehicles
pedestrians
external Human-Machine Interface
status
intent
information need
Authors
Stefanie M. Faas
Mercedes-Benz AG & Ulm University, Boeblingen, Germany
Andrea C. Kao
Mercedes-Benz R&D North America, Sunnyvale, CA, USA
Martin Baumann
Ulm University, Ulm, Germany
DOI

10.1145/3313831.3376484

Paper URL

https://doi.org/10.1145/3313831.3376484

Self-Interruptions of Non-Driving Related Tasks in Automated Vehicles: Mobile vs Head-Up Display
Abstract

Automated driving raises new human factors challenges. There is a paradox: drivers are allowed to perform non-driving related tasks (NDRTs), yet the system still benefits from a driver who regularly attends to the driving task. Systems that aim to better manage a driver's attention, encouraging task switching and interleaving, may help address this paradox. However, a better understanding of how drivers self-interrupt while engaging in NDRTs is required to inform such systems. This paper presents a counterbalanced within-subject simulator study with N=42 participants experiencing automated driving in a familiar driving environment. Participants chose a TV show to watch on a head-up display (HUD) and a mobile display during two 15-minute drives on the same route. Eye and head tracking data revealed more self-interruptions in the HUD condition, suggesting a higher likelihood of greater situation awareness. Our results may benefit the design of future attention management systems by informing the visual and temporal integration of driving and non-driving related tasks.

Keywords
Conditionally Automated Vehicles
Self-Interruption
Attention Management
Non-driving related Task
Task Engagement
Human-automation Interaction
Authors
Michael A. Gerber
Queensland University of Technology, Brisbane, QLD, Australia
Ronald Schroeter
Queensland University of Technology, Brisbane, QLD, Australia
Xiaomeng Li
Queensland University of Technology, Brisbane, QLD, Australia
Mohammed Mamdouh Zakaria Elhenawy
Queensland University of Technology, Brisbane, QLD, Australia
DOI

10.1145/3313831.3376751

Paper URL

https://doi.org/10.1145/3313831.3376751

Video
Autonomous Vehicle-Cyclist Interaction: Peril and Promise
Abstract

Autonomous vehicles (AVs) will redefine interactions between road users. Presently, cyclists and drivers communicate through implicit cues (vehicle motion) and explicit but imprecise signals (hand gestures, horns). Future AVs could consistently communicate awareness, intent, and other feedback to cyclists based on their sensor data. We present an exploration of AV-cyclist interaction, starting with preliminary design studies that informed the implementation of an immersive VR AV-cyclist simulator, followed by the design and evaluation of a number of AV-cyclist interfaces. Our findings suggest that AV-cyclist interfaces can improve rider confidence in lane-merging scenarios. We contribute an immersive AV-cyclist simulator, insights on trade-offs in AV-cyclist interaction design, including modalities, location, and complexity, and positive results suggesting improved rider confidence due to AV-cyclist interaction. While we are encouraged by the potential positive impact AV-cyclist interfaces can have on cyclist culture, we also emphasize the risks that over-reliance can pose to cyclists.

Keywords
autonomous vehicle cyclist interaction
interfaces for communicating intent and awareness
Authors
Ming Hou
University of Calgary, Calgary, AB, Canada
Karthik Mahadevan
University of Toronto, Toronto, ON, Canada
Sowmya Somanath
University of Victoria, Victoria, BC, Canada
Ehud Sharlin
University of Calgary, Calgary, AB, Canada
Lora Oehlberg
University of Calgary, Calgary, AB, Canada
DOI

10.1145/3313831.3376884

Paper URL

https://doi.org/10.1145/3313831.3376884

Video