This study session has ended. Thank you for your participation.
Auditory input is becoming an increasingly popular way of making distraction-free inputs while driving. However, we argue that auditory input is more than just speech. In this work, we therefore explore Non-Verbal Auditory Input (NVAI) for interacting with smart assistants while driving. Through an online study with 100 participants, we first investigated users' input preferences for binary, discrete, and continuous data types. After identifying the top three modalities for NVAI, we conducted an in-person study with 16 participants, who tested these input modalities for the three input data types with respect to accuracy, driver distraction, and social acceptability while operating a driving simulator. The results reveal that, although clapping hands was initially preferred in our online survey, snapping fingers is the preferred NVAI modality for binary and discrete input while driving, and humming for continuous input.
Promisingly, in-vehicle interaction is adopting the Voice User Interface (VUI), which lets drivers use diverse applications with little effort. However, the VUI has innate usability issues, such as turn-taking problems, short-term memory workload, inefficient controls, and difficulty correcting errors. To overcome these weaknesses, we explored supplementing the VUI with tactile interaction. As an early result, we present Voice+Tactile interactions that augment the VUI with multi-touch inputs and high-resolution tactile outputs. We designed various Voice+Tactile interactions to support different stages of VUI interaction and derived four Voice+Tactile interaction themes: Status Feedback, Input Adjustment, Output Control, and Finger Feedforward. A user study showed that the Voice+Tactile interactions improved VUI efficiency and user experience without adding significant distraction to driving. We hope these early results open new research questions on improving in-vehicle VUIs through a tactile channel.
With self-driving vehicles (SDVs), pedestrians can no longer rely on communicating with the driver. Industry experts and policymakers are proposing an external Human-Machine Interface (eHMI) that communicates the vehicle's automated status. We investigated whether additionally communicating an SDV's intent to give right of way further improves pedestrians' street crossing. To evaluate the stability of these eHMI effects, we conducted a three-session video study with N=34 pedestrians in which we assessed subjective evaluations and crossing onset times. This is the first work to capture long-term effects of eHMIs. Our findings add credibility to prior studies by showing that eHMI effects persist (acceptance, user experience) or even increase (crossing onset, perceived safety, trust, learnability, reliance) over time. We found that pedestrians benefit from an eHMI communicating an SDV's status, and that additionally communicating its intent adds further value. We conclude that SDVs should be equipped with an eHMI communicating both status and intent.
Automated driving raises new human factors challenges. There is a paradox: automation allows drivers to perform non-driving related tasks (NDRTs), yet it still benefits from a driver who regularly attends to the driving task. Systems that aim to better manage a driver's attention, encouraging task switching and interleaving, may help resolve this paradox. However, informing such systems requires a better understanding of how drivers self-interrupt while engaging in NDRTs. This paper presents a counterbalanced within-subject simulator study with N=42 participants experiencing automated driving in a familiar driving environment. Participants chose a TV show to watch on a head-up display (HUD) and on a mobile display during two 15-minute drives on the same route. Eye and head tracking data revealed more self-interruptions in the HUD condition, suggesting a higher likelihood of maintained situation awareness. Our results may benefit the design of future attention management systems by informing the visual and temporal integration of driving and non-driving related tasks.
Autonomous vehicles (AVs) will redefine interactions between road users. Presently, cyclists and drivers communicate through implicit cues (vehicle motion) and explicit but imprecise signals (hand gestures, horns). Future AVs could consistently communicate awareness, intent, and other feedback to cyclists based on their sensor data. We present an exploration of AV-cyclist interaction, starting with preliminary design studies that informed the implementation of an immersive VR AV-cyclist simulator, followed by the design and evaluation of a number of AV-cyclist interfaces. Our findings suggest that AV-cyclist interfaces can improve rider confidence in lane-merging scenarios. We contribute the immersive AV-cyclist simulator, insights on trade-offs across aspects of AV-cyclist interaction design, including modalities, location, and complexity, and positive results suggesting improved rider confidence due to AV-cyclist interaction. While we are encouraged by the potential positive impact AV-cyclist interfaces can have on cycling culture, we also emphasize the risks that over-reliance can pose to cyclists.