Context-aware Interfaces for Mobility & Automation

Conference Name
CHI 2026
Towards Inclusive External Human-Machine Interface: Exploring the Effects of Visual and Auditory eHMI for Deaf and Hard-of-Hearing People
Abstract

External Human-Machine Interfaces (eHMIs) have been proposed to facilitate communication between Automated Vehicles (AVs) and pedestrians. However, little attention has been given to Deaf and Hard-of-Hearing (DHH) people. We conducted a formative study through focus groups with 6 DHH people and 6 key stakeholders (including researchers, assistive technologists, and automotive interface designers) to compare proposed eHMIs and extract key design requirements. Subsequently, we investigated the effects of visual and auditory eHMI in a virtual reality user study with 32 participants (16 DHH). Results from our scenario suggest that (1) DHH participants spent more time looking at the AV; (2) both visual and auditory eHMIs enhanced trust, usefulness, and perceived safety; and (3) only visual eHMIs reduced the time to step into the road, time looking at the AV, gaze time, and percentage looking at active visual eHMI components. Lastly, we provided five practical implications for making eHMI inclusive of DHH people.

Award
Honorable Mention
Authors
Wenge Xu
Birmingham City University, Birmingham, United Kingdom
Foroogh Hajiseyedjavadi
Birmingham City University, Birmingham, West Midlands, United Kingdom
Kurtis Weir
Birmingham City University, Wolverhampton, West Midlands, United Kingdom
Chukwuemeka Eze
Birmingham City University, Birmingham, United Kingdom
Mark Colley
UCL Interaction Centre, London, United Kingdom
Video
User Experience of Autonomous Ferries: What Passengers Need and How to Design for It
Abstract

Designing autonomous public transport requires understanding how passengers experience such systems in real-world use. For autonomous ferries, however, little is known about how users interpret waterborne autonomy. We address this gap through post-ride interviews (N=164) from a public trial of an autonomous ferry held in Trondheim, Norway, in 2022. Our thematic analysis identifies several ferry-specific factors that shape the user experience (UX). The themes were formed around sensitivity to motion and docking, the readability of manoeuvres without a visible operator, expectations around on-demand timing, and accessibility challenges at the vessel-quay interface. From these findings, we propose six design guidelines that address embodied experience, transparency of autonomous behaviour, temporal predictability, accessibility across travel chains, and the redistribution of social informational roles traditionally held by the crew. These findings extend land-based autonomous vehicle research by revealing how waterborne contexts shape trust and acceptance. The contribution of this work is a set of actionable design guidelines to achieve a predictable, trustworthy, accessible, and reliable autonomous ferry service.

Authors
Felix Petermann
Norwegian University of Science and Technology (NTNU), Trondheim, Norway
Ole Andreas Alsos
Norwegian University of Science and Technology, Trondheim, Norway
Mina Saghafian
Norwegian University of Science and Technology, Trondheim, Norway
Erik Veitch
NTNU, Trondheim, Norway
Grace Winifred Turner
Newcastle University, Newcastle, United Kingdom
Maria Letizia Potenza
Sintef Industry, Trondheim, Norway
Matching Explanation Detail to Scene Complexity: Studying Situational Awareness-Specific AI Feedback in Pedestrian Encounter Driving Scenarios
Abstract

Providing the same level of information through the in-vehicle interface can overwhelm automated vehicle occupants in simple scenarios or leave them underinformed in more demanding situations. This study investigates how human preference for in-vehicle feedback detail scales with scene complexity during pedestrian encounters. We measure scene complexity through driving decision diversity and validate its positive correlation with pedestrian crossing intent uncertainty in an initial experiment (N=68). Using a mock-up in-vehicle interface, the second experiment (N=88) evaluates user preferences for manually crafted feedback concepts simulating three levels of the system's situational awareness. Results indicate that as intent uncertainty increases, users prefer more detailed feedback. While perception-only feedback suffices for simple encounters, in complex situations, information on system comprehension and projection supports a better and easier understanding of driving decisions. These findings provide an empirical basis for scaling feedback to situational needs. As this study used manually generated feedback based on ground-truth data, the findings require further investigation considering real-world AI performance in automated vehicles.

Authors
Md Fazle Elahi
Purdue University, Indianapolis, Indiana, United States
Yin-Chun Lu
North Carolina State University, Raleigh, North Carolina, United States
Jing Chen
Rice University, Houston, Texas, United States
Renran Tian
North Carolina State University, Raleigh, North Carolina, United States
Small Talk, Big Impact? LLM-based Conversational Agents to Mitigate Passive Fatigue in Conditional Automated Driving
Abstract

Passive fatigue during conditional automated driving can compromise driver readiness and safety. This paper presents findings from a test-track study with 40 participants in a real-world automated driving scenario. In this scenario, a Large Language Model (LLM) based conversational agent (CA) was designed to check in with drivers and re-engage them with their surroundings. Drawing on in-car video recordings, sleepiness ratings, and interviews, we analysed how drivers interacted with the agent and how these interactions shaped alertness. Results show the CA is helpful for supporting vigilance during passive fatigue. Thematic analysis of acceptability further revealed three user preference profiles that inform future intention to use CAs. Positioning empirically observed profiles within existing CA archetype frameworks highlights the need for adaptive design sensitive to diverse user groups. This work underscores the potential of CAs as proactive Human–Machine Interface (HMI) interventions, demonstrating how natural language can support context-aware interaction during automated driving.

Award
Best Paper
Authors
Lewis Cockram
Queensland University of Technology, Brisbane, Australia
Yueteng Yu
Queensland University of Technology, Brisbane, Australia
Jorge Pardo
Queensland University of Technology, Brisbane, Australia
Xiaomeng Li
Queensland University of Technology, Brisbane, Queensland, Australia
Andry Rakotonirainy
Queensland University of Technology, Brisbane, Queensland, Australia
Jonny Kuo
Seeing Machines, Melbourne, Australia
Sebastien Demmel
Queensland University of Technology, Brisbane, Australia
Mike Lenné
Seeing Machines, Melbourne, Australia
Ronald Schroeter
Queensland University of Technology, Brisbane, Australia
Video
From Disruption to Immersion: Reimagining Vehicle Motion as Environmental Feedback through Force Mappings in In-Car VR
Abstract

This study investigates how vehicle motion can be reinterpreted as perceptually coherent multisensory feedback for in-car VR applications, expanding beyond traditional motion-based experiences. We introduce the concept of force mappings, a design space that translates vehicle-induced physical forces, such as those from accelerations, turns, and rough terrain, into ambient environmental representations within VR. Implemented on a real vehicle platform with a sensor-based pipeline, our system applies four representative mapping strategies (Ground-based, Wind-based, Current-based, Object-based) and evaluates their perceptual coherence and experiential effects through two respective user studies. Results show that force mappings improve presence, comfort, and engagement while enabling creative reinterpretations of physical motion. Finally, we provide empirical findings and design considerations to inform future in-car VR systems that leverage real-world motion as a creative and perceptually grounded interaction resource.

Authors
Bocheon Gim
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Seongjun Kang
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Gwangbin Kim
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Dohyeon Yeo
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Yumin Kang
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Ahmed Elsharkawy
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
SeungJun Kim
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Video
eHMI for All - Investigating the Effect of External Communication of Automated Vehicles on Pedestrians, Manual Drivers, and Cyclists
Abstract

With automated vehicles (AVs), the absence of a human operator could necessitate external Human-Machine Interfaces (eHMIs) to communicate with other road users. Existing research primarily focuses on pedestrian-AV interactions, with limited attention given to other road users, such as cyclists and drivers of manually driven vehicles. So far, no studies have compared the effects of eHMIs across these three road user roles. Therefore, we conducted a within-subjects virtual reality experiment (N=40), evaluating the subjective and objective impact of an eHMI communicating the AV's intention to pedestrians, cyclists, and drivers under various levels of distraction (no distraction, visual noise, interference). eHMIs positively influenced safety perceptions, trust, perceived usefulness, and mental demand across all roles. While distraction and road user roles showed significant main effects, interaction effects were only observed in perceived usability. Thus, a unified eHMI design is effective, facilitating the standardization and broader adoption of eHMIs in diverse traffic.

Authors
Mark Colley
Ulm University, Ulm, Germany
Simon Kopp
Universität Ulm, Ulm, Germany
Debargha Dey
Eindhoven University of Technology, Eindhoven, Netherlands
Pascal Jansen
Ulm University, Ulm, Baden-Württemberg, Germany
Enrico Rukzio
University of Ulm, Ulm, Germany
Exploring the Impacts of Background Noise on Auditory Stimuli of Audio-Visual eHMIs for Hearing, Deaf, and Hard-of-Hearing People
Abstract

External Human-Machine Interfaces (eHMIs) have been proposed to enhance communication between automated vehicles (AVs) and pedestrians, with growing interest in multi-modal designs such as audio-visual eHMIs. Just as poor lighting can impair visual cues, loud background noise may mask auditory stimuli. However, its effects within these systems have not been examined, and little is known about how pedestrians, particularly Deaf and Hard-of-Hearing (DHH) people, perceive different types of auditory stimuli. We conducted a virtual reality study (Hearing N=25, DHH N=11) to examine the effects of background noise (quiet and loud) on auditory stimuli (baseline, bell, speech) within an audio-visual eHMI. Results revealed that: (1) Crossing experiences of DHH pedestrians significantly differ from those of Hearing pedestrians. (2) Loud background noise adversely affects pedestrians' crossing experiences. (3) Providing an additional auditory eHMI (bell/speech) improves crossing experiences. We outlined four practical implications for future eHMI design and research.

Authors
Wenge Xu
Birmingham City University, Birmingham, United Kingdom
Foroogh Hajiseyedjavadi
Birmingham City University, Birmingham, West Midlands, United Kingdom
Debargha Dey
Eindhoven University of Technology, Eindhoven, Netherlands
Tram Thi Minh Tran
School of Architecture, Design and Planning, The University of Sydney, Sydney, NSW, Australia
Mark Colley
UCL Interaction Centre, London, United Kingdom
Video