Multi-Modal eHMIs: The Relative Impact of Light and Sound in AV-Pedestrian Interaction

Abstract

External Human-Machine Interfaces (eHMIs) have been proposed to facilitate interactions between Automated Vehicles (AVs) and pedestrians. Most eHMIs are, however, visual/light-based solutions, and multi-modal eHMIs have received little attention to date. We ran an experimental video study (N = 29) to systematically understand the effects of a light-based eHMI (a light bar on the bumper), two sound-based eHMIs (a bell sound and a droning sound), and combinations thereof on pedestrians' willingness to cross the road and their user preferences. We found no objective change in pedestrians' willingness to cross the road based on the nature of the eHMI, although participants expressed different subjective preferences for the different ways an eHMI may communicate, and sometimes even a strong dislike for multi-modal eHMIs. This shows that the modality of the evaluated eHMI concepts had relatively little impact on their effectiveness. Consequently, this lays important groundwork for accessibility considerations in future eHMIs, and points towards the insight that user preferences can be accommodated without compromising effectiveness.

Authors
Debargha Dey
Cornell Tech, New York, New York, United States
Toros Ufuk Senan
Eindhoven University of Technology, Eindhoven, Netherlands
Bart Hengeveld
Eindhoven University of Technology, Eindhoven, Netherlands
Mark Colley
Ulm University, Ulm, Germany
Azra Habibovic
Scania CV, Gothenburg, Sweden
Wendy Ju
Cornell Tech, New York, New York, United States
Paper URL

doi.org/10.1145/3613904.3642031

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Autonomous Vehicles

317
5 presentations
2024-05-15 23:00:00 – 2024-05-16 00:20:00