External Human-Machine Interfaces (eHMIs) have been proposed to facilitate interactions between Automated Vehicles (AVs) and pedestrians. Most eHMIs are, however, visual/light-based solutions, and multi-modal eHMIs have received little attention to date. We ran an experimental video study (N = 29) to systematically understand the effect of a light-based eHMI (a light bar on the bumper), two sound-based eHMIs (a bell sound and a droning sound), and combinations thereof on pedestrians' willingness to cross the road and on user preferences. We found no objective change in pedestrians' willingness to cross the road based on the nature of the eHMI, although participants expressed different subjective preferences for the different ways an eHMI may communicate, and sometimes even a strong dislike for multi-modal eHMIs. This shows that the modality of the evaluated eHMI concepts had relatively little impact on their effectiveness. Consequently, this work lays important groundwork for accessibility considerations in future eHMIs and points towards the insight that user preferences can be accommodated without compromising effectiveness.
https://doi.org/10.1145/3613904.3642031
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)