In emerging "driver-less" automated vehicles (AVs), the intuitive communication between human drivers and passengers is lost, which can reduce passengers' trust and acceptance if they are unclear about what the AV intends to do. This paper contributes a foundational understanding of how passengers naturally decode drivers' non-verbal cues about their intended actions, to inform intuitive Human-Machine Interface (HMI) designs that emulate those cues. Our study investigates which cues passengers perceive, how salient they are, and how they are interpreted, using a mixed-method approach combining field observations, experience sampling, and auto-confrontation interviews with 30 driver-passenger pairs. Analysis of posture, head/eye movements, and vestibular sensations revealed four categories of intention cues: awareness, interaction, vestibular, and habitual. These findings provide empirical foundations for designing AV interfaces that mirror natural human communication patterns. We discuss implications for designing anthropomorphic HMIs that could enhance trust, predictability, and user experience in AVs.
https://dl.acm.org/doi/10.1145/3706598.3713635
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)