AdaptiveVoice: Cognitively Adaptive Voice Interface for Driving Assistance

Abstract

Current voice assistants present messages in a predefined format without considering users' mental states. This paper presents an optimization-based approach that alleviates this issue by adjusting the level of detail and the speech speed of voice messages according to the user's estimated cognitive load. In the first user study (N=12), we investigated the impact of cognitive load on user performance. The findings reveal significant differences in preferred message formats across five cognitive load levels, substantiating the need for voice message adaptation. We then implemented AdaptiveVoice, an algorithm based on combinatorial optimization that generates adaptive voice messages in real time. In the second user study (N=30), conducted in a VR-simulated driving environment, we compared AdaptiveVoice with a fixed-format baseline, with and without visual guidance on a heads-up display (HUD). Results indicate that users benefit from AdaptiveVoice, showing reduced response times and improved driving performance, particularly when it is augmented with the HUD.
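The listing itself contains no implementation details beyond the abstract. As a rough illustration of the kind of combinatorial selection the abstract describes (choosing a level of detail and a speech speed given an estimated cognitive load), the sketch below brute-forces a small discrete candidate space against a hypothetical cost function. The candidate values, weights, and cost model are assumptions for illustration only, not the authors' AdaptiveVoice algorithm.

```python
# Illustrative sketch only: pick a (detail level, speech rate) pair by
# exhaustive search over a small discrete space. All values and the cost
# model are hypothetical, not the paper's actual formulation.
from itertools import product

DETAIL_LEVELS = {            # fraction of the full message content retained
    "full": 1.0,
    "condensed": 0.6,
    "minimal": 0.3,
}
SPEECH_RATES = [0.8, 1.0, 1.2, 1.4]   # playback-speed multipliers

def message_cost(detail, rate, cognitive_load,
                 w_info=1.0, w_time=0.3, w_load=1.5):
    """Hypothetical cost trading off information loss, message duration,
    and listening effort; cognitive_load is assumed normalized to [0, 1]."""
    content = DETAIL_LEVELS[detail]
    info_loss = 1.0 - content            # penalize dropping content
    duration = content / rate            # longer, slower messages take time
    # Comprehension burden grows with message length and speed,
    # amplified by the user's current cognitive load.
    effort = cognitive_load * content * rate
    return w_info * info_loss + w_time * duration + w_load * effort

def select_message_format(cognitive_load):
    """Exhaustively search all (detail, rate) combinations for the cheapest."""
    return min(
        product(DETAIL_LEVELS, SPEECH_RATES),
        key=lambda cand: message_cost(cand[0], cand[1], cognitive_load),
    )

if __name__ == "__main__":
    for load in (0.1, 0.5, 0.9):
        print(f"load={load}: {select_message_format(load)}")
```

Under this toy cost, low estimated load favors fuller, faster messages, while high load pushes the selection toward shorter, slower ones; a real-time system would re-run the selection whenever the cognitive-load estimate is updated.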

Authors
Shaoyue Wen
New York University, New York, New York, United States
Songming Ping
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Jialin Wang
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Hai-Ning Liang
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Xuhai "Orson" Xu
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Yukang Yan
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

doi.org/10.1145/3613904.3642876

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Drivers and Pedestrians A

315
5 presentations
2024-05-14 18:00:00 – 19:20:00