Integrating Gaze and Speech for Enabling Implicit Interactions

Abstract

Gaze and speech are rich contextual sources of information that, when combined, can enable effective and expressive multimodal interactions. This paper proposes a machine learning-based pipeline that leverages and combines users’ natural gaze activity, the semantic knowledge in their vocal utterances, and the synchronicity between gaze and speech data to facilitate users’ interactions. We evaluated our proposed approach on an existing dataset in which 32 participants recorded voice notes while reading an academic paper. Using a Logistic Regression classifier, we demonstrate that our multimodal approach maps voice notes to the correct text passages with an average F1-score of 0.90. Our proposed pipeline motivates the design of multimodal interfaces that combine natural gaze and speech patterns to enable robust interactions.
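The paper's actual pipeline is not reproduced here, but as a rough illustration of the idea described in the abstract, the sketch below trains a logistic regression classifier (via scikit-learn) on a synthetic feature matrix standing in for combined gaze and speech descriptors of (voice note, passage) pairs. The feature names, the random data, and the cross-validated F1 evaluation are all assumptions made for illustration, not the authors' implementation or feature set.

# Illustrative sketch only: logistic regression over hypothetical
# combined gaze + speech features for matching a voice note to a passage.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical features per (voice note, passage) pair:
#   fixation_duration   - total gaze fixation time on the passage (s)
#   fixation_count      - number of fixations on the passage
#   semantic_similarity - similarity between the note and the passage text
#   gaze_speech_lag     - offset between gaze on the passage and the utterance (s)
X = rng.normal(size=(200, 4))           # synthetic feature matrix
y = rng.integers(0, 2, size=200)        # 1 = note refers to this passage

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"Mean F1 across folds: {scores.mean():.2f}")  # ~chance on random data

With real gaze and speech features in place of the synthetic matrix, the same pipeline structure (feature scaling followed by logistic regression, evaluated by cross-validated F1) mirrors the kind of evaluation the abstract reports.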

Authors
Anam Ahmad Khan
The University of Melbourne, Melbourne, Victoria, Australia
Joshua Newn
The University of Melbourne, Melbourne, Victoria, Australia
James Bailey
The University of Melbourne, Melbourne, Victoria, Australia
Eduardo Velloso
The University of Melbourne, Melbourne, Victoria, Australia
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502134

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Input Techniques

5 presentations
2022-05-03 20:00:00 – 2022-05-03 21:15:00