Speech-Augmented Cone-of-Vision for Exploratory Data Analysis

Abstract

Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and eye-tracked cursor visualizations, but these methods have limitations, and verbal communication is often used as a complementary strategy to overcome them. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than head direction alone. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.
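The abstract only names the Cone of Vision; the paper's actual model and its speech augmentation are not detailed here. As a purely illustrative sketch of the underlying geometric idea (approximating gaze as a cone around the head's forward direction), the following Python function tests whether a target lies inside such a cone. The function name and the half-angle value are assumptions for illustration, not values from the paper.

    import numpy as np

    def in_cone_of_vision(head_pos, head_forward, target_pos, half_angle_deg=30.0):
        """Generic cone-of-vision test: True if target_pos lies within a cone
        of the given half-angle around the head's forward direction.

        half_angle_deg is an illustrative parameter, not taken from the paper.
        """
        to_target = np.asarray(target_pos, dtype=float) - np.asarray(head_pos, dtype=float)
        dist = np.linalg.norm(to_target)
        if dist == 0.0:
            return True  # target coincides with the head position
        forward = np.asarray(head_forward, dtype=float)
        forward = forward / np.linalg.norm(forward)
        # Compare the angle between the forward axis and the target direction
        # against the cone's half-angle (via cosines, avoiding arccos).
        cos_angle = float(np.dot(to_target / dist, forward))
        return cos_angle >= np.cos(np.radians(half_angle_deg))

    # Example: a target slightly right of straight ahead, ~9.5 deg off-axis.
    print(in_cone_of_vision([0, 1.7, 0], [0, 0, -1], [0.5, 1.7, -3.0]))  # True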

Authors
Riccardo Bovo
Imperial College London, London, United Kingdom
Daniele Giunchi
University College London, London, United Kingdom
Ludwig Sidenmark
Lancaster University, Lancaster, United Kingdom
Joshua Newn
Lancaster University, Lancaster, Lancashire, United Kingdom
Hans Gellersen
Aarhus University, Aarhus, Denmark
Enrico Costanza
UCL Interaction Centre, London, United Kingdom
Thomas Heinis
Imperial College London, London, United Kingdom
Paper URL

https://doi.org/10.1145/3544548.3581283

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Data for Productivity

Hall B
6 presentations
2023-04-24, 20:10–21:35