Looking but Not Focusing: Defining Gaze-Based Indices of Attention Lapses and Classifying Attentional States

Abstract

Identifying objective markers of attentional states is critical, particularly in real-world scenarios where attentional lapses have serious consequences. In this study, we identified gaze-based indices of attentional lapses and validated them by examining their impact on the performance of classification models. We designed a virtual reality visual search task that encouraged active eye movements in order to define dynamic gaze-based metrics of different attentional states (zone in/out). The results revealed significant differences in both reactive ocular features, such as first fixation and saccade onset latency, and global ocular features, such as saccade amplitude, depending on the attentional state. Moreover, classification performance improved significantly when models were trained only on the validated gaze-based and behavioral indices rather than on all available features, reaching a highest prediction accuracy of 79.3%. We highlight the importance of preliminary studies before model training and provide generalizable gaze-based indices of attentional states for practical applications.
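To make the classification idea concrete, the following is a minimal Python sketch of training a classifier on a small set of gaze-based and behavioral features to distinguish attentional states (zone in/out). The feature names, the synthetic data, and the choice of a random-forest classifier are illustrative assumptions, not the authors' actual pipeline.

# A minimal sketch (not the authors' pipeline) of classifying attentional
# states from a small set of gaze-based and behavioral features, as the
# abstract describes. Features, data, and classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 400

# Hypothetical per-trial features: reactive ocular metrics (first fixation
# latency, saccade onset latency), a global ocular metric (saccade amplitude),
# and a behavioral metric (reaction time). Values are synthetic placeholders.
X = np.column_stack([
    rng.normal(250, 40, n_trials),   # first fixation latency (ms)
    rng.normal(180, 30, n_trials),   # saccade onset latency (ms)
    rng.normal(6.0, 1.5, n_trials),  # saccade amplitude (deg)
    rng.normal(600, 90, n_trials),   # reaction time (ms)
])
y = rng.integers(0, 2, n_trials)     # 0 = zone in, 1 = zone out (placeholder labels)

# Train only on the selected, validated features rather than all available
# ones, mirroring the feature-selection point made in the abstract.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f}")

With real eye-tracking data, the placeholder arrays would be replaced by per-trial feature vectors and attentional-state labels derived from the task, and the reported accuracy would be evaluated on held-out participants rather than random splits.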

Authors
Eugene Hwang
KAIST, Daejeon, Republic of Korea
Jeongmi Lee
KAIST, Daejeon, Republic of Korea
DOI

10.1145/3706598.3714269

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714269

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Multimodal Interaction

Room: G302
7 presentations
2025-04-30, 18:00–19:30