Anticipation Before Action: EEG-Based Implicit Intent Detection for Adaptive Gaze Interaction in Mixed Reality

Abstract

Mixed Reality (MR) interfaces increasingly rely on gaze for interaction, yet distinguishing visual attention from intentional action remains difficult, leading to the Midas Touch problem. Existing solutions require explicit confirmations, while brain–computer interfaces may provide an implicit marker of intention via the Stimulus-Preceding Negativity (SPN). We investigated how Intention (Select vs. Observe) and Feedback (With vs. Without) modulate SPN during gaze-based MR interactions. We acquired EEG and eye-tracking data from 28 participants during realistic selection tasks. SPN was robustly elicited and sensitive to both factors: observation without feedback produced the strongest amplitudes, while intention to select and expectation of feedback reduced activity, suggesting that SPN reflects anticipatory uncertainty rather than motor preparation. Complementary decoding with deep learning models achieved reliable person-dependent classification of user intention, with accuracies ranging from 75% to 97% across participants. These findings identify SPN as an implicit marker for building intention-aware MR interfaces that mitigate the Midas Touch.

Authors
Francesco Chiossi
LMU Munich, Munich, Germany
Elnur Imamaliyev
Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
Martin Bleichner
Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
Sven Mayer
TU Dortmund University, Dortmund, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Extended Reality & Immersive Systems II

P1 - Room 118
7 presentations
2026-04-17, 18:00–19:30