Do You (Dis)agree With Me? Modelling Implicit User Disagreement in Human–AI Interaction Using Gaze Data

Abstract

The widespread use of generative AI has led to increased focus on human–AI interaction. However, AI systems can generate unexpected outputs, leading to disagreement or human–AI conflict. This paper focuses on modelling user disagreement using machine learning (ML) by observing users' implicit viewing behaviour. We conducted a controlled study with 30 participants evaluating captions from a simulated ML image-captioning system. Participants indicated agreement or disagreement with each caption while we recorded their gaze and facial-expression data, which we used to predict (dis)agreement. We show that unimodal gaze-based personalised modelling ($0.684$ average balanced accuracy) outperforms generalised modelling ($0.570$), whereas multimodal approaches did not improve performance. Our exploratory post hoc gaze-based analysis highlights the importance of feature selection and temporal dynamics, which help guide system design and future work. We release the dataset to support reproducibility and further work. Due to the nature of this research, we also discuss the potential ethical and privacy implications of continuous passive gaze and facial monitoring.

Authors
Abdulrahman Mohamed Selim
German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany
Omair Shahzad Bhatti
German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany
Amr Gomaa
German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany
Michael Barz
German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany
Daniel Sonntag
German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Inferring Human State

P1 - Room 127
7 presentations
2026-04-17, 18:00–19:30