Chartist: Task-driven Eye Movement Control for Chart Reading

Abstract

To design data visualizations that are easy to comprehend, we need to understand how people with different interests read them. Computational models of predicting scanpaths on charts could complement empirical studies by offering estimates of user performance inexpensively; however, previous models have been limited to gaze patterns and overlooked the effects of tasks. Here, we contribute Chartist, a computational model that simulates how users move their eyes to extract information from the chart in order to perform analysis tasks, including value retrieval, filtering, and finding extremes. The novel contribution lies in a two-level hierarchical control architecture. At the high level, the model uses LLMs to comprehend the information gained so far and applies this representation to select a goal for the lower-level controllers, which, in turn, move the eyes in accordance with a sampling policy learned via reinforcement learning. The model is capable of predicting human-like task-driven scanpaths across various tasks. It can be applied in fields such as explainable AI, visualization design evaluation, and optimization. While it displays limitations in terms of generalizability and accuracy, it takes modeling in a promising direction, toward understanding human behaviors in interacting with charts.

Authors
Danqing Shi
Aalto University, Helsinki, Finland
Yao Wang
University of Stuttgart, Stuttgart, Germany
Yunpeng Bai
National University of Singapore, Singapore, Singapore
Andreas Bulling
University of Stuttgart, Stuttgart, Germany
Antti Oulasvirta
Aalto University, Helsinki, Finland
DOI

10.1145/3706598.3713128

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713128

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Visualization

G302
7 presentations
2025-04-29 01:20:00 – 2025-04-29 02:50:00