Adapting User Interfaces with Model-based Reinforcement Learning

Abstract

Adapting an interface requires taking into account both the positive and negative effects that changes may have on the user. A carelessly picked adaptation may impose high costs on the user – for example, due to surprise or relearning effort – or prematurely "trap" the process in a suboptimal design. However, effects on users are hard to predict, as they depend on factors that are latent and evolve over the course of interaction. We propose a novel approach for adaptive user interfaces that yields a conservative adaptation policy: it finds beneficial changes when they exist and avoids changes when there are none. Our model-based reinforcement learning method plans sequences of adaptations and consults predictive HCI models to estimate their effects. We present empirical and simulation results from the case of adaptive menus, showing that the method outperforms both a non-adaptive and a frequency-based policy.

Authors
Kashyap Todi
Aalto University, Helsinki, Finland
Gilles Bailly
Sorbonne Université, CNRS, ISIR, Paris, France
Luis Leiva
University of Luxembourg, Luxembourg City, Luxembourg
Antti Oulasvirta
Aalto University, Helsinki, Finland
DOI

10.1145/3411764.3445497

Paper URL

https://doi.org/10.1145/3411764.3445497


Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Computational Design

[A] Paper Room 02, 2021-05-12 17:00:00~2021-05-12 19:00:00 / [B] Paper Room 02, 2021-05-13 01:00:00~2021-05-13 03:00:00 / [C] Paper Room 02, 2021-05-13 09:00:00~2021-05-13 11:00:00