GazeConduits: Calibration-Free Cross-Device Collaboration through Gaze and Touch


We present GazeConduits, a calibration-free ad-hoc mobile interaction concept that enables users to collaboratively interact with tablets, other users, and content in a cross-device setting using gaze and touch input. GazeConduits leverages recently introduced smartphone capabilities to detect facial features and estimate users' gaze directions. To join a collaborative setting, users place one or more tablets onto a shared table and position their phone in the center, which then tracks the users present as well as their gaze directions to determine which tablets they are looking at. We present a series of techniques using GazeConduits for collaborative interaction across mobile devices for content selection and manipulation. Our evaluation with 20 simultaneous tablets on a table shows that GazeConduits can reliably identify which tablet or collaborator a user is looking at.
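The core idea of mapping an estimated gaze direction to a target tablet on the table can be illustrated with a simple geometric sketch. This is not the paper's implementation; it is a hypothetical example assuming the phone provides each user's head position and a gaze angle on the table plane, and that tablet positions are known. The function name, parameters, and tolerance value are illustrative assumptions.

```python
import math

def select_tablet(user_pos, gaze_angle, tablets, max_error_deg=15.0):
    """Pick the tablet whose bearing from the user best matches the gaze.

    user_pos: (x, y) position of the user's head on the table plane (cm).
    gaze_angle: estimated gaze direction in degrees (0 = +x axis).
    tablets: dict mapping tablet id -> (x, y) center on the table.
    max_error_deg: illustrative angular tolerance; returns None if no
        tablet falls within it.
    """
    best_id, best_error = None, max_error_deg
    for tablet_id, (tx, ty) in tablets.items():
        # Bearing from the user's head to this tablet's center.
        bearing = math.degrees(math.atan2(ty - user_pos[1], tx - user_pos[0]))
        # Angular difference, wrapped into [-180, 180].
        error = abs((gaze_angle - bearing + 180) % 360 - 180)
        if error < best_error:
            best_id, best_error = tablet_id, error
    return best_id
```

For example, a user seated at `(0, -40)` gazing straight ahead (90°) toward a tablet at `(0, 20)` would select that tablet, while a gaze well outside the tolerance selects nothing.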

Cross-device interaction
gaze input
touch input
Simon Voelker
RWTH Aachen University, Aachen, Germany
Sebastian Hueber
RWTH Aachen University, Aachen, Germany
Christian Holz
ETH Zürich, Zürich, Switzerland
Christian Remy
Aarhus University, Aarhus, Denmark
Nicolai Marquardt
University College London, London, United Kingdom




Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems

Session: Look at me

Paper session
311 KAUA'I
5 presentations
2020-04-27, 20:00–21:15