TurkEyes: A Web-Based Toolbox for Crowdsourcing Attention Data

Abstract

Eye movements provide insight into what parts of an image a viewer finds most salient, interesting, or relevant to the task at hand. Unfortunately, eye tracking data, a commonly-used proxy for attention, is cumbersome to collect. Here we explore an alternative: a comprehensive web-based toolbox for crowdsourcing visual attention. We draw from four main classes of attention-capturing methodologies in the literature. ZoomMaps is a novel zoom-based interface that captures viewing on a mobile phone. CodeCharts is a self-reporting methodology that records points of interest at precise viewing durations. ImportAnnots is an "annotation" tool for selecting important image regions, and cursor-based BubbleView lets viewers click to deblur a small area. We compare these methodologies using a common analysis framework in order to develop appropriate use cases for each interface. This toolbox and our analyses provide a blueprint for how to gather attention data at scale without an eye tracker.

Keywords
Eye tracking
attention
crowdsourcing
interaction techniques
Authors
Anelise Newman
Massachusetts Institute of Technology, Cambridge, MA, USA
Barry McNamara
Massachusetts Institute of Technology, Cambridge, MA, USA
Camilo Fosco
Massachusetts Institute of Technology, Cambridge, MA, USA
Yun Bin Zhang
Harvard University, Cambridge, MA, USA
Pat Sukhum
Harvard University, Cambridge, MA, USA
Matthew Tancik
University of California, Berkeley, Berkeley, CA, USA
Nam Wook Kim
Boston College, Chestnut Hill, MA, USA
Zoya Bylinskii
Adobe Research, Cambridge, MA, USA
DOI

10.1145/3313831.3376799

Paper URL

https://doi.org/10.1145/3313831.3376799

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Look at me

Paper session
Room: 311 KAUA'I
5 presentations
2020-04-27, 20:00–21:15