InChorus: Designing Consistent Multimodal Interactions for Data Visualization on Tablet Devices

Abstract

While tablet devices are a promising platform for data visualization, supporting consistent interactions across different types of visualizations on tablets remains an open challenge. In this paper, we present multimodal interactions that function consistently across different visualizations, supporting common operations during visual data analysis. By considering standard interface elements (e.g., axes, marks) and grounding our design in a set of core concepts including operations, parameters, targets, and instruments, we systematically develop interactions applicable to different visualization types. To exemplify how the proposed interactions collectively facilitate data exploration, we employ them in a tablet-based system, InChorus, which supports pen, touch, and speech input. Based on a study with 12 participants performing replication and fact-checking tasks with InChorus, we discuss how participants adapted to using multimodal input and highlight considerations for future multimodal visualization systems.
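
The framework summarized in the abstract maps input instruments (pen, touch, speech) and on-screen targets (e.g., axes, marks) to analysis operations with parameters. The sketch below is purely illustrative and not from the paper: the type names (Instrument, Target, Operation, InteractionMapping), the resolve function, and all example values are hypothetical, intended only to show how such a consistent instrument-and-target-to-operation mapping could be represented.

```typescript
// Illustrative sketch only: all names and values here are hypothetical and
// do not come from the InChorus paper. They model the paper's core concepts
// (operations, parameters, targets, instruments) as a lookup from an input
// event to a visualization operation.

type Instrument = "pen" | "touch" | "speech";          // input modality
type Target = "axis" | "mark" | "legend" | "canvas";   // interface element acted on
type Operation = "filter" | "sort" | "zoom" | "annotate" | "changeEncoding";

interface InteractionMapping {
  instrument: Instrument;
  target: Target;
  operation: Operation;
  // Free-form parameters, e.g., a filter range or an utterance.
  parameters?: Record<string, unknown>;
}

// A consistent design applies the same (instrument, target) -> operation
// rule regardless of the underlying visualization type.
const mappings: InteractionMapping[] = [
  { instrument: "touch", target: "axis", operation: "sort" },
  { instrument: "pen", target: "mark", operation: "annotate" },
  { instrument: "speech", target: "canvas", operation: "changeEncoding",
    parameters: { utterance: "color by region" } },
];

// Resolve an incoming input event to an operation, if any mapping matches.
function resolve(instrument: Instrument, target: Target): Operation | undefined {
  return mappings.find(
    (m) => m.instrument === instrument && m.target === target
  )?.operation;
}

console.log(resolve("touch", "axis")); // -> "sort"
```

Because the lookup keys on instrument and target alone, the same gesture yields the same operation whether the user is viewing a bar chart or a scatterplot, which is one way to realize the consistency the paper argues for.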

Award
Honorable Mention
Keywords
Multimodal interaction
data visualization
tablet devices
pen
touch
speech
Authors
Arjun Srinivasan
Microsoft Research & Georgia Institute of Technology, Atlanta, GA, USA
Bongshin Lee
Microsoft Research, Redmond, WA, USA
Nathalie Henry Riche
Microsoft Research, Redmond, WA, USA
Steven M. Drucker
Microsoft Research, Redmond, WA, USA
Ken Hinckley
Microsoft Research, Redmond, WA, USA
DOI

10.1145/3313831.3376782

Paper URL

https://doi.org/10.1145/3313831.3376782

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Ubiquitous, smelly & immersive visualization

Paper session
316A MAUI
5 presentations
2020-04-30, 01:00–02:15