Ubiquitous, smelly & immersive visualization

Paper session

Conference Name
CHI 2020
Techniques for Flexible Responsive Visualization Design
Abstract

Responsive visualizations adapt to effectively present information based on the device context. Such adaptations are essential for news content that is increasingly consumed on mobile devices. However, existing tools provide little support for responsive visualization design. We analyze a corpus of 231 responsive news visualizations and discuss formative interviews with five journalists about responsive visualization design. These interviews motivate four central design guidelines: enable simultaneous cross-device edits, facilitate device-specific customization, show cross-device previews, and support propagation of edits. Based on these guidelines, we present a prototype system that allows users to preview and edit multiple visualization versions simultaneously. We demonstrate the utility of the system features by recreating four real-world responsive visualizations from our corpus.
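
As an illustrative aside (not from the paper): a minimal TypeScript sketch of one way the guidelines could be realized, with a shared base spec whose edits propagate to every version and per-device overrides for device-specific customization. All names here are hypothetical.

```typescript
// Hypothetical sketch: one shared base spec plus per-device overrides.
type Device = "desktop" | "mobile";

interface ChartSpec {
  width: number;
  height: number;
  xAxisTitle: string;
  annotations: string[];
}

interface ResponsiveSpec {
  base: ChartSpec;                                        // shared across all versions
  overrides: Partial<Record<Device, Partial<ChartSpec>>>; // device-specific customization
}

// Resolving a device's version: base edits propagate everywhere,
// while device-specific overrides win where they exist.
function resolve(spec: ResponsiveSpec, device: Device): ChartSpec {
  return { ...spec.base, ...spec.overrides[device] };
}

const lineChart: ResponsiveSpec = {
  base: { width: 800, height: 400, xAxisTitle: "Year", annotations: ["2019 peak"] },
  overrides: {
    mobile: { width: 360, annotations: [] }, // e.g., drop annotations on small screens
  },
};

console.log(resolve(lineChart, "mobile")); // width 360, height 400, no annotations
```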

Award
Best Paper
Keywords
Visualization
Responsive Design
News
Mobile Devices
Authors
Jane Hoffswell
University of Washington, Seattle, WA, USA
Wilmot Li
Adobe Research, Seattle, WA, USA
Zhicheng Liu
Adobe Research, Seattle, WA, USA
DOI

10.1145/3313831.3376777

Paper URL

https://doi.org/10.1145/3313831.3376777

InChorus: Designing Consistent Multimodal Interactions for Data Visualization on Tablet Devices
Abstract

While tablet devices are a promising platform for data visualization, supporting consistent interactions across different types of visualizations on tablets remains an open challenge. In this paper, we present multimodal interactions that function consistently across different visualizations, supporting common operations during visual data analysis. By considering standard interface elements (e.g., axes, marks) and grounding our design in a set of core concepts including operations, parameters, targets, and instruments, we systematically develop interactions applicable to different visualization types. To exemplify how the proposed interactions collectively facilitate data exploration, we employ them in a tablet-based system, InChorus, that supports pen, touch, and speech input. Based on a study with 12 participants performing replication and fact-checking tasks with InChorus, we discuss how participants adapted to using multimodal input and highlight considerations for future multimodal visualization systems.
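
As an illustrative aside (not from the paper): a minimal TypeScript sketch of the abstract's core concepts, modeling operations, parameters, targets, and instruments as one shared event type so that pen, touch, and speech can all trigger the same operation. The names and shapes are assumptions.

```typescript
// Hypothetical sketch: one interaction model shared across input modalities.
type Instrument = "pen" | "touch" | "speech";
type Operation = "filter" | "sort" | "select" | "changeEncoding";
type Target = "axis" | "mark" | "legend";

interface Interaction {
  operation: Operation;
  target: Target;
  parameters: Record<string, string | number>;
  instrument: Instrument;
}

// Two different instruments producing the same operation:
const byTouch: Interaction = {
  operation: "sort", target: "axis",
  parameters: { field: "price", order: "descending" },
  instrument: "touch",  // e.g., a swipe gesture on the axis
};

const bySpeech: Interaction = {
  operation: "sort", target: "axis",
  parameters: { field: "price", order: "descending" },
  instrument: "speech", // e.g., "sort by price, descending"
};

function apply(ix: Interaction): void {
  // A real system would dispatch to visualization-specific handlers here.
  console.log(`${ix.instrument}: ${ix.operation} on ${ix.target}`, ix.parameters);
}

[byTouch, bySpeech].forEach(apply);
```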

Award
Honorable Mention
Keywords
Multimodal interaction
data visualization
tablet devices
pen
touch
speech
Authors
Arjun Srinivasan
Microsoft Research & Georgia Institute of Technology, Atlanta, GA, USA
Bongshin Lee
Microsoft Research, Redmond, WA, USA
Nathalie Henry Riche
Microsoft Research, Redmond, WA, USA
Steven M. Drucker
Microsoft Research, Redmond, WA, USA
Ken Hinckley
Microsoft Research, Redmond, WA, USA
DOI

10.1145/3313831.3376782

Paper URL

https://doi.org/10.1145/3313831.3376782

Scents and Sensibility: Evaluating Information Olfactation
Abstract

Olfaction, the sense of smell, is one of the least explored of the human senses for conveying abstract information. In this paper, we conduct a comprehensive perceptual experiment on information olfactation: the use of olfactory and cross-modal sensory marks and channels to convey data. More specifically, following the example from graphical perception studies, we design an experiment that studies the perceptual accuracy of four cross-modal sensory channels (scent type, scent intensity, airflow, and temperature) for conveying three different types of data (nominal, ordinal, and quantitative). We also present details of a 24-scent multi-sensory display and its software framework that we designed in order to run this experiment. Our results yield a ranking of olfactory and cross-modal sensory channels that follows similar principles as classic rankings for visual channels.
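
As an illustrative aside (not from the paper): a minimal TypeScript sketch of encoding a data record on the four channels the abstract studies. The particular mapping shown (nominal to scent type, quantitative to intensity) is an assumption for illustration, not the paper's measured ranking.

```typescript
// Hypothetical sketch: data values encoded on olfactory channels.
type ScentType = "lemon" | "peppermint" | "lavender";

interface OlfactoryMark {
  scent: ScentType;     // categorical channel: which scent is emitted
  intensity: number;    // quantitative channel: concentration in [0, 1]
  airflow: number;      // fan strength in [0, 1]
  temperatureC: number; // temperature channel
}

// Encode one record: the category picks the scent, the value sets intensity.
function encode(category: ScentType, value: number, min: number, max: number): OlfactoryMark {
  const t = (value - min) / (max - min); // normalize the value to [0, 1]
  return { scent: category, intensity: t, airflow: 0.5, temperatureC: 22 };
}

console.log(encode("lemon", 75, 0, 100)); // intensity 0.75
```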

Keywords
olfactory perception
information olfactation
olfactory displays
scents
smell
evaluation
Authors
Andrea Batch
University of Maryland, College Park, MD, USA
Biswaksen Patnaik
University of Maryland, College Park, MD, USA
Moses Akazue
University of Glasgow, Glasgow, United Kingdom
Niklas Elmqvist
University of Maryland, College Park, MD, USA
DOI

10.1145/3313831.3376733

Paper URL

https://doi.org/10.1145/3313831.3376733

Urban Mosaic: Visual Exploration of Streetscapes Using Large-Scale Image Data
Abstract

Urban planning is increasingly data driven, yet the challenge of designing with data at a city scale and remaining sensitive to the impact at a human scale is as important today as it was for Jane Jacobs. We address this challenge with Urban Mosaic, a tool for exploring the urban fabric through a spatially and temporally dense data set of 7.7 million street-level images from New York City, captured over the period of a year. Working in collaboration with professional practitioners, we use Urban Mosaic to investigate questions of accessibility and mobility, and preservation and retrofitting. In doing so, we demonstrate how tools such as this might provide a bridge between the city and the street, by supporting activities such as visual comparison of geographically distant neighborhoods, and temporal analysis of unfolding urban development.
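
As an illustrative aside (not from the paper): a minimal TypeScript sketch of the kind of spatiotemporal query such a tool must answer over geo- and time-stamped street-level images. The types and the naive scan are assumptions; a system at the scale of 7.7 million images would use a spatial index instead.

```typescript
// Hypothetical sketch: querying street-level images by region and time window.
interface StreetImage {
  id: number;
  lat: number;
  lon: number;
  timestamp: Date;
}

interface Query {
  bounds: { minLat: number; maxLat: number; minLon: number; maxLon: number };
  from: Date;
  to: Date;
}

// Naive linear scan for clarity; a real system would use a spatial
// index (e.g., an R-tree or a grid) to handle millions of images.
function query(images: StreetImage[], q: Query): StreetImage[] {
  return images.filter(
    (img) =>
      img.lat >= q.bounds.minLat && img.lat <= q.bounds.maxLat &&
      img.lon >= q.bounds.minLon && img.lon <= q.bounds.maxLon &&
      img.timestamp >= q.from && img.timestamp <= q.to
  );
}
```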

Keywords
Urban planning
Interactive visualization
Data analysis
Urban data
Authors
Fabio Miranda
New York University, New York, NY, USA
Maryam Hosseini
Rutgers University, New Brunswick, NJ, USA
Marcos Lage
Universidade Federal Fluminense, Niteroi, Brazil
Harish Doraiswamy
New York University, New York, NY, USA
Graham Dove
New York University, New York, NY, USA
Cláudio T. Silva
New York University, New York, NY, USA
DOI

10.1145/3313831.3376399

Paper URL

https://doi.org/10.1145/3313831.3376399

Towards an Understanding of Augmented Reality Extensions for Existing 3D Data Analysis Tools
Abstract

We present an observational study with domain experts to understand how augmented reality (AR) extensions to traditional PC-based data analysis tools can help particle physicists to explore and understand 3D data. Our goal is to allow researchers to integrate stereoscopic AR-based visual representations and interaction techniques into their tools, and thus ultimately to increase the adoption of modern immersive analytics techniques in existing data analysis workflows. We use Microsoft's HoloLens as a lightweight and easily maintainable AR headset and replicate existing visualization and interaction capabilities on both the PC and the AR view. We treat the AR headset as a second yet stereoscopic screen, allowing researchers to study their data in a connected multi-view manner. Our results indicate that our collaborating physicists appreciate a hybrid data exploration setup with an interactive AR extension to improve their understanding of particle collision events.
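
As an illustrative aside (not from the paper): a minimal TypeScript sketch of treating the headset as a connected second screen, with the PC tool publishing view state that the AR client mirrors. The message shape and the in-process channel are assumptions; a real setup would use a network transport such as a WebSocket.

```typescript
// Hypothetical sketch: synchronizing a PC view and an AR "second screen".
interface ViewState {
  cameraPosition: [number, number, number];
  cameraRotation: [number, number, number, number]; // quaternion
  selectedEventId: string | null;                   // e.g., a particle collision event
}

type Listener = (state: ViewState) => void;

// Minimal pub/sub standing in for a real network channel.
class SyncChannel {
  private listeners: Listener[] = [];
  subscribe(fn: Listener): void { this.listeners.push(fn); }
  publish(state: ViewState): void { this.listeners.forEach((fn) => fn(state)); }
}

const channel = new SyncChannel();

// The AR headset mirrors the PC's state in a connected multi-view manner.
channel.subscribe((state) => console.log("AR view updated:", state));

// The PC tool publishes its state whenever the researcher interacts.
channel.publish({
  cameraPosition: [0, 1.5, -3],
  cameraRotation: [0, 0, 0, 1],
  selectedEventId: "event-42",
});
```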

Keywords
Immersive analytics
3D visualization
User interface
Hybrid visualization system
Authors
Xiyao Wang
Université Paris-Saclay, CNRS, Inria, LRI, Orsay, France
Lonni Besançon
Linköping University, Norrköping, Sweden
David Rousseau
Université Paris-Saclay, CNRS, IJCLab, Orsay, France
Mickael Sereno
Université Paris-Saclay, CNRS, Inria, LRI, Orsay, France
Mehdi Ammi
University of Paris 8, Saint-Denis, France
Tobias Isenberg
Université Paris-Saclay, CNRS, Inria, LRI, Orsay, France
DOI

10.1145/3313831.3376657

Paper URL

https://doi.org/10.1145/3313831.3376657
