Novel Visualization Techniques

[A] Paper Room 09, 2021-05-11 17:00:00~2021-05-11 19:00:00 / [B] Paper Room 09, 2021-05-12 01:00:00~2021-05-12 03:00:00 / [C] Paper Room 09, 2021-05-12 09:00:00~2021-05-12 11:00:00

Conference name
CHI 2021
Grand Challenges in Immersive Analytics
Abstract

Immersive Analytics is a quickly evolving field that unites several areas such as visualisation, immersive environments, and human-computer interaction to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications, spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.

Authors
Barrett Ens
Monash University, Melbourne, Australia
Benjamin Bach
Edinburgh University, Edinburgh, United Kingdom
Maxime Cordeil
Monash University, Melbourne, Australia
Ulrich Engelke
CSIRO, Kensington, WA, Australia
Marcos Serrano
IRIT - Elipse, Toulouse, France
Wesley Willett
University of Calgary, Calgary, Alberta, Canada
Arnaud Prouzeau
Monash University, Melbourne, Australia
Christoph Anthes
University of Applied Sciences Upper Austria, Hagenberg, Austria
Wolfgang Büschel
Technische Universität Dresden, Dresden, Germany
Cody Dunne
Northeastern University, Boston, Massachusetts, United States
Tim Dwyer
Monash University, Melbourne, Australia
Jens Grubert
Coburg University, Coburg, Bavaria, Germany
Jason Haga
AIST, Tsukuba, Ibaraki, Japan
Nurit Kirshenbaum
University of Hawaii at Manoa, Honolulu, Hawaii, United States
Dylan Kobayashi
University of Hawaiʻi at Mānoa, Honolulu, Hawaii, United States
Tica Lin
Harvard University, Cambridge, Massachusetts, United States
Monsurat Olaosebikan
Tufts University, Medford, Massachusetts, United States
Fabian Pointecker
University of Applied Sciences Upper Austria, Hagenberg, Austria
David Saffo
Northeastern University, Boston, Massachusetts, United States
Nazmus Saquib
MIT, Cambridge, Massachusetts, United States
Dieter Schmalstieg
Graz University of Technology, Graz, Austria
Danielle Albers Szafir
University of Colorado Boulder, Boulder, Colorado, United States
Matt Whitlock
University of Colorado, Boulder, Colorado, United States
Yalong Yang
Harvard University, Cambridge, Massachusetts, United States
DOI

10.1145/3411764.3446866

Paper URL

https://doi.org/10.1145/3411764.3446866

Video
Quantitative Data Visualisation on Virtual Globes
Abstract

Geographic data visualisation on virtual globes is intuitive and widespread, but has not been thoroughly investigated. We explore two main design factors for quantitative data visualisation on virtual globes: i) commonly used primitives (2D bar, 3D bar, circle) and ii) the orientation of these primitives (tangential, normal, billboarded). We evaluate five distinctive visualisation idioms in a user study with 50 participants. The results show that aligning primitives tangentially on the globe’s surface decreases the accuracy of area-proportional circle visualisations, while the orientation does not have a significant effect on the accuracy of length-proportional bar visualisations. We also find that tangential primitives induce higher perceived mental load than other orientations. Guided by these results we design a novel globe visualisation idiom, Geoburst, that combines a virtual globe and a radial bar chart. A preliminary evaluation reports potential benefits and drawbacks of the Geoburst visualisation.
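As context for the encodings compared above: length-proportional bars scale linearly with the data value, while area-proportional circles must scale radius with the square root of the value so that area, not radius, is proportional. A minimal sketch of these standard encodings (general visualisation practice, not code from the paper; function names are illustrative):

```python
# Standard size encodings for quantitative marks (illustrative, not the
# authors' implementation).
import math

def bar_length(value, scale=1.0):
    """Length-proportional encoding: length grows linearly with value."""
    return scale * value

def circle_radius(value, scale=1.0):
    """Area-proportional encoding: area ~ value, so radius ~ sqrt(value)."""
    return scale * math.sqrt(value / math.pi)
```

With these scalings, quadrupling a value quadruples bar length but only doubles circle radius, which is part of why area and length encodings are read differently by viewers.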

Authors
Kadek Ananta Satriadi
Monash University, Melbourne, Australia
Barrett Ens
Monash University, Melbourne, Australia
Tobias Czauderna
Monash University, Victoria, Australia
Maxime Cordeil
Monash University, Melbourne, Australia
Bernhard Jenny
Monash University, Melbourne, Australia
DOI

10.1145/3411764.3445152

Paper URL

https://doi.org/10.1145/3411764.3445152

Video
Towards an Understanding of Situated AR Visualization for Basketball Free-Throw Training
Abstract

We present an observational study to compare co-located and situated real-time visualizations in basketball free-throw training. Our goal is to understand the advantages and concerns of applying immersive visualization to real-world skill-based sports training and to provide insights for designing AR sports training systems. We design both a situated 3D visualization on a head-mounted display and a 2D visualization on a co-located display to provide immediate visual feedback on a player's shot performance. Using a within-subject study design with experienced basketball shooters, we characterize user goals, report on qualitative training experiences, and compare the quantitative training results. Our results show that real-time visual feedback helps athletes refine subsequent shots. Shooters in our study achieve greater angle consistency with our visual feedback. Furthermore, AR visualization promotes an increased focus on body form in athletes. Finally, we present suggestions for the design of future sports AR studies.

Award
Honorable Mention
Authors
Tica Lin
Harvard University, Cambridge, Massachusetts, United States
Rishi Singh
Harvard University, Cambridge, Massachusetts, United States
Yalong Yang
Harvard University, Cambridge, Massachusetts, United States
Carolina Nobre
Harvard University, Cambridge, Massachusetts, United States
Johanna Beyer
Harvard University, Cambridge, Massachusetts, United States
Maurice Smith
Harvard University, Cambridge, Massachusetts, United States
Hanspeter Pfister
Harvard University, Cambridge, Massachusetts, United States
DOI

10.1145/3411764.3445649

Paper URL

https://doi.org/10.1145/3411764.3445649

Video
Data@Hand: Fostering Visual Exploration of Personal Data on Smartphones Leveraging Speech and Touch Interaction
Abstract

Most mobile health apps employ data visualization to help people view their health and activity data, but these apps provide limited support for visual data exploration. Furthermore, despite its huge potential benefits, mobile visualization research in the personal data context is sparse. This work aims to empower people to easily navigate and compare their personal health data on smartphones by enabling flexible time manipulation with speech. We designed and developed Data@Hand, a mobile app that leverages the synergy of two complementary modalities: speech and touch. Through an exploratory study with 13 long-term Fitbit users, we examined how multimodal interaction helps participants explore their own health data. Participants successfully adopted multimodal interaction (i.e., speech and touch) for convenient and fluid data exploration. Based on the quantitative and qualitative findings, we discuss design implications and opportunities with multimodal interaction for better supporting visual data exploration on mobile devices.

Award
Honorable Mention
Authors
Young-Ho Kim
University of Maryland, College Park, Maryland, United States
Bongshin Lee
Microsoft Research, Redmond, Washington, United States
Arjun Srinivasan
Tableau Research, Seattle, Washington, United States
Eun Kyoung Choe
University of Maryland, College Park, Maryland, United States
DOI

10.1145/3411764.3445421

Paper URL

https://doi.org/10.1145/3411764.3445421

Video
Haptic and Visual Comprehension of a 2D Graph Layout Through Physicalisation
Abstract

Data physicalisations afford people the ability to directly interact with data using their hands, potentially achieving a more comprehensive understanding of a dataset. Due to their complex nature, the representation of graphs and networks could benefit from physicalisation, bringing the dataset from the digital world into the physical one. However, no empirical work exists investigating the effects physicalisations have upon comprehension as they relate to graph representations. In this work, we present initial design considerations for graph physicalisations, as well as an empirical study investigating differences in comprehension between virtual and physical representations. We found that participants perceived themselves as being more accurate via touch and sight (visual-haptic) than the graphical-only modality, and perceived a triangle count task as less difficult in visual-haptic than in the graphical-only modality. Additionally, we found that participants significantly preferred interacting with visual-haptic over other conditions, despite no significant effect on task time or error.

Authors
Adam Drogemuller
University of South Australia, Mawson Lakes, South Australia, Australia
Andrew Cunningham
University of South Australia, Adelaide, Australia
James A. Walsh
University of South Australia, Mawson Lakes, South Australia, Australia
James Baumeister
University of South Australia, Adelaide, South Australia, Australia
Ross T. Smith
University of South Australia, Adelaide, Australia
Bruce H. Thomas
University of South Australia, Mawson Lakes, South Australia, Australia
DOI

10.1145/3411764.3445704

Paper URL

https://doi.org/10.1145/3411764.3445704

Video
Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations
Abstract

Natural language interfaces (NLIs) for data visualization are becoming increasingly popular both in academic research and in commercial software. Yet, there is a lack of empirical understanding of how people specify visualizations through natural language. We conducted an online study (N = 102), showing participants a series of visualizations and asking them to provide utterances they would pose to generate the displayed charts. From the responses, we curated a dataset of 893 utterances and characterized the utterances according to (1) their phrasing (e.g., commands, queries, questions) and (2) the information they contained (e.g., chart types, data aggregations). To help guide future research and development, we contribute this utterance dataset and discuss its applications toward the creation and benchmarking of NLIs for visualization.

Authors
Arjun Srinivasan
Tableau Research, Seattle, Washington, United States
Nikhila Nyapathy
Georgia Institute of Technology, Atlanta, Georgia, United States
Bongshin Lee
Microsoft Research, Redmond, Washington, United States
Steven M. Drucker
Microsoft Research, Redmond, Washington, United States
John Stasko
Georgia Institute of Technology, Atlanta, Georgia, United States
DOI

10.1145/3411764.3445400

Paper URL

https://doi.org/10.1145/3411764.3445400

Video
It's a Wrap: Toroidal Wrapping of Network Visualisations Supports Cluster Understanding Tasks
Abstract

We explore network visualisation on a two-dimensional torus topology that continuously wraps when the viewport is panned. That is, links may be “wrapped” across the boundary, allowing additional spreading of node positions to reduce visual clutter. Recent work has investigated such pannable wrapped visualisations, finding them no worse than unwrapped drawings of small networks for path-following tasks. However, that work did not evaluate larger networks, nor did it consider whether torus-based layout might also better display high-level network structure such as clusters. We offer two fully automatic algorithms: one that improves toroidal layout, and one that pans the viewport to minimise wrapped links. The resulting layouts afford fewer crossings, less stress, and greater cluster separation. In a study of 32 participants comparing performance in cluster understanding tasks, we find that toroidal visualisation offers significant benefits over standard unwrapped visualisation, improving error by 62.7% and time by 32.3%.
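The notion of “wrapping” a link across the torus boundary can be made concrete with modular arithmetic: on each axis, a displacement longer than half the torus width is shorter when taken across the opposite boundary. A minimal sketch of this standard torus geometry (an illustration of the underlying concept, not the authors’ algorithms; function names are hypothetical):

```python
# Illustrative torus-wrapping geometry for network links (not the authors'
# implementation). Points are (x, y) with coordinates in [0, 1) on a unit torus.

def toroidal_delta(a, b):
    """Shortest displacement from point a to point b on a unit torus.

    Each axis considers the direct difference and the wrapped difference
    across the boundary, keeping whichever is shorter.
    """
    dx = b[0] - a[0]
    dy = b[1] - a[1]
    # A difference beyond 0.5 is shorter taken across the opposite boundary.
    if dx > 0.5:
        dx -= 1.0
    elif dx < -0.5:
        dx += 1.0
    if dy > 0.5:
        dy -= 1.0
    elif dy < -0.5:
        dy += 1.0
    return dx, dy

def link_wraps(a, b):
    """True if the shortest drawing of link (a, b) crosses a torus boundary."""
    dx, dy = toroidal_delta(a, b)
    return abs(a[0] + dx - b[0]) > 1e-9 or abs(a[1] + dy - b[1]) > 1e-9
```

For example, nodes at x = 0.1 and x = 0.9 are only 0.2 apart when the link wraps across the left/right boundary, which is why wrapping frees layout algorithms to spread nodes further apart.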

Authors
Kun-Ting Chen
Faculty of Information Technology, Monash University, Melbourne, Australia
Tim Dwyer
Monash University, Melbourne, VIC, Australia
Benjamin Bach
Edinburgh University, Edinburgh, United Kingdom
Kim Marriott
Monash University, Melbourne, Australia
DOI

10.1145/3411764.3445439

Paper URL

https://doi.org/10.1145/3411764.3445439

Video
Sketchnote Components, Design Space Dimensions, and Strategies for Effective Visual Note Taking
Abstract

Sketchnoting is a form of visual note taking where people listen to, synthesize, and visualize ideas from a talk or other event using a combination of pictures, diagrams, and text. Little is known about the design space of this kind of visual note taking. With an eye towards informing the implementation of digital equivalents of sketchnoting, inking, and note taking, we introduce a classification of sketchnote styles and techniques, based on a qualitative analysis of 103 sketchnotes and situated in context with six semi-structured follow-up interviews. Our findings distill core sketchnote components (content, layout, structuring elements, and visual styling) and dimensions of the sketchnote design space, classifying levels of conciseness, illustration, structure, personification, cohesion, and craftsmanship. We unpack strategies to address particular note taking challenges, for example dealing with constraints of live drawings, and discuss relevance for future digital inking tools, such as recomposition, styling, and design suggestions.

Award
Honorable Mention
Authors
Rebecca Zheng
University College London, London, United Kingdom
Marina Fernández Camporro
University College London, London, United Kingdom
Hugo Romat
ETH, Zurich, Switzerland
Nathalie Henry Riche
Microsoft Research, Redmond, Washington, United States
Benjamin Bach
Edinburgh University, Edinburgh, United Kingdom
Fanny Chevalier
University of Toronto, Toronto, Ontario, Canada
Ken Hinckley
Microsoft Research, Redmond, Washington, United States
Nicolai Marquardt
University College London, London, United Kingdom
DOI

10.1145/3411764.3445508

Paper URL

https://doi.org/10.1145/3411764.3445508

Video
Datamations: Animated Explanations of Data Analysis Pipelines
Abstract

Plots and tables are commonplace in today's data-driven world, and much research has been done on how to make these figures easy to read and understand. Often times, however, the information they contain conveys only the end result of a complex and subtle data analysis pipeline. This can leave the reader struggling to understand what steps were taken to arrive at a figure, and what implications this has for the underlying results. In this paper, we introduce datamations, which are animations designed to explain the steps that led to a given plot or table. We present the motivation and concept behind datamations, discuss how to programmatically generate them, and provide the results of two large-scale randomized experiments investigating how datamations affect people's abilities to understand potentially puzzling results compared to seeing only final plots and tables containing those results.
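The core mechanism behind a datamation — recording every intermediate state of an analysis pipeline so that each state can become one animation frame — can be sketched generically (a conceptual illustration, not the authors’ generator; the helper name and toy pipeline are hypothetical):

```python
# Illustrative sketch of the datamation idea (not the authors' implementation):
# run a data pipeline step by step and capture every intermediate state, so a
# renderer could animate the transition between consecutive states.

def run_with_frames(data, steps):
    """Apply each (label, function) step in turn, capturing intermediate states.

    Returns a list of (label, state) pairs: the raw input plus one entry per
    pipeline step.
    """
    frames = [("raw data", data)]
    state = data
    for label, fn in steps:
        state = fn(state)
        frames.append((label, state))
    return frames

# A toy pipeline: filter rows, then group and average.
rows = [{"group": "a", "v": 1}, {"group": "a", "v": 3}, {"group": "b", "v": 10}]
pipeline = [
    ("keep v >= 2", lambda rs: [r for r in rs if r["v"] >= 2]),
    ("mean v per group", lambda rs: {
        g: (sum(r["v"] for r in rs if r["group"] == g)
            / sum(1 for r in rs if r["group"] == g))
        for g in {r["group"] for r in rs}
    }),
]
frames = run_with_frames(rows, pipeline)
```

Each captured frame shows which rows were dropped or aggregated at that step, which is exactly the information a final plot or table hides from the reader.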

Authors
Xiaoying Pu
University of Michigan, Ann Arbor, Michigan, United States
Sean Kross
The University of California San Diego, La Jolla, California, United States
Jake M. Hofman
Microsoft Research, NYC, New York, United States
Daniel G. Goldstein
Microsoft Research, New York, New York, United States
DOI

10.1145/3411764.3445063

Paper URL

https://doi.org/10.1145/3411764.3445063

Video
MARVIS: Combining Mobile Devices and Augmented Reality for Visual Data Analysis
Abstract

We present MARVIS, a conceptual framework that combines mobile devices and head-mounted Augmented Reality (AR) for visual data analysis. We propose novel concepts and techniques addressing visualization-specific challenges. By showing additional 2D and 3D information around and above displays, we extend their limited screen space. AR views between displays as well as linking and brushing are also supported, making relationships between separated visualizations plausible. We introduce the design process and rationale for our techniques. To validate MARVIS' concepts and show their versatility and widespread applicability, we describe six implemented example use cases. Finally, we discuss insights from expert hands-on reviews. As a result, we contribute to a better understanding of how the combination of one or more mobile devices with AR can benefit visual data analysis. By exploring this new type of visualization environment, we hope to provide a foundation and inspiration for future mobile data visualizations.

Authors
Ricardo Langner
Technische Universität Dresden, Dresden, Germany
Marc Satkowski
Technische Universität Dresden, Dresden, Germany
Wolfgang Büschel
Technische Universität Dresden, Dresden, Germany
Raimund Dachselt
Technische Universität Dresden, Dresden, Germany
DOI

10.1145/3411764.3445593

Paper URL

https://doi.org/10.1145/3411764.3445593

Video
STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics
Abstract

Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g. due to limited precision). Touch-based interaction (e.g. via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential of spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality, and research implications indicating areas that need further investigation.

Authors
Sebastian Hubenschmid
University of Konstanz, Konstanz, Germany
Johannes Zagermann
University of Konstanz, Konstanz, Germany
Simon Butscher
University of Konstanz, Konstanz, Germany
Harald Reiterer
University of Konstanz, Konstanz, Germany
DOI

10.1145/3411764.3445298

Paper URL

https://doi.org/10.1145/3411764.3445298

Video
MIRIA: A Mixed Reality Toolkit for the In-Situ Visualization and Analysis of Spatio-Temporal Interaction Data
Abstract

In this paper, we present MIRIA, a Mixed Reality Interaction Analysis toolkit designed to support the in-situ visual analysis of user interaction in mixed reality and multi-display environments. So far, there are few options to effectively explore and analyze interaction patterns in such novel computing systems. With MIRIA, we address this gap by supporting the analysis of user movement, spatial interaction, and event data by multiple, co-located users directly in the original environment. Based on our own experiences and an analysis of the typical data, tasks, and visualizations used in existing approaches, we identify requirements for our system. We report on the design and prototypical implementation of MIRIA, which is informed by these requirements and offers various visualizations such as 3D movement trajectories, position heatmaps, and scatterplots. To demonstrate the value of MIRIA for real-world analysis tasks, we conducted expert feedback sessions using several use cases with authentic study data.

Authors
Wolfgang Büschel
Technische Universität Dresden, Dresden, Germany
Anke Lehmann
Technische Universität Dresden, Dresden, Germany
Raimund Dachselt
Technische Universität Dresden, Dresden, Germany
DOI

10.1145/3411764.3445651

Paper URL

https://doi.org/10.1145/3411764.3445651

Video
Reconfiguration Strategies with Composite Data Physicalizations
Abstract

Composite data physicalizations allow for the physical reconfiguration of data points, creating new opportunities for interaction and engagement. However, there is a lack of understanding of people's strategies and behaviors when directly manipulating physical data objects. In this paper, we systematically characterize different reconfiguration strategies using six exemplar physicalizations. We asked 20 participants to reorganize these exemplars with two levels of restriction: changing a single data object versus changing multiple data objects. Our findings show that there were two main reconfiguration strategies used: changes in proximity and changes in atomic orientation. We further characterize these using concrete examples of participant actions in relation to the structure of the physicalizations. We contribute an overview of reconfiguration strategies, which informs the design of future manually reconfigurable and dynamic composite physicalizations.

Authors
Kim Sauvé
Lancaster University, Lancaster, United Kingdom
David Verweij
Newcastle University, Newcastle upon Tyne, United Kingdom
Jason Alexander
University of Bath, Bath, United Kingdom
Steven Houben
Lancaster University, Lancaster, United Kingdom
DOI

10.1145/3411764.3445746

Paper URL

https://doi.org/10.1145/3411764.3445746

Video
Scene-Aware Behavior Synthesis for Virtual Pets in Mixed Reality
Abstract

Virtual pets are an alternative to real pets, providing a substitute for people with allergies or preparing people for adopting a real pet. Recent advancements in mixed reality pave the way for virtual pets to provide a more natural and seamless experience for users. However, one key challenge is embedding environmental awareness into the virtual pet (e.g., identifying the food bowl's location) so that they can behave naturally in the real world. We propose a novel approach to synthesize virtual pet behaviors by considering scene semantics, enabling a virtual pet to behave naturally in mixed reality. Given a scene captured from the real world, our approach synthesizes a sequence of pet behaviors (e.g., resting after eating). Then, we assign each behavior in the sequence to a location in the real scene. We conducted user studies to evaluate our approach, which showed the efficacy of our approach in synthesizing natural virtual pet behaviors.

Authors
Wei Liang
Beijing Institute of Technology, Beijing, China
Xinzhe Yu
Beijing Institute of Technology, Beijing, China
Rawan Alghofaili
George Mason University, Fairfax, Virginia, United States
Yining Lang
Alibaba Group, Beijing, China
Lap-Fai Yu
George Mason University, Fairfax, Virginia, United States
DOI

10.1145/3411764.3445532

Paper URL

https://doi.org/10.1145/3411764.3445532

Video
Data-Driven Mark Orientation for Trend Estimation in Scatterplots
Abstract

A common task for scatterplots is communicating trends in bivariate data. However, the ability of people to visually estimate these trends is under-explored, especially when the data violate assumptions required for common statistical models, or visual trend estimates are in conflict with statistical ones. In such cases, designers may need to intervene and de-bias these estimations, or otherwise inform viewers about differences between statistical and visual trend estimations. We propose data-driven mark orientation as a solution in such cases, where the directionality of marks in the scatterplot guide participants when visual estimation is otherwise unclear or ambiguous. Through a set of laboratory studies, we investigate trend estimation across a variety of data distributions and mark directionalities, and find that data-driven mark orientation can help resolve ambiguities in visual trend estimates.
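One plausible way to realise data-driven mark orientation — sketched here as a general technique, not necessarily the authors’ method — is to rotate each scatterplot mark to follow a locally estimated trend, e.g. an ordinary least-squares slope over a small neighbourhood of points (the function name and windowing scheme are illustrative):

```python
# Illustrative data-driven mark orientation: each mark's angle follows the
# local OLS trend over its k nearest neighbours by x rank (not the paper's code).
import math

def local_mark_angles(xs, ys, k=5):
    """Return one angle (radians) per point, following the local OLS slope."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    angles = [0.0] * n
    for rank, i in enumerate(order):
        # Window of k neighbours (by x rank), clamped at the data boundaries.
        lo = max(0, rank - k // 2)
        hi = min(n, lo + k)
        lo = max(0, hi - k)
        idx = order[lo:hi]
        mx = sum(xs[j] for j in idx) / len(idx)
        my = sum(ys[j] for j in idx) / len(idx)
        sxx = sum((xs[j] - mx) ** 2 for j in idx)
        sxy = sum((xs[j] - mx) * (ys[j] - my) for j in idx)
        slope = sxy / sxx if sxx > 0 else 0.0
        angles[i] = math.atan2(slope, 1.0)  # rotation applied to the mark glyph
    return angles
```

On perfectly linear data all marks share one orientation; on noisy or heteroscedastic data the per-point angles vary, which is the cue a designer could tune to guide (or de-bias) visual trend estimation.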

Authors
Tingting Liu
School of Computer Science, Shandong University, Qingdao, Shandong, China
Xiaotong Li
School of Computer Science, Shandong University, Qingdao, Shandong, China
Chen Bao
Shandong University, Qingdao, Shandong, China
Michael Correll
Tableau Software, Seattle, Washington, United States
Changhe Tu
Shandong University, Qingdao, China
Oliver Deussen
University of Konstanz, Konstanz, Germany
Yunhai Wang
Shandong University, Qingdao, China
DOI

10.1145/3411764.3445751

Paper URL

https://doi.org/10.1145/3411764.3445751

Video