Immersive Analytics is a quickly evolving field that unites several areas, such as visualisation, immersive environments, and human-computer interaction, to support human data analysis with emerging technologies. This research has thrived in recent years, with multiple workshops, seminars, and a growing body of publications spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper proposes a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.
https://doi.org/10.1145/3411764.3446866
Geographic data visualisation on virtual globes is intuitive and widespread, but has not been thoroughly investigated. We explore two main design factors for quantitative data visualisation on virtual globes: i)~commonly used primitives (\textit{2D bar}, \textit{3D bar}, \textit{circle}) and ii)~the orientation of these primitives (\textit{tangential}, \textit{normal}, \textit{billboarded}). We evaluate five distinctive visualisation idioms in a user study with 50 participants. The results show that aligning primitives tangentially on the globe’s surface decreases the accuracy of area-proportional circle visualisations, while the orientation does not have a significant effect on the accuracy of length-proportional bar visualisations. We also find that tangential primitives induce higher perceived mental load than other orientations. Guided by these results we design a novel globe visualisation idiom, \textit{Geoburst}, that combines a virtual globe and a radial bar chart. A preliminary evaluation reports potential benefits and drawbacks of the \textit{Geoburst} visualisation.
https://doi.org/10.1145/3411764.3445152
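To make the encoding difference above concrete: area-proportional circles scale radius with the square root of the value, while length-proportional bars scale linearly. A minimal Python sketch (function names and normalisation are ours, not from the paper):

```python
import math

def circle_radius(value, max_value, max_radius=1.0):
    """Area-proportional circle: area ~ value, so radius ~ sqrt(value)."""
    return max_radius * math.sqrt(value / max_value)

def bar_length(value, max_value, max_length=1.0):
    """Length-proportional bar: length ~ value."""
    return max_length * (value / max_value)

# A value 4x larger doubles the circle radius but quadruples the bar length.
print(circle_radius(25, 100), bar_length(25, 100))    # 0.5, 0.25
print(circle_radius(100, 100), bar_length(100, 100))  # 1.0, 1.0
```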
We present an observational study to compare co-located and situated real-time visualizations in basketball free-throw training. Our goal is to understand the advantages and concerns of applying immersive visualization to real-world skill-based sports training and to provide insights for designing AR sports training systems. We design both a situated 3D visualization on a head-mounted display and a 2D visualization on a co-located display to provide immediate visual feedback on a player's shot performance. Using a within-subject study design with experienced basketball shooters, we characterize user goals, report on qualitative training experiences, and compare the quantitative training results. Our results show that real-time visual feedback helps athletes refine subsequent shots. Shooters in our study achieve greater angle consistency with our visual feedback. Furthermore, AR visualization promotes an increased focus on body form in athletes. Finally, we present suggestions for the design of future sports AR studies.
https://doi.org/10.1145/3411764.3445649
Most mobile health apps employ data visualization to help people view their health and activity data, but these apps provide limited support for visual data exploration. Furthermore, despite its huge potential benefits, mobile visualization research in the personal data context is sparse. This work aims to empower people to easily navigate and compare their personal health data on smartphones by enabling flexible time manipulation with speech. We designed and developed Data@Hand, a mobile app that leverages the synergy of two complementary modalities: speech and touch. Through an exploratory study with 13 long-term Fitbit users, we examined how multimodal interaction helps participants explore their own health data. Participants successfully adopted multimodal interaction (i.e., speech and touch) for convenient and fluid data exploration. Based on the quantitative and qualitative findings, we discuss design implications and opportunities with multimodal interaction for better supporting visual data exploration on mobile devices.
https://doi.org/10.1145/3411764.3445421
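To illustrate the kind of time manipulation speech enables, here is a minimal sketch mapping a spoken relative-time phrase to an absolute date range. This is not Data@Hand's implementation; the phrases and helper function are hypothetical:

```python
from datetime import date, timedelta

def resolve_relative_range(phrase: str, today: date) -> tuple[date, date]:
    """Map a relative-time phrase to an absolute [start, end] date range.
    Illustrative only; a real system would use a full NLU grammar."""
    if phrase == "last week":
        start = today - timedelta(days=today.weekday() + 7)  # previous Monday
        return start, start + timedelta(days=6)
    if phrase == "this month":
        return today.replace(day=1), today
    raise ValueError(f"unrecognised phrase: {phrase}")

# "last week" relative to Friday 2021-05-14 resolves to May 3-9.
print(resolve_relative_range("last week", date(2021, 5, 14)))
```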
Data physicalisations afford people the ability to directly interact with data using their hands, potentially achieving a more comprehensive understanding of a dataset. Due to their complex nature, graphs and networks could benefit from physicalisation, bringing the dataset from the digital world into the physical one. However, no empirical work has investigated how physicalisation affects the comprehension of graph representations. In this work, we present initial design considerations for graph physicalisations, as well as an empirical study investigating differences in comprehension between virtual and physical representations. We found that participants perceived themselves as more accurate via touch and sight (visual-haptic) than in the graphical-only modality, and perceived a triangle count task as less difficult in the visual-haptic than in the graphical-only modality. Additionally, we found that participants significantly preferred interacting with the visual-haptic condition over the others, despite no significant effect on task time or error.
https://doi.org/10.1145/3411764.3445704
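For reference, a triangle count task asks participants to count the triangles in a displayed graph. A minimal sketch of computing the ground truth for such a task (our own illustration, not part of the study materials):

```python
from itertools import combinations

def count_triangles(edges):
    """Count triangles in an undirected graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    triangles = 0
    for u in adj:
        # Any pair of u's neighbours that are themselves adjacent closes a triangle.
        for v, w in combinations(adj[u], 2):
            if w in adj[v]:
                triangles += 1
    return triangles // 3  # each triangle is counted once per vertex

print(count_triangles([(0, 1), (1, 2), (0, 2), (2, 3)]))  # 1
```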
Natural language interfaces (NLIs) for data visualization are becoming increasingly popular both in academic research and in commercial software. Yet, there is a lack of empirical understanding of how people specify visualizations through natural language. We conducted an online study (N = 102), showing participants a series of visualizations and asking them to provide utterances they would pose to generate the displayed charts. From the responses, we curated a dataset of 893 utterances and characterized the utterances according to (1) their phrasing (e.g., commands, queries, questions) and (2) the information they contained (e.g., chart types, data aggregations). To help guide future research and development, we contribute this utterance dataset and discuss its applications toward the creation and benchmarking of NLIs for visualization.
https://doi.org/10.1145/3411764.3445400
We explore network visualisation on a two-dimensional torus topology that continuously wraps when the viewport is panned. That is, links may be “wrapped” across the boundary, allowing nodes to be spread further apart to reduce visual clutter. Recent work has investigated such pannable wrapped visualisations, finding them no worse than unwrapped drawings of small networks for path-following tasks. However, that work evaluated neither larger networks nor whether torus-based layout might better display high-level network structure such as clusters. We offer two algorithms: one that automatically improves toroidal layout and one that automatically pans the viewport to minimise wrapped links. The resulting layouts afford fewer crossings, less stress, and greater cluster separation. In a study with 32 participants comparing performance on cluster understanding tasks, we find that toroidal visualisation offers significant benefits over standard unwrapped visualisation, reducing error by 62.7% and time by 32.3%.
https://doi.org/10.1145/3411764.3445439
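The core geometric idea behind toroidal layout is that distances wrap: per axis, the shorter of the direct and wrapped displacements is used, so nodes near opposite edges can be linked across the boundary. A minimal sketch of that wrapped distance (the paper's layout algorithms are more involved; function names are ours):

```python
def torus_delta(a, b, size=1.0):
    """Signed shortest displacement from a to b on a 1D torus of given size."""
    d = (b - a) % size
    return d - size if d > size / 2 else d

def torus_distance(p, q, w=1.0, h=1.0):
    """Euclidean distance on a 2D torus: per-axis wrap to the shortest path."""
    dx = torus_delta(p[0], q[0], w)
    dy = torus_delta(p[1], q[1], h)
    return (dx * dx + dy * dy) ** 0.5

# Nodes near opposite edges are close once links may wrap across the boundary.
print(torus_distance((0.05, 0.5), (0.95, 0.5)))  # 0.1, not 0.9
```

A stress- or force-based layout that substitutes this distance for the usual Euclidean one will naturally spread nodes across the wrapped canvas.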
Sketchnoting is a form of visual note taking in which people listen to, synthesize, and visualize ideas from a talk or other event using a combination of pictures, diagrams, and text. Little is known about the design space of this kind of visual note taking. With an eye towards informing the implementation of digital equivalents of sketchnoting, inking, and note taking, we introduce a classification of sketchnote styles and techniques, grounded in a qualitative analysis of 103 sketchnotes and contextualized through six semi-structured follow-up interviews. Our findings distill core sketchnote components (content, layout, structuring elements, and visual styling) and dimensions of the sketchnote design space, classifying levels of conciseness, illustration, structure, personification, cohesion, and craftsmanship. We unpack strategies to address particular note taking challenges, for example dealing with the constraints of live drawing, and discuss relevance for future digital inking tools, such as recomposition, styling, and design suggestions.
https://doi.org/10.1145/3411764.3445508
Plots and tables are commonplace in today's data-driven world, and much research has been done on how to make these figures easy to read and understand. Oftentimes, however, the information they contain conveys only the end result of a complex and subtle data analysis pipeline. This can leave the reader struggling to understand what steps were taken to arrive at a figure, and what implications this has for the underlying results. In this paper, we introduce datamations, which are animations designed to explain the steps that led to a given plot or table. We present the motivation and concept behind datamations, discuss how to programmatically generate them, and provide the results of two large-scale randomized experiments investigating how datamations affect people's ability to understand potentially puzzling results compared to seeing only the final plots and tables containing those results.
https://doi.org/10.1145/3411764.3445063
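A minimal sketch of the underlying idea: decompose an analysis pipeline into one frame per step, from raw data to grouped data to summary. This is illustrative only, not the authors' generator; the pandas pipeline and function name are our own:

```python
import pandas as pd

df = pd.DataFrame({"degree": ["PhD", "PhD", "Masters", "Masters"],
                   "salary": [90, 110, 80, 120]})

def datamation_frames(df, group_col, value_col):
    """Yield one 'frame' per pipeline step: raw data -> grouped -> summarised.
    A real datamation would render each frame as a stage of an animated plot."""
    yield ("raw data", df)
    yield ("grouped by " + group_col, df.sort_values(group_col))
    yield ("mean per group",
           df.groupby(group_col)[value_col].mean().reset_index())

for title, frame in datamation_frames(df, "degree", "salary"):
    print(title, frame, sep="\n")
```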
We present MARVIS, a conceptual framework that combines mobile devices and head-mounted Augmented Reality (AR) for visual data analysis. We propose novel concepts and techniques addressing visualization-specific challenges. By showing additional 2D and 3D information around and above displays, we extend their limited screen space. AR views between displays as well as linking and brushing are also supported, making relationships between separated visualizations apparent. We introduce the design process and rationale for our techniques. To validate MARVIS' concepts and show their versatility and widespread applicability, we describe six implemented example use cases. Finally, we discuss insights from expert hands-on reviews. As a result, we contribute to a better understanding of how the combination of one or more mobile devices with AR can benefit visual data analysis. By exploring this new type of visualization environment, we hope to provide a foundation and inspiration for future mobile data visualizations.
https://doi.org/10.1145/3411764.3445593
Recent research in the area of immersive analytics demonstrated the utility of head-mounted augmented reality devices for visual data analysis. However, it can be challenging to use the mid-air gestures supported by default to interact with visualizations in augmented reality (e.g., due to limited precision). Touch-based interaction (e.g., via mobile devices) can compensate for these drawbacks, but is limited to two-dimensional input. In this work, we present STREAM: Spatially-aware Tablets combined with Augmented Reality Head-Mounted Displays for the multimodal interaction with 3D visualizations. We developed a novel eyes-free interaction concept for the seamless transition between the tablet and the augmented reality environment. A user study reveals that participants appreciated the novel interaction concept, indicating the potential for spatially-aware tablets in augmented reality. Based on our findings, we provide design insights to foster the application of spatially-aware touch devices in augmented reality and research implications indicating areas that need further investigation.
https://doi.org/10.1145/3411764.3445298
In this paper, we present MIRIA, a Mixed Reality Interaction Analysis toolkit designed to support the in-situ visual analysis of user interaction in mixed reality and multi-display environments. So far, there are few options to effectively explore and analyze interaction patterns in such novel computing systems. With MIRIA, we address this gap by supporting the analysis of user movement, spatial interaction, and event data by multiple, co-located users directly in the original environment. Based on our own experiences and an analysis of the typical data, tasks, and visualizations used in existing approaches, we identify requirements for our system. We report on the design and prototypical implementation of MIRIA, which is informed by these requirements and offers various visualizations such as 3D movement trajectories, position heatmaps, and scatterplots. To demonstrate the value of MIRIA for real-world analysis tasks, we conducted expert feedback sessions using several use cases with authentic study data.
https://doi.org/10.1145/3411764.3445651
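As an illustration of one of the visualizations named above, a position heatmap can be built by binning tracked floor positions into an occupancy grid. A minimal sketch (not MIRIA's actual implementation; the function name and the (x, y, z) convention are assumptions):

```python
import numpy as np

def position_heatmap(positions, bounds, bins=32):
    """Bin (x, z) floor positions of tracked users into a 2D occupancy grid."""
    xs = [p[0] for p in positions]
    zs = [p[2] for p in positions]
    (x0, x1), (z0, z1) = bounds
    hist, _, _ = np.histogram2d(xs, zs, bins=bins, range=[[x0, x1], [z0, z1]])
    return hist  # cell counts; normalise and colour-map to render as a heatmap

# Positions are (x, y, z) samples from head tracking; y (height) is ignored.
samples = [(1.0, 1.6, 2.0), (1.1, 1.6, 2.1), (3.0, 1.6, 0.5)]
print(position_heatmap(samples, bounds=((0, 4), (0, 4)), bins=4))
```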
Composite data physicalizations allow for the physical reconfiguration of data points, creating new opportunities for interaction and engagement. However, there is a lack of understanding of people's strategies and behaviors when directly manipulating physical data objects. In this paper, we systematically characterize different reconfiguration strategies using six exemplar physicalizations. We asked 20 participants to reorganize these exemplars with two levels of restriction: changing a single data object versus changing multiple data objects. Our findings show that there were two main reconfiguration strategies used: changes in proximity and changes in atomic orientation. We further characterize these using concrete examples of participant actions in relation to the structure of the physicalizations. We contribute an overview of reconfiguration strategies, which informs the design of future manually reconfigurable and dynamic composite physicalizations.
https://doi.org/10.1145/3411764.3445746
Virtual pets are an alternative to real pets, providing a substitute for people with allergies or preparing people for adopting a real pet. Recent advancements in mixed reality pave the way for virtual pets to provide a more natural and seamless experience for users. However, one key challenge is embedding environmental awareness into the virtual pet (e.g., identifying the food bowl's location) so that it can behave naturally in the real world. We propose a novel approach to synthesize virtual pet behaviors by considering scene semantics, enabling a virtual pet to behave naturally in mixed reality. Given a scene captured from the real world, our approach synthesizes a sequence of pet behaviors (e.g., resting after eating). Then, we assign each behavior in the sequence to a location in the real scene. We conducted user studies to evaluate our approach, and the results showed its efficacy in synthesizing natural virtual pet behaviors.
https://doi.org/10.1145/3411764.3445532
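One simple way to picture behavior-sequence synthesis is a Markov chain over behaviors whose states are then anchored to semantically matching scene locations. This sketch is ours, with made-up transition probabilities and labels; the paper's method is more sophisticated:

```python
import random

# Illustrative transition probabilities between pet behaviors (made up).
TRANSITIONS = {
    "eat":  {"rest": 0.7, "play": 0.3},
    "play": {"eat": 0.4, "rest": 0.6},
    "rest": {"play": 0.5, "eat": 0.5},
}
# Scene semantics: which labelled real-world location suits each behavior.
LOCATIONS = {"eat": "food_bowl", "rest": "sofa", "play": "open_floor"}

def synthesize(start, steps, rng=random.Random(0)):
    """Sample a behavior sequence, then anchor each behavior to a location."""
    seq, state = [], start
    for _ in range(steps):
        seq.append((state, LOCATIONS[state]))
        state = rng.choices(list(TRANSITIONS[state]),
                            weights=list(TRANSITIONS[state].values()))[0]
    return seq

print(synthesize("eat", 4))  # e.g. eat@food_bowl -> rest@sofa -> ...
```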
A common task for scatterplots is communicating trends in bivariate data. However, the ability of people to visually estimate these trends is under-explored, especially when the data violate assumptions required for common statistical models, or visual trend estimates are in conflict with statistical ones. In such cases, designers may need to intervene and de-bias these estimations, or otherwise inform viewers about differences between statistical and visual trend estimations. We propose data-driven mark orientation as a solution in such cases, where the directionality of marks in the scatterplot guide participants when visual estimation is otherwise unclear or ambiguous. Through a set of laboratory studies, we investigate trend estimation across a variety of data distributions and mark directionalities, and find that data-driven mark orientation can help resolve ambiguities in visual trend estimates.
https://doi.org/10.1145/3411764.3445751
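To illustrate data-driven mark orientation, each mark can be rotated to the slope of a fit over the points near it, so mark directionality hints the local trend. A minimal sketch (the neighbourhood definition and parameters are our assumptions, not the paper's):

```python
import math
import numpy as np

def mark_angles(x, y, k=10):
    """For each point, the angle (radians) of a least-squares line fit to its
    k nearest neighbours in x; marks drawn at this angle encode the local trend."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    angles = []
    for i in range(len(x)):
        idx = np.argsort(np.abs(x - x[i]))[:k]     # k nearest neighbours in x
        slope, _ = np.polyfit(x[idx], y[idx], 1)   # local linear fit
        angles.append(math.atan(slope))
    return angles

rng = np.random.default_rng(1)
xs = rng.uniform(0, 1, 50)
ys = 2 * xs + rng.normal(0, 0.1, 50)               # noisy trend of slope 2
print(round(math.degrees(mark_angles(xs, ys)[0])))  # near atan(2) ~ 63 degrees
```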