GestureExplorer: Immersive Visualisation and Exploration of Gesture Data
Description

This paper presents the design and evaluation of GestureExplorer, an Immersive Analytics tool that supports interactive exploration, classification, and sensemaking with large sets of 3D temporal gesture data. GestureExplorer features 3D skeletal and trajectory visualisations of gestures combined with abstract visualisations of clustered sets of gestures. By leveraging the large immersive space afforded by a Virtual Reality interface, our tool lets users navigate freely and control the viewing perspective to gain a better understanding of gestures.

We explored a selection of classification methods to provide an overview of the dataset, linked to a detailed view that offered different visualisation modalities. We evaluated GestureExplorer in two user studies, collecting feedback from participants with diverse visualisation and analytics backgrounds. Our results demonstrate that GestureExplorer provides a useful and engaging experience for exploring and analysing gesture data.
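As a rough illustration of how a clustered overview of a large gesture set could be derived, the Python sketch below resamples skeletal joint trajectories to a fixed length and groups them with k-means. The data layout (frames × joints × 3), the helper names, and the choice of k-means are assumptions made for illustration, not GestureExplorer's actual classification pipeline.

```python
# Hypothetical sketch: building a clustered overview of gesture recordings.
# Assumes each gesture is a NumPy array of shape (frames, joints, 3).
import numpy as np
from sklearn.cluster import KMeans

def resample(gesture: np.ndarray, n_frames: int = 30) -> np.ndarray:
    """Linearly resample a (frames, joints, 3) gesture to a fixed frame count."""
    t_old = np.linspace(0.0, 1.0, len(gesture))
    t_new = np.linspace(0.0, 1.0, n_frames)
    flat = gesture.reshape(len(gesture), -1)
    return np.stack([np.interp(t_new, t_old, flat[:, d]) for d in range(flat.shape[1])], axis=1)

def cluster_gestures(gestures, n_clusters: int = 5) -> np.ndarray:
    """Assign each gesture a cluster label; labels could drive the abstract overview."""
    features = np.stack([resample(g).ravel() for g in gestures])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

# Example with synthetic data: 100 gestures of varying length, 20 joints each.
rng = np.random.default_rng(0)
gestures = [rng.normal(size=(int(rng.integers(40, 90)), 20, 3)) for _ in range(100)]
print(cluster_gestures(gestures)[:10])
```

In a tool of this kind, each cluster could map to one abstract glyph in the overview, with selection revealing the underlying skeletal and trajectory views.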

AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies
Description

Automotive user interface (AUI) evaluation is becoming increasingly complex due to novel interaction modalities, driving automation, heterogeneous data, and dynamic environmental contexts. Immersive analytics may enable efficient exploration of the resulting multilayered interplay between humans, vehicles, and the environment. However, no such tool exists for the automotive domain. With AutoVis, we address this gap by combining a non-immersive desktop with a virtual reality view, enabling mixed-immersive analysis of AUIs. We identify design requirements based on an analysis of AUI research and domain expert interviews (N=5). AutoVis supports analyzing passenger behavior, physiology, spatial interaction, and events in a replicated study environment using avatars, trajectories, and heatmaps. We apply context portals and driving-path events as automotive-specific visualizations. To validate AutoVis against real-world analysis tasks, we implemented a prototype, conducted heuristic walkthroughs using authentic data from a case study and public datasets, and leveraged a real vehicle in the analysis process.
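To make one of the named visualizations concrete, the sketch below bins 2D interaction points on an in-cabin surface into a grid that a renderer could display as a heatmap overlay. The surface dimensions and the interaction_heatmap helper are hypothetical assumptions on our part, not code from AutoVis.

```python
# Hypothetical sketch: aggregating touch/gaze points on an in-cabin surface
# into a normalised grid that could be rendered as a heatmap overlay.
import numpy as np

def interaction_heatmap(points: np.ndarray, width: float, height: float, bins: int = 32) -> np.ndarray:
    """Bin (x, y) interaction points on a width x height surface (metres)."""
    heat, _, _ = np.histogram2d(
        points[:, 0], points[:, 1],
        bins=bins, range=[[0.0, width], [0.0, height]],
    )
    return heat / heat.max() if heat.max() > 0 else heat

# Example: 500 synthetic touches on a 0.30 m x 0.20 m centre-stack display.
rng = np.random.default_rng(1)
touches = rng.uniform([0.0, 0.0], [0.3, 0.2], size=(500, 2))
print(interaction_heatmap(touches, width=0.3, height=0.2).shape)  # (32, 32)
```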

PEARL: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis
Description

This paper presents PEARL, a mixed-reality approach for the analysis of human movement data in situ. As the physical environment shapes human motion and behavior, the analysis of such motion can benefit from the direct inclusion of the environment in the analytical process. We present methods for exploring movement data in relation to surrounding regions of interest, such as objects, furniture, and architectural elements. We introduce concepts for selecting and filtering data through direct interaction with the environment, and a suite of visualizations for revealing aggregated and emergent spatial and temporal relations. More sophisticated analysis is supported through complex queries comprising multiple regions of interest. To illustrate the potential of PEARL, we developed an Augmented Reality-based prototype and conducted expert review sessions and scenario walkthroughs in a simulated exhibition. Our contribution lays the foundation for leveraging the physical environment in the in-situ analysis of movement data.
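As one illustrative reading of how region-of-interest queries over movement data might work, the sketch below filters a trajectory by axis-aligned ROI boxes and evaluates a simple two-region query ("entered A before B"). The box representation and the helper names are assumptions for illustration, not PEARL's actual query model.

```python
# Hypothetical sketch: filtering a movement trajectory by axis-aligned regions
# of interest (ROIs) and evaluating a simple ordered two-region query.
import numpy as np

def in_roi(points: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Boolean mask of trajectory points lying inside an axis-aligned ROI box."""
    return np.all((points >= lo) & (points <= hi), axis=1)

def visited_in_order(points: np.ndarray, roi_a, roi_b) -> bool:
    """True if the trajectory enters ROI A before it first enters ROI B."""
    a = np.flatnonzero(in_roi(points, *roi_a))
    b = np.flatnonzero(in_roi(points, *roi_b))
    return a.size > 0 and b.size > 0 and a[0] < b[0]

# Example: a straight walk (x, y, z in metres) past two hypothetical exhibit regions.
traj = np.linspace([0.0, 0.0, 0.0], [10.0, 0.0, 0.0], 200)
roi_a = (np.array([1.0, -1.0, -1.0]), np.array([3.0, 1.0, 1.0]))
roi_b = (np.array([6.0, -1.0, -1.0]), np.array([8.0, 1.0, 1.0]))
print(visited_in_order(traj, roi_a, roi_b))  # True
```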

Through Their Eyes and In Their Shoes: Providing Group Awareness During Collaboration Across Virtual Reality and Desktop Platforms
Description

Many collaborative data analysis situations benefit from collaborators utilizing different platforms. However, maintaining group awareness between team members using diverging devices is difficult, not least because common ground diminishes. A person using head-mounted VR cannot physically see a user on a desktop computer even while co-located, and the desktop user cannot easily relate to the VR user's 3D workspace. To address this, we propose the "eyes-and-shoes" principles for group awareness and abstract them into four levels of techniques. Furthermore, we evaluate these principles with a qualitative user study of 6 participant pairs synchronously collaborating across distributed desktop and VR head-mounted devices. In this study, we vary the group awareness techniques between participants and explore two visualization contexts within participants. The results of this study indicate that the more the visual metaphors and views of participants diverge, the greater the level of group awareness that is needed. A copy of this paper, the study preregistration, and all supplemental materials required to reproduce the study are available on OSF (osf.io/wgprb/).

ProxSituated Visualization: An Extended Model of Situated Visualization using Proxies for Physical Referents
Description

Existing situated visualization models assume the user is able to directly interact with the objects and spaces to which the data refers (known as physical referents). We review a growing body of work exploring scenarios where the user interacts with a proxy representation of the physical referent rather than immediately with the object itself. This introduces a complex mixture of immediate situatedness and proxies of situatedness that goes beyond the expressiveness of current models. We propose an extended model of situated visualization that encompasses Immediate Situated Visualization and ProxSituated (Proxy of Situated) Visualization. Our model describes a set of key entities involved in proxSituated scenarios and important relationships between them. From this model, we derive design dimensions and apply them to existing situated visualization work. The resulting design space allows us to describe and evaluate existing scenarios, as well as to creatively generate new conceptual scenarios.

DataDancing: An Exploration of the Design Space For Visualisation View Management for 3D Surfaces and Spaces
Description

Recent studies have explored how users of immersive visualisation systems arrange data representations in the space around them. Generally, these have focused on placement centred at eye-level in absolute room coordinates. However, work in HCI exploring full-body interaction has identified zones relative to the user's body with different roles. We encapsulate the possibilities for visualisation view management into a design space (called “DataDancing”). From this design space we extrapolate a variety of view management prototypes, each demonstrating a different combination of interaction techniques and space use. The prototypes are enabled by a full-body tracking system including novel devices for torso and foot interaction. We explore four of these prototypes, encompassing standard wall and table-style interaction as well as novel foot interaction, in depth through a qualitative user study. Learning from the results, we improve the interaction techniques and propose two hybrid interfaces that demonstrate interaction possibilities of the design space.
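As one way to make body-relative view placement concrete, the sketch below converts a view anchor between room coordinates and a torso-centred frame, so that a panel can stay in a zone relative to the user rather than at a fixed eye-level room position. The frame definition and function names are our assumptions, not the paper's implementation.

```python
# Hypothetical sketch: converting a view anchor between room coordinates and a
# torso-centred, yaw-aligned frame for body-relative view placement.
import numpy as np

def world_to_body(point: np.ndarray, torso_pos: np.ndarray, torso_yaw: float) -> np.ndarray:
    """Express a world-space point in the torso-centred frame (rotation about the up axis)."""
    c, s = np.cos(-torso_yaw), np.sin(-torso_yaw)
    rot = np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])
    return rot @ (point - torso_pos)

def body_to_world(point: np.ndarray, torso_pos: np.ndarray, torso_yaw: float) -> np.ndarray:
    """Inverse transform: place a body-relative anchor back into room coordinates."""
    c, s = np.cos(torso_yaw), np.sin(torso_yaw)
    rot = np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])
    return rot @ point + torso_pos

# Example: keep a panel 0.5 m in front of the torso at chest height as the user turns.
anchor_body = np.array([0.0, 1.2, 0.5])
print(body_to_world(anchor_body, torso_pos=np.array([2.0, 0.0, 3.0]), torso_yaw=np.pi / 2))
```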
