Immersive and Spatial Visualization

Conference Name
CHI 2026
Visualising Pianists' Touch: Transcribing Expressive Piano Performance from Audio to Piano Key Motion
Abstract

Detailed measurements of piano key motion capture touch, timing, and dynamic control, providing crucial performance insights. Such expressive gestures are overlooked in MIDI, which records only pitch onset, duration, and velocity. Here, we introduce a novel transcription technique that directly maps audio from expressive piano performance to continuous piano key motion. User studies reveal a preference for the transcribed key motion trajectories over MIDI in representing sound, with over 80% accuracy in matching transcribed trajectories to audio from contrasting piano expressions. Follow-up interviews further indicate that the visualised trajectories can reveal subtle performance nuances and provide actionable guidance for both teaching and practice. An example interface for pedagogy and performance analysis utilising our technique is also illustrated. By providing a physically grounded performance representation that musicians can interpret and act upon, this work establishes a foundation for future interactive tools in music pedagogy, performance feedback, and embodied musical learning.

Authors
Jingjing Tang
Queen Mary University of London, London, United Kingdom
Shinichi Furuya
Sony Computer Science Laboratories Inc., Shinagawa, Tokyo, Japan
Hayato Nishioka
Sony Computer Science Laboratories Inc., Tokyo, Japan
Momoko Shioki
Sony Computer Science Laboratories Inc., Shinagawa, Tokyo, Japan
Geraint A. Wiggins
Queen Mary University of London, London, United Kingdom
George Fazekas
Queen Mary University of London, London, United Kingdom
Vincent K. M. Cheung
Sony Computer Science Laboratories Inc., Shinagawa, Tokyo, Japan
留白 (Liubai) at a Hushed Sanctuary: Layered Reflections on an Artist Residency
Abstract

Considering that silence has long been intertwined with ritual and spiritual practice, we explore how digital technology might support silence, thereby allowing space for reflection, attunement, and meaning-making. How does the Chinese aesthetic concept of liubai (留白, “empty space”) open up new ways of designing for noticing and reflection? In this paper, we present lived experiences of shared silence and meditation within a one-month artist residency. By weaving together field study, interview data, first-person inquiry, and artistic artefacts, we offer empirical insights at the intersection of art, spirituality, and HCI. Through this study, the residency became a site both to experiment with artistic practice and to explore silence as a positive and creative practice for attentive noticing. We discuss dwelling in the in-between, the art of liubai in design, a technical inward turn, and posthuman perspectives to inform a design agenda for techno-spirituality with broader implications for future research in HCI.

Authors
Xiaran Song
Aalto University, Espoo, Finland
Caroline Claisse
Newcastle University, Newcastle upon Tyne, United Kingdom
Andrés Lucero
Aalto University, Espoo, Finland
Anu.js: Accelerating Web-based Immersive Analytics
Abstract

We present Anu.js, a toolkit for web-based immersive analytics (IA). The IA design space is vast, multi-faceted, and ever-changing, challenging development in the absence of robust authoring support. The web is a popular platform for visualization applications, research, and teaching, and by leveraging the benefits of web technologies and adopting imperative authoring paradigms we can achieve the expressiveness, compatibility, and ergonomics necessary to support IA research and development. Anu.js adapts D3’s data-binding model to 3D contexts, granting fine-grained control over the creation, representation, animation, performance, and interaction of 3D scene-graphs. Additionally, Anu.js offers declarative prefabs to support common visualization elements and interactions, and synergizes with popular visualization libraries, allowing developers to leverage these proven utilities. We demonstrate Anu.js’s potential through our diverse example gallery, expert evaluation, and potential future applications. Through this, Anu.js empowers developers to accelerate the creation of novel and bespoke visualizations for immersive web-based applications.

Authors
David Saffo
J.P. Morgan Chase & Co., New York, New York, United States
Benjamin Lee
J.P. Morgan Chase & Co., New York, New York, United States
Feiyu Lu
J.P. Morgan Chase & Co., New York, New York, United States
Cheng Yao Wang
J.P. Morgan Chase & Co., New York, New York, United States
Blair MacIntyre
J.P. Morgan Chase & Co., New York, New York, United States
SVATA: A Spatial Visual Attention Tracking and Analysis Platform for Embodied Cognition Research
Abstract

Understanding spatial visual attention is important for embodied cognition research, yet practical platforms for 3D attention analysis remain limited. We present SVATA—Spatial Visual Attention Tracking and Analysis, an open-source platform that supports an end-to-end workflow for collecting, analyzing, and visualizing world-referenced gaze-and-movement data within a 3D spatial context. SVATA maps multimodal signals (gaze, head, position) onto reconstructed geometry and computes a physiologically informed Average Focus Weight (AFW/m²) metric as a proxy for overt visual focus. This representation supports structured analysis and multidimensional visualization of spatial viewing patterns. We evaluated SVATA through an in-the-wild museum deployment with 78 visitors and an expert study with seven prospective users; the results suggest its feasibility and perceived utility for analyzing spatial viewing behavior in embodied cognition research.

Authors
Xuchao Ren
Tsinghua University, Beijing, China
Jing Huang
Tsinghua University, Beijing, China
Siyuan Feng
Tsinghua University, Beijing, China
Jiangtao Gong
Tsinghua University, Beijing, China
Sai Ma
Tsinghua University, Beijing, China
Yi Wei
Institute of Human Factors and Human-System Interaction, Beijing, China
SCORE: A Framework for Quantifying Diegesis in Situated Visualization for Augmented Reality
Abstract

A central goal of Augmented Reality (AR)-based Situated Visualization (SV) is to seamlessly integrate digital information into its relevant physical context. While existing frameworks describe numerous design dimensions, the field lacks a rigorous model to evaluate this integration. To address this, we introduce SCORE, a framework for quantifying diegesis, a concept from narratology describing the extent to which an element belongs to its narrative world. Grounded in a systematic analysis of 50 contemporary SV works, SCORE defines five dimensions of diegesis in SV: Spatial proximity, Concreteness, cOherence, Referential context, and Environmental context. In addition to qualitative comparison, our framework also provides a quantitative measure of diegesis, enabling SCORE to distinguish AR-based SVs that prior models have treated as theoretically equivalent. We validate the framework by demonstrating a consistent correlation between higher scores and positive usability outcomes. Based on these findings, we offer insights for SV designers.

Authors
Tarik Hasan
The University of British Columbia, Kelowna, British Columbia, Canada
Khalad Hasan
University of British Columbia, Kelowna, British Columbia, Canada
Barrett Ens
The University of British Columbia (Okanagan Campus), Kelowna, British Columbia, Canada
Less is More! Visual Suppression for Bottom-up and Top-down Attention in Dynamic Environments
Abstract

Dynamic virtual environments pose growing challenges for users who must manage attention across competing visual elements, where distractors can divert focus from relevant objects. Because human attention functions as a filter, it is shaped by competing influences from bottom-up salience and top-down relevance. We explore the salience and relevance of objects and introduce suppression-based visual filtering mechanisms, implemented through Dim and Blur visual filters at Weak and Strong intensity levels. A controlled abstract virtual environment with colorful moving objects was used to evaluate these against a Baseline (no filtering) across nine varied salience-relevance situations, involving 38 participants in visual search and sustained monitoring tasks. Results showed that visual suppression enhanced participants' attention over Baseline, with Dim outperforming Blur, Strong exceeding Weak, and Dim-Strong achieving the best performance overall. These findings support the principle of attention redistribution and offer insights for domains involving objects with varying salience and relevance.

Authors
Chenkai Zhang
Adelaide University, Adelaide, South Australia, Australia
Ruochen Cao
Taiyuan University of Technology, Taiyuan, Shanxi, China
Andrew Cunningham
Adelaide University, Adelaide, South Australia, Australia
James A. Walsh
Adelaide University, Adelaide, South Australia, Australia
Can AR Embedded Visualizations Foster Appropriate Reliance on AI in Spatial Decision-Making? A Comparative Study of AR X-Ray vs. 2D Minimap
Abstract

Artificial Intelligence (AI) and indoor sensing increasingly support decision-making in spatial environments. However, traditional visualization methods impose a substantial mental workload when viewers translate this digital information into real-world spaces, leading to inappropriate reliance on AI. Embedded visualizations in Augmented Reality (AR), by integrating information into physical environments, may reduce this workload and foster more appropriate reliance on AI. To assess this, we conducted an empirical study (N = 32) comparing an AR embedded visualization (X-ray) and a 2D Minimap in AI-assisted, time-critical spatial target selection tasks. Surprisingly, the evidence shows that the embedded visualization led to greater inappropriate reliance on AI, primarily over-reliance, due to factors such as perceptual challenges, visual proximity illusions, and highly realistic visual representations. Nonetheless, the embedded visualization showed benefits in spatial mapping. We conclude by discussing empirical insights, design implications, and directions for future research on human-AI collaborative decision-making in AR.

Authors
Xianhao Carton Liu
University of Minnesota, Minneapolis, Minnesota, United States
Difan Jia
University of Minnesota, Minneapolis, Minnesota, United States
Tongyu Nie
University of Minnesota, Minneapolis, Minnesota, United States
Evan Suma Rosenberg
University of Minnesota, Minneapolis, Minnesota, United States
Victoria Interrante
University of Minnesota, Minneapolis, Minnesota, United States
Chen Zhu-Tian
University of Minnesota-Twin Cities, Minneapolis, Minnesota, United States