Detailed measurements of piano key motion capture touch, timing, and dynamic control, providing crucial performance insights. Such expressive gestures are overlooked by MIDI, which records only pitch, onset, duration, and velocity. Here, we introduce a novel transcription technique that directly maps audio from expressive piano performance to continuous piano key motion. User studies reveal a preference for the transcribed key-motion trajectories over MIDI in representing sound, and over 80% accuracy in matching transcribed trajectories to audio from contrasting piano expressions. Follow-up interviews further indicate that the visualised trajectories can reveal subtle performance nuances and provide actionable guidance for both teaching and practice. An interface example for pedagogy and performance analysis utilising our technique is also illustrated. By providing a physically grounded performance representation that musicians can interpret and act upon, this work establishes a foundation for future interactive tools in music pedagogy, performance feedback, and embodied musical learning.
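To make the representational gap concrete, here is a minimal TypeScript sketch (type names and sample values are hypothetical, not from the paper) contrasting a discrete MIDI note event with a continuous key-motion trajectory of the kind the transcription technique produces:

```typescript
// MIDI reduces each note to a handful of expressive attributes.
interface MidiNote {
  pitch: number;     // MIDI note number, e.g. 60 = middle C
  onset: number;     // seconds
  duration: number;  // seconds
  velocity: number;  // 0-127
}

// A transcribed trajectory instead samples key depression continuously.
interface KeyTrajectory {
  pitch: number;
  sampleRateHz: number;      // e.g. 500 Hz (illustrative)
  depression: Float32Array;  // normalized key depth in [0, 1] per sample
}

// Example: the same short middle C in both representations.
const midi: MidiNote = { pitch: 60, onset: 1.2, duration: 0.15, velocity: 96 };
const trajectory: KeyTrajectory = {
  pitch: 60,
  sampleRateHz: 500,
  // Rapid descent, brief hold at the key bed, then release: nuances of
  // touch and release speed that the discrete MIDI event discards.
  depression: Float32Array.from({ length: 250 }, (_, i) =>
    i < 20 ? i / 20 : i < 75 ? 1 : Math.max(0, 1 - (i - 75) / 30)
  ),
};
```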
Silence has long been intertwined with ritual and spiritual practice, and we explore how digital technology might support silence, thereby allowing space for reflection, attunement, and meaning-making. How does the Chinese aesthetic concept of liubai (留白, “empty space”) open up new ways of designing for noticing and reflection? In this paper, we present lived experiences of shared silence and meditation within a one-month artist residency. By weaving together field study, interview data, first-person inquiry, and artistic artefacts, we offer empirical insights at the intersection of art, spirituality, and HCI. Through this study, the residency became a site both to experiment with artistic practice and to explore silence as a positive and creative practice for attentive noticing. We discuss dwelling in the in-between, the art of liubai in design, a technical inward turn, and posthuman perspectives to inform a design agenda for techno-spirituality, with broader implications for future research in HCI.
We present Anu.js, a toolkit for web-based immersive analytics (IA). The IA design space is vast, multi-faceted, and ever-changing, which makes development challenging in the absence of robust authoring support. The web is a popular platform for visualization applications, research, and teaching, and by leveraging the benefits of web technologies and adopting imperative authoring paradigms we can achieve the expressiveness, compatibility, and ergonomics needed to support IA research and development. Anu.js adapts D3’s data-binding model to 3D contexts, granting fine-grained control over the creation, representation, animation, performance, and interaction of 3D scene graphs. Additionally, Anu.js offers declarative prefabs to support common visualization elements and interactions, and interoperates with popular visualization libraries, allowing developers to leverage these proven utilities. We demonstrate Anu.js’s potential through a diverse example gallery, an expert evaluation, and potential future applications. In doing so, Anu.js empowers developers to accelerate the creation of novel and bespoke visualizations for immersive web-based applications.
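For readers unfamiliar with the paradigm Anu.js adapts, the sketch below shows D3’s own 2D data-join, in which data elements are bound to nodes and enter/update/exit semantics drive creation, styling, and removal. This is standard D3, not Anu.js’s API; Anu.js applies analogous join semantics to nodes of a 3D scene graph rather than DOM elements.

```typescript
import * as d3 from 'd3';

const data = [4, 8, 15, 16, 23, 42];

d3.select('svg')
  .selectAll('circle')
  .data(data)                       // bind one datum per circle
  .join('circle')                   // create/reuse/remove nodes as needed
  .attr('cx', (_d, i) => i * 30 + 20)
  .attr('cy', 50)
  .attr('r', d => Math.sqrt(d));    // encode the datum in the radius
```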
Understanding spatial visual attention is important for embodied cognition research, yet practical platforms for 3D attention analysis remain limited. We present SVATA—Spatial Visual Attention Tracking and Analysis, an open-source platform that supports an end-to-end workflow for collecting, analyzing, and visualizing world-referenced gaze-and-movement data within a 3D spatial context. SVATA maps multimodal signals (gaze, head, position) onto reconstructed geometry and computes a physiologically informed Average Focus Weight (AFW/m²) metric as a proxy for overt visual focus. This representation supports structured analysis and multidimensional visualization of spatial viewing patterns. We evaluated SVATA through an in-the-wild museum deployment with 78 visitors and an expert study with seven prospective users; the results suggest its feasibility and perceived utility for analyzing spatial viewing behavior in embodied cognition research.
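The abstract does not give the AFW/m² formula, so the following is only a hypothetical sketch of one plausible reading: accumulate dwell-weighted focus per reconstructed surface and normalize by that surface’s area. All names and the weighting scheme are illustrative, not SVATA’s actual definition.

```typescript
interface GazeHit {
  surfaceId: string; // reconstructed geometry patch the gaze ray hit
  dwellSec: number;  // time the gaze remained on this hit
  weight: number;    // physiological weighting, e.g. foveal-eccentricity falloff
}

function averageFocusWeight(
  hits: GazeHit[],
  areaM2: Map<string, number>, // surface areas in square meters
): Map<string, number> {
  // Sum dwell-weighted focus per surface.
  const totals = new Map<string, number>();
  for (const h of hits) {
    totals.set(h.surfaceId, (totals.get(h.surfaceId) ?? 0) + h.dwellSec * h.weight);
  }
  // Normalize by surface area to get a per-m² focus density.
  const afw = new Map<string, number>();
  for (const [id, total] of totals) {
    const area = areaM2.get(id);
    if (area && area > 0) afw.set(id, total / area);
  }
  return afw;
}
```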
A central goal of Augmented Reality (AR)-based Situated Visualization (SV) is to seamlessly integrate digital information into its relevant physical context. While existing frameworks describe numerous design dimensions, the field lacks a rigorous model for evaluating this integration. To address this, we introduce SCORE, a framework for quantifying diegesis, a concept from narratology describing the extent to which an element belongs to its narrative world. Grounded in a systematic analysis of 50 contemporary SV works, SCORE defines five dimensions of diegesis in SV: Spatial proximity, Concreteness, cOherence, Referential context, and Environmental context. Beyond qualitative comparison, the framework also provides a quantitative measure of diegesis, enabling SCORE to distinguish AR-based SVs that prior models have treated as theoretically equivalent. We validate the framework by demonstrating a consistent correlation between higher scores and positive usability outcomes. Based on these findings, we offer insights for SV designers.
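The abstract names the five dimensions but not how they aggregate into a single quantitative measure, so the snippet below assumes, purely for illustration, a weighted mean over normalized dimension ratings; SCORE’s actual rating scales and aggregation rule may differ.

```typescript
interface ScoreRatings {
  spatialProximity: number;     // S, rated in [0, 1] (assumed scale)
  concreteness: number;         // C
  coherence: number;            // O
  referentialContext: number;   // R
  environmentalContext: number; // E
}

// Hypothetical aggregation: uniform weights by default; higher = more diegetic.
function diegesisScore(r: ScoreRatings, weights = [1, 1, 1, 1, 1]): number {
  const dims = [
    r.spatialProximity, r.concreteness, r.coherence,
    r.referentialContext, r.environmentalContext,
  ];
  const weighted = dims.reduce((sum, v, i) => sum + v * weights[i], 0);
  const total = weights.reduce((a, b) => a + b, 0);
  return weighted / total;
}
```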
Dynamic virtual environments pose growing challenges for users who must manage attention across competing visual elements, where distractors can divert focus from relevant objects. Human attention functions as a filter, shaped by competing influences from bottom-up salience and top-down relevance. We explore the salience and relevance of objects and introduce suppression-based visual filtering mechanisms, implemented as Dim and Blur visual filters at Weak and Strong intensity levels. We evaluated these against a Baseline (no filtering) in a controlled abstract virtual environment with colorful moving objects, across nine varied salience-relevance situations, with 38 participants performing visual search and sustained monitoring tasks. Results showed that visual suppression enhanced participants' attention over Baseline, with Dim outperforming Blur, Strong exceeding Weak, and Dim-Strong achieving the best overall performance. These findings support the principle of attention redistribution and offer insights for domains involving objects of varying salience and relevance.
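A minimal sketch of the suppression idea, with hypothetical names and parameter values (the study’s actual intensity levels are not given in the abstract): task-irrelevant distractors are dimmed or blurred at one of two intensities, while relevant objects are left untouched.

```typescript
type FilterKind = 'dim' | 'blur';
type Intensity = 'weak' | 'strong';

interface SceneObject {
  relevant: boolean;    // top-down relevance from the current task
  brightness: number;   // 1 = unmodified
  blurRadiusPx: number; // 0 = sharp
}

// Illustrative parameter choices only.
const DIM_FACTOR = { weak: 0.6, strong: 0.3 };
const BLUR_RADIUS = { weak: 2, strong: 6 };

function applySuppression(objects: SceneObject[], kind: FilterKind, level: Intensity): void {
  for (const obj of objects) {
    if (obj.relevant) continue; // never suppress task-relevant objects
    if (kind === 'dim') obj.brightness *= DIM_FACTOR[level];
    else obj.blurRadiusPx = Math.max(obj.blurRadiusPx, BLUR_RADIUS[level]);
  }
}
```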
Artificial Intelligence (AI) and indoor sensing increasingly support decision-making in spatial environments.
However, traditional visualization methods impose a substantial mental workload when viewers translate this digital information into real-world spaces, leading to inappropriate reliance on AI. Embedded visualizations in Augmented Reality (AR), by integrating information into physical environments, may reduce this workload and foster more appropriate reliance on AI.
To assess this, we conducted an empirical study (N = 32) comparing an AR embedded visualization (X-ray) with a 2D Minimap in AI-assisted, time-critical spatial target selection tasks. Surprisingly, the embedded visualization led to greater inappropriate reliance on AI, primarily over-reliance, driven by factors such as perceptual challenges, visual proximity illusions, and highly realistic visual representations.
Nonetheless, the embedded visualization showed benefits in spatial mapping. We conclude by discussing empirical insights, design implications, and directions for future research on human-AI collaborative decision-making in AR.
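As one concrete reading of the reliance terms above, the human-AI decision-making literature commonly operationalizes over-reliance as following incorrect AI recommendations and under-reliance as rejecting correct ones. The sketch below uses that standard definition; the names are illustrative and not taken from the paper.

```typescript
interface Trial {
  aiCorrect: boolean;  // was the AI's recommendation correct?
  followedAi: boolean; // did the participant accept it?
}

function relianceRates(trials: Trial[]) {
  const wrongAi = trials.filter(t => !t.aiCorrect);
  const rightAi = trials.filter(t => t.aiCorrect);
  return {
    // Over-reliance: accepting the AI when it is wrong.
    overReliance: wrongAi.filter(t => t.followedAi).length / Math.max(wrongAi.length, 1),
    // Under-reliance: rejecting the AI when it is right.
    underReliance: rightAi.filter(t => !t.followedAi).length / Math.max(rightAi.length, 1),
  };
}
```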