Proper placement of sensors and actuators is a key factor in designing spatial and proxemic interactions. However, current physical computing tools do not effectively support placing components in three-dimensional space, often forcing designers to build and test prototypes without precise spatial configuration. To address this, we propose the concept of spatial physical computing and present SpatIO, an XR-based physical computing toolkit that supports a continuous end-to-end workflow. SpatIO consists of three interconnected subsystems: SpatIO Environment for composing and testing prototypes with virtual sensors and actuators, SpatIO Module for converting virtually placed components into physical ones, and SpatIO Code for authoring interactions with spatial visualization of data flow. Through a comparative user study with 20 designers, we found that SpatIO significantly altered workflow order, encouraged broader exploration of component placement, strengthened the spatial correlation between code and components, and promoted in-situ bodily testing.
https://dl.acm.org/doi/10.1145/3706598.3713747
Visual storytelling combines visuals and narratives to communicate important insights. While web-based visual storytelling is well-established, leveraging the next generation of digital technologies for visual storytelling, specifically immersive technologies, remains underexplored. We investigated the impact of story viewpoint (the audience's perspective) and navigation (how the audience progresses through the story) on spatial immersion and understanding. First, we collected web-based 3D stories and elicited design considerations from three VR developers. We then adapted four selected web-based stories to an immersive format. Finally, we conducted a user study (N=24) to examine egocentric and exocentric viewpoints, active and passive navigation, and their combinations. Our results indicated significantly higher preferences for egocentric+active (higher agency and engagement) and exocentric+passive (higher focus on content). We also found a marginally significant effect of viewpoint on story understanding and a strongly significant effect of navigation on spatial immersion.
https://dl.acm.org/doi/10.1145/3706598.3713849
The Steering Law has long been a fundamental model for predicting movement time in tasks that involve navigating constrained paths, such as selecting sub-menu options, particularly for straight and circular-arc trajectories. However, it does not reflect the complexity of real-world tasks, where curvature can vary arbitrarily, limiting its applicability. This study addresses this gap by introducing a total curvature parameter K into the equation to account for the overall curviness of a path. To validate this extension, we conducted a mouse-steering experiment on fixed-width paths with varying lengths and curviness levels. Our results demonstrate that introducing K significantly improves model fit for movement time prediction over traditional models. These findings advance our understanding of movement in complex environments and support potential applications in fields such as speech motor control and virtual navigation.
https://dl.acm.org/doi/10.1145/3706598.3713102
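A minimal sketch of the curvature-extended prediction described above. The abstract does not give the fitted model form or coefficients, so the additive form MT = a + b(L/W) + cK, the coefficient values, and the discrete turning-angle approximation of K below are all illustrative assumptions:

```python
import numpy as np

def total_curvature(points):
    """Approximate total curvature K of a discretized 2D path as the
    sum of absolute turning angles between successive segments."""
    p = np.asarray(points, dtype=float)
    v = np.diff(p, axis=0)                    # segment vectors
    headings = np.arctan2(v[:, 1], v[:, 0])   # segment directions
    turns = np.diff(headings)
    turns = (turns + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return float(np.abs(turns).sum())

def predicted_mt(points, width, a=0.1, b=0.2, c=0.05):
    """Hypothetical curvature-extended Steering Law for a fixed-width
    path: MT = a + b * (L / W) + c * K, where L is path length, W is
    path width, and K is total curvature. The additive form and the
    coefficients are illustrative, not the paper's fitted model."""
    p = np.asarray(points, dtype=float)
    L = float(np.linalg.norm(np.diff(p, axis=0), axis=1).sum())
    return a + b * (L / width) + c * total_curvature(p)

# Example: an S-shaped path, 40 px wide
t = np.linspace(0.0, 1.0, 200)
path = np.column_stack([600 * t, 80 * np.sin(2 * np.pi * t)])
print(f"K = {total_curvature(path):.2f} rad, "
      f"predicted MT = {predicted_mt(path, width=40):.2f} s")
```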
Collaborative Mixed Reality (MR) enables embodied meetings for distributed collaborators working across a variety of locations. However, providing a coherent experience for all users regardless of the spatial configuration of their respective physical environments is a central challenge. We present the Spatial Heterogeneity Framework, which breaks the problem into four core components: activity zones, a heterogeneity ladder, blended proxemics, and an MR solutions matrix. We explain the interplay between these components, demonstrating their interconnectivity through a case study. Our framework enables researchers to navigate the differences and trade-offs between solutions for distributed MR collaboration. It also supports designers in thinking about the role of space, technology, and social behaviours in MR collaboration. Ultimately, our contributions advance the field by conceptualising the challenges of spatial heterogeneity and strategies for overcoming them.
https://dl.acm.org/doi/10.1145/3706598.3714033
We present a sensory substitution-based method for representing the locations of remote objects in 3D space via haptics. By imitating auditory localization processes, we enable vibrotactile localization abilities similar to those of some spiders, elephants, and other species. We evaluated this concept in virtual reality by modulating the vibration amplitude of two controllers according to their locations relative to a target. We developed two implementations of this method, using either ear or hand locations. A proof-of-concept study assessed localization performance and user experience: participants differentiated horizontal targets less than 30° apart with no prior training. This approach enables localization with only two actuators, requires little computational power, and could help users gain spatial awareness in challenging environments. We compare the two implementations and discuss the use of hands as ears in motion, a technique not previously explored in the sensory substitution literature.
https://dl.acm.org/doi/10.1145/3706598.3714083
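A minimal sketch of the amplitude-modulation idea, assuming an inverse-power falloff of vibration amplitude with each actuator's distance to the target; the paper's actual transfer function is not specified in the abstract:

```python
import numpy as np

def vibration_amplitudes(left, right, target, exponent=2.0):
    """Drive two vibrotactile actuators (e.g., VR controllers) with
    amplitudes that fall off with each actuator's distance to the
    target, imitating interaural level differences. The inverse-power
    falloff and the normalization are illustrative assumptions."""
    left, right, target = (np.asarray(x, dtype=float)
                           for x in (left, right, target))
    d = np.array([np.linalg.norm(target - left),
                  np.linalg.norm(target - right)])
    raw = d ** -exponent       # closer actuator -> stronger vibration
    return raw / raw.max()     # nearer side vibrates at full strength

# Example: hands 40 cm apart, target ahead and to the right
amp_left, amp_right = vibration_amplitudes(
    left=[-0.2, 0.0, 0.0], right=[0.2, 0.0, 0.0], target=[0.8, 0.0, 0.5])
print(f"left: {amp_left:.2f}, right: {amp_right:.2f}")
```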
Volumetric displays render true 3D graphics without requiring users to wear headsets or glasses. However, the optical diffusers that volumetric displays employ are rigid and thus do not allow direct interaction. FlexiVol employs elastic diffusers that let users reach inside the display volume and interact directly with true 3D content. We explore various diffuser materials in terms of their visual and mechanical properties, correct the distortions of volumetric graphics projected on elastic oscillating diffusers, and propose a design space for FlexiVol that enables various gestures and actions through direct interaction techniques. A user study suggests that selection, docking, and tracing tasks can be performed faster and more precisely with direct interaction than with indirect interaction via a 3D mouse. Finally, applications such as a virtual pet and landscape editing highlight the advantages of a volumetric display that supports direct interaction.
https://dl.acm.org/doi/10.1145/3706598.3714315
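A minimal sketch of slice scheduling for an oscillating-diffuser volumetric display, with a hypothetical depth offset standing in for elastic-deformation correction. FlexiVol's actual distortion-correction pipeline is not described in the abstract; the sinusoidal motion model and the `corrected_depth` helper are assumptions:

```python
import numpy as np

def diffuser_depth(t, z_min, z_max, freq):
    """Instantaneous depth of a sinusoidally oscillating diffuser.
    Swept-surface volumetric displays project, at each instant, the
    image slice matching the diffuser's current depth; the sinusoidal
    motion model here is an illustrative assumption."""
    mid, amp = (z_min + z_max) / 2.0, (z_max - z_min) / 2.0
    return mid + amp * np.sin(2.0 * np.pi * freq * t)

def corrected_depth(t, z_min, z_max, freq, deformation=0.0):
    """Hypothetical elastic-diffuser correction: when touch or membrane
    dynamics displace the surface by `deformation`, shift the slice
    selection so voxels still appear at their true 3D positions."""
    return diffuser_depth(t, z_min, z_max, freq) + deformation

# Example: choose a slice for a 60 Hz sweep over a 10 cm deep volume,
# with the membrane pressed 5 mm inward by a fingertip
z = corrected_depth(t=0.004, z_min=0.0, z_max=0.10, freq=60,
                    deformation=-0.005)
n_slices = 100
idx = int(np.clip(round(z / 0.10 * (n_slices - 1)), 0, n_slices - 1))
print(f"depth {z * 100:.1f} cm -> slice {idx}")
```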
Blind or low-vision (BLV) screen-reader users have a significantly more limited experience interacting with desktop websites than non-BLV, i.e., sighted, users. This digital divide is exacerbated by the inability to browse the web spatially, an affordance that leverages the spatial reasoning sighted users often rely on. In this work, we investigate the value of, and opportunities for, BLV screen-reader users browsing websites spatially (e.g., understanding page layouts). We additionally explore at-scale website layout understanding as a feature of desktop screen readers. To facilitate our investigation, we created a technology probe, WebNExt, and evaluated spatial browsing with it through a lab study with eight participants and a five-day field study with four participants. Our findings show that participants found spatial browsing intuitive and fulfilling, strengthening their connection to the design of web pages. Furthermore, participants envisioned spatial browsing as a step toward reducing the digital divide.
https://dl.acm.org/doi/10.1145/3706598.3714125