Reality Rifts are interfaces between physical and virtual reality, where incoherent observations of physical behavior lead users to imagine comprehensive and plausible end-to-end dynamics. Reality Rifts emerge in interactive physical systems that lack one or more components central to their operation, yet whose physical end-to-end interaction persists with plausible outcomes. Even in the presence of a Reality Rift, users can still interact with a system much as they would with its unaltered, complete counterpart, leading them to implicitly infer the existence, and imagine the behavior, of the missing components from observable phenomena and outcomes. Dynamic systems with Reality Rifts therefore trigger doubt, curiosity, and rumination: a sense of wonder, rooted in users' innate curiosity, that they experience when observing a Reality Rift.
In this paper, we explore how interactive systems can elicit and guide the user's imagination by integrating Reality Rifts. We outline the design process for opening a Reality Rift in interactive physical systems, describe the resulting design space, and explore it through six characteristic prototypes. To understand to what extent, and with which qualities, these prototypes indeed induce a sense of wonder during an interaction, we evaluated Reality Rifts in a field deployment with 50 participants. We discuss participants' behavior and derive factors for the implementation of future wonder-ful experiences.
This paper introduces Teachable Reality, an augmented reality (AR) prototyping tool for creating interactive tangible AR applications with arbitrary everyday objects. Teachable Reality leverages vision-based interactive machine teaching (e.g., Teachable Machine) to capture real-world interactions for AR prototyping, identifying user-defined tangible and gestural interactions with an on-demand computer vision model. On this basis, users can create functional AR prototypes without programming via a trigger-action authoring interface. Our approach thus affords flexible, customizable, and generalizable tangible AR applications, addressing the limitations of current marker-based approaches. We explore the design space and demonstrate various AR prototypes, including tangible and deformable interfaces, context-aware assistants, and body-driven AR applications. The results of our user study and expert interviews confirm that our approach can lower the barrier to creating functional AR prototypes while allowing flexible and general-purpose prototyping experiences.
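To make the trigger-action idea concrete, here is a minimal sketch of how a vision classifier's per-frame predictions could drive user-defined actions. All names here (Rule, TriggerActionEngine, the "mug_lifted" label) are illustrative assumptions, not Teachable Reality's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    trigger: str                 # class label predicted by the vision model
    action: Callable[[], None]   # AR behavior to run when the label appears

class TriggerActionEngine:
    """Fires a rule's action when the classifier confidently enters a new state."""
    def __init__(self, rules: List[Rule]):
        self.rules: Dict[str, Rule] = {r.trigger: r for r in rules}
        self._last_label = None

    def on_frame(self, label: str, confidence: float, threshold: float = 0.8) -> None:
        # Debounce: act only on a confident *change* of predicted state,
        # so a held pose does not re-trigger on every camera frame.
        if confidence >= threshold and label != self._last_label:
            self._last_label = label
            rule = self.rules.get(label)
            if rule is not None:
                rule.action()

# Hypothetical usage: the label "mug_lifted" would come from a model the
# user taught with a few example frames, as in Teachable Machine.
engine = TriggerActionEngine([Rule("mug_lifted", lambda: print("show coffee overlay"))])
engine.on_frame("mug_lifted", confidence=0.93)
```

Firing only on a change of predicted state is one simple way such an engine could avoid re-triggering the same action on every frame of a sustained interaction.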
Mixed Reality allows for distributed meetings where people's local physical spaces are virtually aligned into blended interaction spaces. In many cases, people's physical rooms are dissimilar, making it challenging to design a coherent blended space. We introduce the concept of Partially Blended Realities (PBR): using Mixed Reality to support remote collaborators in partially aligning their physical spaces. As physical surfaces are central to collaborative work, PBR supports users in transitioning between different configurations of table and whiteboard surfaces. In this paper, we 1) describe the design space of PBR, 2) present RealityBlender, a prototype exploring interaction techniques for configuring and transitioning between blended spaces, and 3) provide insights from a study of how users experience transitions in a remote collaboration task. With this work, we demonstrate new potential for using partial solutions to tackle the alignment problem of dissimilar spaces in distributed Mixed Reality meetings.
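Partially aligning two rooms around a shared surface amounts, at its core, to a rigid transform between coordinate frames. The sketch below, with hypothetical (x, y, yaw) poses for a shared anchor such as a table edge, shows one way to map a remote collaborator's coordinates into the blended space; the abstract does not specify RealityBlender's actual alignment method:

```python
import math

def align_rooms(anchor_a, anchor_b):
    """Given a shared anchor's pose (x, y, yaw_radians) measured in room A
    and in room B, return a function mapping room-B floor coordinates into
    room A's blended coordinate frame (a 2D rigid transform)."""
    ax, ay, a_yaw = anchor_a
    bx, by, b_yaw = anchor_b
    d = a_yaw - b_yaw
    cos_d, sin_d = math.cos(d), math.sin(d)

    def to_blended(x, y):
        # Express the point relative to B's anchor, rotate into A's
        # orientation, then re-anchor at A's position.
        rx, ry = x - bx, y - by
        return (ax + rx * cos_d - ry * sin_d,
                ay + rx * sin_d + ry * cos_d)
    return to_blended

# Hypothetical usage: both users marked the same table edge in their rooms.
to_blended = align_rooms(anchor_a=(1.0, 2.0, 0.0), anchor_b=(4.0, 0.0, math.pi / 2))
print(to_blended(4.0, 1.0))  # a point 1 m "ahead" of B's anchor, in A's frame
```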
New immersive 3D design tools enable the creation of spatial design recordings that capture collaborative design activities. By reviewing captured spatial design sessions, which include user activities, workflows, and tool use, users can reflect on their own design processes, learn new workflows, and understand others' design rationale. However, finding interesting moments in design activities can be challenging: the recordings contain multimodal data (such as user motion and logged events) occurring over time, which is difficult to specify in a search, and are typically distributed over many sessions or recordings. We present Tesseract, a Worlds-in-Miniature-based system for expressively querying VR spatial design recordings. Tesseract consists of the Search Cube interface, which acts as a centralized stage-to-search container, and four querying tools for specifying multimodal data, enabling users to find interesting moments in past design activities. A study with ten participants who used Tesseract found support for our miniature-based stage-to-search approach.
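As a rough illustration of what querying such multimodal recordings could involve, the following sketch filters logged events by time window, tool, and an axis-aligned spatial region. The Event fields and query parameters are assumptions for illustration, not Tesseract's real data model:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Event:
    t: float        # seconds into the recording
    user: str       # which collaborator produced the event
    tool: str       # logged tool name, e.g. "extrude" or "paint"
    pos: Vec3       # controller position when the event occurred

def query(events: List[Event],
          t_range: Optional[Tuple[float, float]] = None,
          tool: Optional[str] = None,
          region: Optional[Tuple[Vec3, Vec3]] = None) -> List[Event]:
    """Filter a recording by time window, tool use, and an axis-aligned
    box region, loosely mirroring a stage-to-search query."""
    def in_box(p: Vec3, box: Tuple[Vec3, Vec3]) -> bool:
        lo, hi = box
        return all(l <= c <= h for c, l, h in zip(p, lo, hi))

    hits = []
    for e in events:
        if t_range and not (t_range[0] <= e.t <= t_range[1]):
            continue
        if tool and e.tool != tool:
            continue
        if region and not in_box(e.pos, region):
            continue
        hits.append(e)
    return hits

# Hypothetical usage: "extrude" events near one corner of the model.
log = [Event(12.5, "alice", "extrude", (0.2, 1.1, 0.3))]
print(query(log, t_range=(0, 60), tool="extrude", region=((0, 0, 0), (1, 2, 1))))
```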
Rapid technological advances have brought new interaction paradigms for smart objects (e.g., digital devices) that go beyond digital device screens. Spatial interaction, one of these emerging paradigms, leverages the spatial properties, configurations, and movements of smart objects to promote engagement with digital content and physical facilities. However, prototyping such interactions, an important phase of design, remains challenging, as there is no established approach for this emerging paradigm. Designers usually rely on methods that require fixed hardware setups and advanced coding skills to script and validate early-stage concepts, which restricts the design process to a limited group of users and to indoor settings. To make such prototyping broadly accessible, we conducted empirical studies to identify the design difficulties and underlying needs of current design processes for spatially-aware object interactions. We further explore the design space of spatial interaction for smart objects and frame it as an input-output spatial interaction model. Based on these findings, we present ProObjAR, an all-in-one prototyping system built on an Augmented Reality Head-Mounted Display (AR-HMD). Our system allows designers to easily obtain the spatial data of the smart objects being prototyped, specify spatially-aware interactive behaviors through an input-output event-triggering workflow, and test the resulting prototypes in situ. Our user study finds that ProObjAR simplifies the design procedure and substantially increases design efficiency, advancing the development of spatially-aware applications in smart ecosystems.
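An input-output event-triggering workflow of this kind can be illustrated with a simple proximity rule: a spatial input (the distance between two objects) fires a digital output when a threshold is crossed. The class below is a hypothetical sketch of the pattern, not ProObjAR's implementation:

```python
import math
from typing import Callable, Sequence

class ProximityTrigger:
    """Input-output rule: when object A moves within `radius` of object B,
    fire the output once; re-arm after the objects separate again."""
    def __init__(self, radius: float, action: Callable[[], None]):
        self.radius = radius
        self.action = action
        self._armed = True

    def update(self, pos_a: Sequence[float], pos_b: Sequence[float]) -> None:
        close = math.dist(pos_a, pos_b) <= self.radius
        if close and self._armed:
            self._armed = False   # edge-triggered, not level-triggered
            self.action()
        elif not close:
            self._armed = True

# Hypothetical usage: placing a mug near a lamp turns the lamp on.
rule = ProximityTrigger(radius=0.3, action=lambda: print("lamp: on"))
rule.update((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))  # far apart: nothing happens
rule.update((0.9, 0.0, 0.0), (1.0, 0.0, 0.0))  # within 0.3 m: fires once
```

Edge-triggering (fire once, then re-arm on separation) is one plausible way such a workflow could keep a sustained spatial condition from spamming its output.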
Edges are among the most ubiquitous geometric features of physical objects. They provide accurate haptic feedback and easy-to-track features for camera systems, making them an ideal basis for Tangible User Interfaces (TUIs) in Augmented Reality (AR). We introduce Ubi Edge, an AR authoring tool that allows end-users to turn the edges of everyday objects into TUI inputs that control a variety of digital functions. We develop an integrated AR device and a vision-based detection pipeline that tracks 3D edges and detects touch interaction between fingers and edges. Leveraging the spatial awareness of AR, users can select an edge simply by sliding a finger along it and then make the edge interactive by connecting it to various digital functions. We demonstrate four use cases: multi-function controllers, smart homes, games, and TUI-based tutorials. We also evaluated our system's usability in a two-session user study, in which qualitative and quantitative results were positive.
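Selecting an edge by sliding a finger along it can be modeled as accumulating coverage of a 3D line segment while the fingertip stays within a distance tolerance. The following sketch assumes the edge is already tracked as a segment from a to b; all thresholds and names are illustrative, not Ubi Edge's pipeline:

```python
import math
from typing import Sequence, Tuple

def project_onto_segment(p: Sequence[float], a: Sequence[float],
                         b: Sequence[float]) -> Tuple[float, float]:
    """Return (t, dist): the clamped parameter of the point on segment a->b
    closest to p, and the distance from p to that closest point."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    t = max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / sum(c * c for c in ab)))
    closest = [ai + t * ci for ai, ci in zip(a, ab)]
    return t, math.dist(p, closest)

class EdgeSelector:
    """Accumulates how much of a tracked edge the fingertip has swept along."""
    def __init__(self, a: Sequence[float], b: Sequence[float], tol: float = 0.01):
        self.a, self.b, self.tol = a, b, tol   # tol: max finger-to-edge gap (m)
        self.t_min, self.t_max = 1.0, 0.0

    def update(self, fingertip: Sequence[float]) -> None:
        t, dist = project_onto_segment(fingertip, self.a, self.b)
        if dist <= self.tol:
            self.t_min = min(self.t_min, t)
            self.t_max = max(self.t_max, t)

    def selected(self, coverage: float = 0.6) -> bool:
        # The edge counts as selected once most of its length has been swept.
        return (self.t_max - self.t_min) >= coverage

# Hypothetical usage: fingertip samples sliding along a 20 cm edge.
sel = EdgeSelector(a=(0, 0, 0), b=(0.2, 0, 0))
for x in (0.02, 0.08, 0.15, 0.19):
    sel.update((x, 0.004, 0.0))
print(sel.selected())  # True: about 85% of the edge was swept
```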