User-Driven Constraints for Layout Optimisation in Augmented Reality
Description

Automatic layout optimisation allows users to arrange augmented reality content in the real-world environment without tedious manual interaction. This optimisation is often based on modelling the intended content placement as constraints, defined as cost functions; applying a cost-minimisation algorithm then yields a desirable placement. However, such an approach is limited by the lack of user control over the optimisation results. In this paper, we explore the concept of user-driven constraints for augmented reality layout optimisation. With our approach, users can define and set up their own constraints directly within the real-world environment. We first present a design space composed of three dimensions: the constraints, the regions of interest, and the constraint parameters. We then explore, through a user elicitation study, which input gestures can be employed to define the user-driven constraints of our design space. Using the results of the study, we propose a holistic system design and implementation demonstrating our user-driven constraints, which we evaluate in a final user study where participants created several constraints at the same time to arrange a set of virtual content items.
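
The abstract does not give the cost formulation, but the constraints-as-cost-functions idea can be illustrated in a few lines. Below is a minimal sketch, assuming two made-up constraint terms (an anchor-distance term tied to a region of interest, and an occlusion-avoidance term) combined by weighted sum and handed to a generic numerical minimiser; none of the function names, weights, or coordinates come from the paper.

import numpy as np
from scipy.optimize import minimize

def anchor_cost(p, anchor):
    # Penalise placements far from a user-chosen region of interest.
    return np.sum((p - anchor) ** 2)

def visibility_cost(p, eye, obstacle, radius=0.5):
    # Penalise placements whose line of sight to the eye passes near an obstacle.
    d = p - eye
    t = np.clip(np.dot(obstacle - eye, d) / np.dot(d, d), 0.0, 1.0)
    closest = eye + t * d
    return max(0.0, radius - np.linalg.norm(obstacle - closest)) ** 2

def total_cost(p, eye, anchor, obstacle, w_anchor=1.0, w_vis=10.0):
    return w_anchor * anchor_cost(p, anchor) + w_vis * visibility_cost(p, eye, obstacle)

eye = np.array([0.0, 1.6, 0.0])        # viewer position (assumed)
anchor = np.array([1.0, 1.2, -2.0])    # user-defined region of interest (assumed)
obstacle = np.array([0.5, 1.4, -1.0])  # real-world object not to occlude (assumed)

result = minimize(total_cost, x0=anchor, args=(eye, anchor, obstacle))
print("optimised placement:", result.x)

In this framing, a user-driven constraint is simply another weighted term added to total_cost, with its region of interest and parameters supplied by the user's gestures.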

InstruMentAR: Auto-Generation of Augmented Reality Tutorials for Operating Digital Instruments Through Recording Embodied Demonstration
Description

Augmented Reality tutorials, which provide necessary context by superimposing visual guidance directly on the physical referent, are an effective way of scaffolding complex instrument operations. However, current AR tutorial authoring processes are not seamless, as they require users to continuously alternate between operating instruments and interacting with virtual elements. We present InstruMentAR, a system that automatically generates AR tutorials by recording user demonstrations. We design a multimodal approach that fuses gestural information with hand-worn pressure sensor data to detect and register the user's step-by-step manipulations on the control panel. With this information, the system autonomously generates virtual cues with designated scales at the respective locations for each step. Voice recognition and background capture are employed to automate the creation of text and images as AR content. For novice users receiving the authored AR tutorials, we provide immediate feedback through haptic modules. We compare InstruMentAR with traditional authoring systems in a user study.
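
As a rough illustration of the multimodal fusion described above, the sketch below registers a manipulation step only when a recognised hand gesture coincides with a pressure spike from the hand-worn sensor. The event format, threshold, and time window are assumptions for illustration, not InstruMentAR's actual pipeline.

from dataclasses import dataclass

@dataclass
class Step:
    time: float
    control: str   # e.g. which knob or button the fingertip was over
    gesture: str   # e.g. "press", "turn"

def detect_steps(gesture_events, pressure_samples, threshold=0.6, window=0.2):
    # Fuse gesture recognition with pressure data to reject false positives.
    steps = []
    for t, control, gesture in gesture_events:
        # Confirm the gesture with a pressure spike within +/- window seconds.
        confirmed = any(abs(ts - t) <= window and p >= threshold
                        for ts, p in pressure_samples)
        if confirmed:
            steps.append(Step(t, control, gesture))
    return steps

steps = detect_steps(
    gesture_events=[(1.2, "gain_knob", "turn"), (2.5, "power_button", "press")],
    pressure_samples=[(1.21, 0.8), (2.0, 0.1)],
)
print(steps)  # only the pressure-confirmed gain_knob turn is registered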

BirdViewAR: Surroundings-aware Remote Drone Piloting Using an Augmented Third-person Perspective
Description

We propose BirdViewAR, a surroundings-aware remote drone-operation system that gives pilots enhanced spatial awareness through an augmented third-person view (TPV) from an autopiloted secondary follower drone. The follower drone responds to the main drone's motions and heading using our optimization-based autopilot, allowing pilots to clearly observe the main drone and its imminent destination without extra input. To improve understanding of the spatial relationships between the main drone and its surroundings, the TPV is visually augmented with AR overlay graphics that highlight the main drone's spatial status: its heading, altitude, ground position, camera field of view (FOV), and proximity areas. We discuss BirdViewAR's design and implement a proof-of-concept prototype using programmable drones. Finally, we conduct a preliminary outdoor user study and find that BirdViewAR effectively increases spatial awareness and piloting performance.
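
The optimization-based autopilot itself is not specified in the abstract, but the geometric intuition, keeping the follower behind and above the main drone with its camera aimed at the action, can be sketched as follows. The offsets and the closed-form placement rule are illustrative stand-ins, not the paper's optimization.

import math

def follower_pose(main_pos, main_heading_rad, back=8.0, up=5.0):
    # Place the follower `back` metres behind and `up` metres above the
    # main drone, relative to the main drone's heading (assumed offsets).
    fx = main_pos[0] - back * math.cos(main_heading_rad)
    fy = main_pos[1] - back * math.sin(main_heading_rad)
    fz = main_pos[2] + up
    # Yaw the follower's camera toward the main drone.
    yaw = math.atan2(main_pos[1] - fy, main_pos[0] - fx)
    return (fx, fy, fz), yaw

pose, yaw = follower_pose(main_pos=(10.0, 4.0, 20.0), main_heading_rad=0.0)
print(pose, math.degrees(yaw))  # (2.0, 4.0, 25.0), 0.0 degrees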

Location-Aware Adaptation of Augmented Reality Narratives
Description

The recent popularity of augmented reality (AR) devices has enabled players to participate in interactive narratives through virtual events and characters populating a real-world environment, where different actions may lead to different story branches. In this paper, we propose a novel approach for adapting narratives to real spaces for AR experiences. Our optimization-based approach automatically assigns contextually compatible locations to story events, synthesizing a navigation graph that guides players through different story branches while accounting for their walking experiences. We validate the effectiveness of our approach for adapting AR narratives to different scenes through experiments and user studies.
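
To make the location-assignment idea concrete, here is a toy sketch that scores every assignment of story events to room locations by contextual compatibility minus a walking-distance penalty, then keeps the best one. The events, locations, compatibility scores, and weighting are all invented for illustration; the paper's optimization and navigation-graph synthesis are more involved.

from itertools import permutations

events = ["meet_character", "find_clue", "final_scene"]
locations = {"sofa": (0, 0), "bookshelf": (3, 1), "doorway": (6, 0)}
compat = {  # compatibility score of (event, location), an assumed input
    ("meet_character", "sofa"): 0.9, ("meet_character", "bookshelf"): 0.3,
    ("meet_character", "doorway"): 0.4, ("find_clue", "sofa"): 0.2,
    ("find_clue", "bookshelf"): 0.9, ("find_clue", "doorway"): 0.3,
    ("final_scene", "sofa"): 0.3, ("final_scene", "bookshelf"): 0.2,
    ("final_scene", "doorway"): 0.8,
}

def walk(a, b):
    # Euclidean walking distance between two assigned locations.
    (x1, y1), (x2, y2) = locations[a], locations[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def score(assignment, walk_weight=0.1):
    # Total compatibility minus a penalty for walks between consecutive events.
    fit = sum(compat[(e, loc)] for e, loc in zip(events, assignment))
    travel = sum(walk(a, b) for a, b in zip(assignment, assignment[1:]))
    return fit - walk_weight * travel

best = max(permutations(locations), key=score)
print(dict(zip(events, best)))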

PointShopAR: Supporting Environmental Design Prototyping Using Point Cloud in Augmented Reality
Description

We present PointShopAR, a novel tablet-based system for AR environmental design that uses point clouds as the underlying representation. It integrates point cloud capture and editing in a single AR workflow to help users quickly prototype design ideas in their spatial context. We hypothesize that point clouds are well suited to prototyping, as they can be captured more rapidly than textured meshes and then edited immediately in situ on the capturing device. We based the design of PointShopAR on the practical needs of six architects identified in a formative study. Our system supports a variety of point cloud editing operations in AR, including selection, transformation, hole filling, drawing, morphing, and animation. We evaluate PointShopAR through a remote study on usability and an in-person study on environmental design support. Participants were able to iterate on designs rapidly, showing the merits of an integrated capture-and-editing workflow for point clouds in AR environmental design.
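
Two of the editing operations listed above, selection and transformation, are easy to sketch on a raw point cloud stored as an N x 3 array. The radius-based selection and simple translation below are illustrative assumptions, not PointShopAR's actual implementation.

import numpy as np

def select_sphere(points, center, radius):
    # Return a boolean mask of points within `radius` of `center`.
    return np.linalg.norm(points - center, axis=1) <= radius

def translate(points, mask, offset):
    # Translate only the selected points by `offset`, leaving the rest intact.
    edited = points.copy()
    edited[mask] += offset
    return edited

cloud = np.random.rand(1000, 3) * 5.0  # stand-in for a captured scan
mask = select_sphere(cloud, center=np.array([2.5, 2.5, 2.5]), radius=1.0)
cloud = translate(cloud, mask, offset=np.array([0.0, 0.0, 1.0]))
print(f"moved {mask.sum()} of {len(cloud)} points")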

VRGit: A Version Control System for Collaborative Content Creation in Virtual Reality
Description

Immersive authoring tools allow users to intuitively create and manipulate 3D scenes while immersed in Virtual Reality (VR). Collaboratively designing these scenes is a creative process that involves numerous edits, explorations of design alternatives, and frequent communication with collaborators. Version Control Systems (VCSs) support this process by keeping track of the version history and creating a shared hub for communication. However, most VCSs are unsuitable for managing the version history of VR content: their underlying line-differencing mechanism is designed for text and lacks the semantic information of 3D content, and the widely adopted commit model is designed for asynchronous collaboration rather than real-time awareness and communication in VR. We introduce VRGit, a new collaborative VCS that visualizes version history as a directed graph composed of 3D miniatures and enables users to easily navigate versions, create branches, and preview and reuse versions directly in VR. Beyond individual use, VRGit also facilitates synchronous collaboration in VR by providing awareness of users' activities and version history through portals and shared history visualizations. In a lab study with 14 participants (seven groups), we demonstrate that VRGit enables users to easily manage version history both individually and collaboratively in VR.
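
The contrast with a linear commit log can be made concrete with a small sketch of version history as a directed graph: each node snapshots the 3D scene, and committing after checking out an older version simply adds a second child, i.e. a branch. Class and method names here are assumptions for illustration, not VRGit's API.

from dataclasses import dataclass, field

@dataclass
class Version:
    scene: dict                        # object id -> transform/property snapshot
    parent: "Version | None" = None
    children: list = field(default_factory=list)

class HistoryGraph:
    def __init__(self, initial_scene):
        self.root = Version(scene=dict(initial_scene))
        self.head = self.root

    def commit(self, scene):
        # Append a new version under the current head.
        v = Version(scene=dict(scene), parent=self.head)
        self.head.children.append(v)
        self.head = v
        return v

    def checkout(self, version):
        # Navigate to any version; committing afterwards creates a branch.
        self.head = version
        return version.scene

g = HistoryGraph({"cube": (0, 0, 0)})
v1 = g.commit({"cube": (1, 0, 0)})
g.checkout(g.root)
v2 = g.commit({"cube": (0, 1, 0)})  # second child of the root: a branch
print(len(g.root.children))         # 2

Because versions form a graph rather than a line, previewing or reusing any node is a pointer move instead of a destructive reset, which matches the navigation and reuse behaviour the abstract describes.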
