Designing AR/VR experiences

Paper session

Conference
CHI 2020
Pronto: Rapid Augmented Reality Video Prototyping Using Sketches and Enaction
Abstract

Designers have limited tools to prototype AR experiences rapidly. Can lightweight, immediate tools let designers prototype dynamic AR interactions while capturing the nuances of a 3D experience? We interviewed three AR experts and identified several recurring issues in AR design: creating and positioning 3D assets, handling the changing user position, and orchestrating multiple animations. We introduce PRONTO, a tablet-based video prototyping system that combines 2D video with 3D manipulation. PRONTO supports three intertwined activities: capturing 3D spatial information alongside a video scenario, positioning and sketching 2D drawings in a 3D world, and enacting animations with physical interactions. An observational study with professional designers shows that participants can use PRONTO to prototype diverse AR experiences. All participants performed two tasks: replicating a sample non-trivial AR experience and prototyping their own open-ended designs. All participants completed the replication task and found PRONTO easy to use. Most participants found that PRONTO encourages more exploration of designs than their current practices.

Keywords
AR
Sketching
Video Prototyping
Design by Enaction
Authors
Germán Leiva
Aarhus University, Aarhus, Denmark
Cuong Nguyen
Adobe Research, San Francisco, CA, USA
Rubaiat Habib Kazi
Adobe Research, Seattle, WA, USA
Paul Asente
Adobe Research, San Jose, CA, USA
DOI

10.1145/3313831.3376160

Paper URL

https://doi.org/10.1145/3313831.3376160

MRAT: The Mixed Reality Analytics Toolkit
Abstract

Significant tool support exists for the development of mixed reality (MR) applications; however, there is a lack of tools for analyzing MR experiences. We elicit requirements for future tools through interviews with 8 university research, instructional, and media teams using AR/VR in a variety of domains. While we find a common need for capturing how users perform tasks in MR, the primary differences lay in the heuristics and metrics relevant to each project. Particularly in the early project stages, teams were uncertain about what data should, and even could, be collected with MR technologies. We designed the Mixed Reality Analytics Toolkit (MRAT) to instrument MR apps via visual editors without programming and enable rapid data collection and filtering for visualizations of MR user sessions. With MRAT, we contribute flexible interaction tracking and task definition concepts, an extensible set of heuristic techniques and metrics to measure task success, and visual inspection tools with in-situ visualizations in MR. Focusing on a multi-user, cross-device MR crisis simulation and triage training app as a case study, we then show the benefits of using MRAT, not only for user testing of MR apps, but also for performance tuning throughout the design process.

Award
Best Paper
Keywords
Augmented/virtual reality
interaction tracking
user testing
Authors
Michael Nebeling
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Maximilian Speicher
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Xizi Wang
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Shwetha Rajaram
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Brian D. Hall
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Zijian Xie
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Alexander R. E. Raistrick
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Michelle Aebersold
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Edward G. Happ
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Jiayin Wang
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Yanan Sun
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Lotus Zhang
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Leah E. Ramsier
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Rhea Kulkarni
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
DOI

10.1145/3313831.3376330

Paper URL

https://doi.org/10.1145/3313831.3376330

C-Space: An Interactive Prototyping Platform for Collaborative Spatial Design Exploration
Abstract

C-Space is an interactive prototyping platform for collaborative spatial design exploration. Spatial design projects often begin with conceptualization that includes abstract diagramming, zoning, and massing to provide a foundation for making design decisions. In particular, abstract diagrams guide designers to explore alternative designs without thinking prematurely about the details. However, complications arise when communicating ambiguous and incomplete designs to collaborators. To overcome this drawback, designers devote considerable time and resources to searching for design references and creating rough prototypes that better explain their design concepts. This study therefore proposes C-Space, a novel design support system that integrates abstract diagramming with design reference retrieval and prototyping through a tangible user interface and augmented reality. Through a user study with 12 spatial designers, we verify that C-Space promotes rapid and robust spatial design exploration, inducing collaborative discussions and motivating users to interact with designs.

Keywords
Spatial Design
Design Support System
Design Collaboration
Prototyping
Tangible User Interface
Augmented Reality
Human-Computer Interaction
Authors
Kihoon Son
Hanyang University, Seoul, Republic of Korea
Hwiwon Chun
Hanyang University, Seoul, Republic of Korea
Sojin Park
Team Interface, Seoul, Republic of Korea
Kyung Hoon Hyun
Hanyang University, Seoul, Republic of Korea
DOI

10.1145/3313831.3376452

Paper URL

https://doi.org/10.1145/3313831.3376452

XRDirector: A Role-Based Collaborative Immersive Authoring System
Abstract

Immersive authoring is an increasingly popular technique for designing AR/VR scenes because design and testing can be done concurrently. Most existing systems, however, are single-user and limited to either AR or VR, and are thus constrained in the interaction techniques they can support. We present XRDirector, a role-based collaborative immersive authoring system that enables designers to freely express interactions using AR and VR devices as puppets to manipulate virtual objects in 3D physical space. In XRDirector, we adapt roles known from filmmaking to structure the authoring process and help coordinate multiple designers in immersive authoring tasks. We study how novice AR/VR creators can take advantage of the roles and modes in XRDirector to prototype complex scenes with animated 3D characters, light effects, and camera movements, and also simulate interactive system behavior in a Wizard of Oz style. XRDirector's design was informed by case studies around complex 3D movie scenes and AR/VR games, as well as workshops with novice AR/VR creators. We show that XRDirector makes it easier and faster to create AR/VR scenes without the need for coding, characterize the issues in coordinating designers between AR and VR, and identify the strengths and weaknesses of each role and mode to mitigate the issues.

Keywords
AR/VR
immersive authoring
mixed-reality collaboration
Authors
Michael Nebeling
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Katy Lewis
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Yu-Cheng Chang
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Lihan Zhu
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Michelle Chung
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Piaoyang Wang
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
Janet Nebeling
University of Michigan – Ann Arbor, Ann Arbor, MI, USA
DOI

10.1145/3313831.3376637

Paper URL

https://doi.org/10.1145/3313831.3376637

Mixed Reality Light Fields for Interactive Remote Assistance
Abstract

Remote assistance represents an important use case for mixed reality. With the rise of handheld and wearable devices, remote assistance has become practical in the wild. However, spontaneous provisioning of remote assistance requires an easy, fast, and robust approach for capturing and sharing unprepared environments. In this work, we make a case for utilizing interactive light fields for remote assistance. We demonstrate the advantages of representing objects with light fields over conventional geometric reconstruction. Moreover, we introduce an interaction method for quickly annotating light fields in 3D space without requiring surface geometry to anchor annotations. We present results from a user study demonstrating the effectiveness of our interaction techniques, and we provide feedback on the usability of our overall system.

Keywords
Light Field
Mixed Reality
Augmented Reality
Annotations
3D User Interfaces
Interaction
Telepresence
Remote Assistance
Authors
Peter Mohr
Graz University of Technology & VRVis GmbH, Graz, Austria
Shohei Mori
Graz University of Technology, Graz, Austria
Tobias Langlotz
University of Otago, Dunedin, New Zealand
Bruce H. Thomas
University of South Australia, Mawson Lakes, Australia
Dieter Schmalstieg
Graz University of Technology, Graz, Austria
Denis Kalkofen
Graz University of Technology, Graz, Austria
DOI

10.1145/3313831.3376289

Paper URL

https://doi.org/10.1145/3313831.3376289