Reality Refined: Augmented Reality Techniques

Conference Name
UIST 2023
RealityCanvas: Augmented Reality Sketching for Embedded and Responsive Scribble Animation Effects
Abstract

We introduce RealityCanvas, a mobile AR sketching tool that can easily augment real-world physical motion with responsive hand-drawn animation. Recent research in AR sketching tools has enabled users to not only embed static drawings into the real world but also dynamically animate them with physical motion. However, existing tools often lack flexibility and expressiveness in the animations they support, as they primarily handle simple line-based geometry. To address this limitation, we explore both expressive and improvisational AR sketched animation by introducing a set of responsive scribble animation techniques that can be directly embedded through sketching interactions: 1) object binding, 2) flip-book animation, 3) action trigger, 4) particle effects, 5) motion trajectory, and 6) contour highlight. These six animation effects were derived from the analysis of 172 existing video-edited scribble animations. We showcase these techniques through various applications, such as video creation, augmented education, storytelling, and AR prototyping. The results of our user study and expert interviews confirm that our tool can lower the barrier to creating AR-based sketched animation, while allowing creative, expressive, and improvisational AR sketching experiences.
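
To make the first effect concrete, here is a minimal sketch, assuming a 2D tracker, of object binding: a stroke is stored as offsets from a tracked anchor so that it follows the physical object's motion. The `TrackedObject` and `BoundSketch` classes are hypothetical illustrations, not RealityCanvas's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    # Position reported by the tracker each frame (hypothetical interface).
    x: float = 0.0
    y: float = 0.0

@dataclass
class BoundSketch:
    anchor: TrackedObject
    # Stroke points stored as offsets from the anchor at binding time.
    offsets: list[tuple[float, float]] = field(default_factory=list)

    def bind(self, stroke: list[tuple[float, float]]) -> None:
        self.offsets = [(px - self.anchor.x, py - self.anchor.y) for px, py in stroke]

    def world_points(self) -> list[tuple[float, float]]:
        # Re-anchor the stroke every frame so it moves with the object.
        return [(self.anchor.x + dx, self.anchor.y + dy) for dx, dy in self.offsets]

ball = TrackedObject(10.0, 5.0)
sketch = BoundSketch(ball)
sketch.bind([(10.0, 5.0), (11.0, 6.0)])  # user draws on/near the ball
ball.x, ball.y = 20.0, 8.0               # tracker reports a new position
print(sketch.world_points())             # stroke follows: [(20.0, 8.0), (21.0, 9.0)]
```

The other effects can be read as layers over the same anchor abstraction, e.g. an action trigger firing when the anchor's velocity crosses a threshold.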

Authors
Zhijie Xia
University of Calgary, Calgary, Alberta, Canada
Kyzyl Monteiro
IIIT-Delhi, New Delhi, Delhi, India
Kevin Van
University of Calgary, Calgary, Alberta, Canada
Ryo Suzuki
University of Calgary, Calgary, Alberta, Canada
Paper URL

https://doi.org/10.1145/3586183.3606716

STAR: Smartphone-analogous Typing in Augmented Reality
Abstract

While text entry is an essential and frequent task in Augmented Reality (AR) applications, devising an efficient and easy-to-use text entry method for AR remains an open challenge. This research presents STAR, a smartphone-analogous AR text entry technique that leverages a user's familiarity with smartphone two-thumb typing. With STAR, a user performs thumb typing on a virtual QWERTY keyboard that is overlain on the skin of their hands. During an evaluation study of STAR, participants achieved a mean typing speed of 21.9 WPM (i.e., 56% of their smartphone typing speed), and a mean error rate of 0.3% after 30 minutes of practice. We further analyze the major factors implicated in the performance gap between STAR and smartphone typing, and discuss ways this gap could be narrowed.
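
For context on the reported figures: the abstract does not give its exact formulas, so the sketch below uses the conventional text-entry metrics (WPM with five characters per word, error rate as minimum string distance over string length), which such studies commonly follow.

```python
def wpm(transcribed: str, seconds: float) -> float:
    # Standard text-entry convention: (|T| - 1) characters entered,
    # five characters per "word".
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def msd(a: str, b: str) -> int:
    # Levenshtein (minimum string) distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def error_rate(presented: str, transcribed: str) -> float:
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

print(f"{wpm('the quick brown fox', 10.4):.1f} WPM")             # 20.8 WPM
print(f"{error_rate('hello world', 'helo world'):.1%} errors")   # 9.1% errors
```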

Authors
Taejun Kim
Meta Inc., Toronto, Ontario, Canada
Amy Karlson
Meta Inc., Redmond, Washington, United States
Aakar Gupta
Meta Inc., Redmond, Washington, United States
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Jason Wu
Meta Inc., Toronto, Ontario, Canada
Parastoo Abtahi
Meta, Toronto, Ontario, Canada
Christopher Collins
Meta Reality Labs Research, Toronto, Ontario, Canada
Michael Glueck
Meta, Toronto, Ontario, Canada
Hemant Bhaskar Surale
Meta Inc., Toronto, Ontario, Canada
Paper URL

https://doi.org/10.1145/3586183.3606803

Reframe: An Augmented Reality Storyboarding Tool for Character-Driven Analysis of Security & Privacy Concerns
Abstract

While current augmented reality (AR) authoring tools lower the technical barrier for novice AR designers, they lack explicit guidance to consider potentially harmful aspects of AR with respect to security & privacy (S&P). To address potential threats in the earliest stages of AR design, we developed Reframe, a digital storyboarding tool for designers with no formal training to analyze S&P threats. We accomplish this through a frame-based authoring approach, which captures and enhances storyboard elements that are relevant for threat modeling, and character-driven analysis tools, which personify S&P threats from an underlying threat model to provide simple abstractions for novice designers. Based on evaluations with novice AR designers and S&P experts, we find that Reframe enables designers to analyze threats and propose mitigation techniques that experts consider good quality. We discuss how Reframe can facilitate collaboration between designers and S&P professionals and propose extensions to Reframe to incorporate additional threat models.
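
A hypothetical sketch of the character-driven idea: storyboard frames carry tagged elements, and each "threat character" encodes a rule from an underlying threat model that flags the frames it applies to. All class names, tags, and the example rule here are invented for illustration; Reframe's actual model is richer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    tags: frozenset[str]  # e.g. {"camera", "bystander", "shared-display"}

@dataclass
class Frame:
    caption: str
    elements: list[Element]

@dataclass
class ThreatCharacter:
    persona: str
    trigger_tags: frozenset[str]
    concern: str

    def review(self, frame: Frame) -> list[str]:
        # Flag any element whose tags intersect this character's triggers.
        return [
            f"{self.persona} on '{frame.caption}': {self.concern} ({el.name})"
            for el in frame.elements
            if self.trigger_tags & el.tags
        ]

snoop = ThreatCharacter("Nosy Bystander", frozenset({"camera"}),
                        "could be captured without consent")
frame = Frame("User scans the cafe",
              [Element("headset camera", frozenset({"camera"}))])
for warning in snoop.review(frame):
    print(warning)
```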

Authors
Shwetha Rajaram
University of Michigan, Ann Arbor, Michigan, United States
Franziska Roesner
University of Washington, Seattle, Washington, United States
Michael Nebeling
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3586183.3606750

PaperToPlace: Transforming Instruction Documents into Spatialized and Context-Aware Mixed Reality Experiences
Abstract

While paper instructions are one of the mainstream media for sharing knowledge, consuming such instructions and translating them into activities is inefficient due to the lack of connectivity with the physical environment. We present PaperToPlace, a novel workflow comprising an authoring pipeline, which allows authors to rapidly transform and spatialize existing paper instructions into an MR experience, and a consumption pipeline, which computationally places each instruction step at an optimal location that is easy to read and does not occlude key interaction areas. Our evaluation of the authoring pipeline with 12 participants demonstrated the usability of our workflow and the effectiveness of using a machine-learning-based approach to help extract the spatial locations associated with each step. A second within-subjects study with another 12 participants demonstrated the merits of our consumption pipeline: reducing the effort of context switching, delivering segmented instruction steps, and offering hands-free affordances.
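
The consumption pipeline's placement step can be read as an optimization: score candidate locations and pick the cheapest. The sketch below paraphrases the two objectives named in the abstract (readability, non-occlusion of key interaction areas); the cost terms, weights, and geometry are invented for illustration, not PaperToPlace's actual method.

```python
import math

def placement_cost(candidate, user_gaze, interaction_areas,
                   w_read=1.0, w_occlude=5.0):
    # Readability term: prefer spots near the user's gaze point.
    read = math.dist(candidate, user_gaze)
    # Occlusion term: heavily penalize spots close to key interaction areas.
    occlude = sum(max(0.0, 0.3 - math.dist(candidate, a)) for a in interaction_areas)
    return w_read * read + w_occlude * occlude

gaze = (0.0, 0.0)
stove_knobs = [(0.1, 0.05)]  # a key interaction area to keep unoccluded
candidates = [(0.1, 0.0), (0.4, 0.1), (-0.3, 0.2)]
best = min(candidates, key=lambda c: placement_cost(c, gaze, stove_knobs))
print("place instruction step at", best)
```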

Authors
Chen Chen
University of California San Diego, La Jolla, California, United States
Cuong Nguyen
Adobe Research, San Francisco, California, United States
Jane Hoffswell
Adobe Research, Seattle, Washington, United States
Jennifer Healey
Adobe Research, San Jose, California, United States
Trung Bui
Adobe Research, San Jose, California, United States
Nadir Weibel
University of California San Diego, La Jolla, California, United States
Paper URL

https://doi.org/10.1145/3586183.3606832

HoloBots: Augmenting Holographic Telepresence with Mobile Robots for Tangible Remote Collaboration in Mixed Reality
Abstract

This paper introduces HoloBots, a mixed reality remote collaboration system that augments holographic telepresence with synchronized mobile robots. Beyond existing mixed reality telepresence, HoloBots lets remote users not only be visually and spatially present, but also physically engage with local users and their environment. HoloBots allows users to touch, grasp, manipulate, and interact with the remote physical environment as if they were co-located in the same shared space. We achieve this by synchronizing holographic user motion (HoloLens 2 and Azure Kinect) with tabletop mobile robots (Sony Toio). Beyond existing physical telepresence, HoloBots contributes an exploration of a broader design space, including object actuation, virtual hand physicalization, world-in-miniature exploration, shared tangible interfaces, embodied guidance, and haptic communication. We evaluate our system with twelve participants by comparing it with hologram-only and robot-only conditions. Both quantitative and qualitative results confirm that our system significantly enhances the level of co-presence and shared experience, compared to the other conditions.
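
The core synchronization can be sketched as a per-frame loop: map the remote user's tracked hand into the local tabletop frame, then drive a robot toward it. This is an illustration, not HoloBots' code; `step_toward` and `remote_to_table` stand in for whatever motion command and calibration the actual system uses.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    x: float = 0.0
    y: float = 0.0

    def step_toward(self, tx: float, ty: float, gain: float = 0.2) -> None:
        # Proportional controller: close a fraction of the gap each frame.
        self.x += gain * (tx - self.x)
        self.y += gain * (ty - self.y)

def remote_to_table(hand_xyz, scale=0.5, origin=(0.3, 0.3)):
    # Project the tracked hand (sensor frame, meters) onto the tabletop
    # plane and scale into the shared table coordinate system.
    hx, hy, hz = hand_xyz
    return origin[0] + scale * hx, origin[1] + scale * hz

robot = Robot()
for hand in [(0.2, 1.1, 0.4), (0.25, 1.1, 0.45), (0.3, 1.0, 0.5)]:  # fake stream
    tx, ty = remote_to_table(hand)
    robot.step_toward(tx, ty)
print(f"robot at ({robot.x:.2f}, {robot.y:.2f})")
```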

Authors
Keiichi Ihara
University of Tsukuba, Tsukuba, Japan
Mehrad Faridan
University of Calgary, Calgary, Alberta, Canada
Ayumi Ichikawa
University of Tsukuba, Tsukuba, Japan
Ikkaku Kawaguchi
University of Tsukuba, Tsukuba, Ibaraki, Japan
Ryo Suzuki
University of Calgary, Calgary, Alberta, Canada
Paper URL

https://doi.org/10.1145/3586183.3606727

SwarmFidget: Exploring Programmable Actuated Fidgeting with Swarm Robots
Abstract

We introduce the concept of programmable actuated fidgeting, a type of fidgeting that involves devices integrated with actuators, sensors, and computing to enable a customizable interactive fidgeting experience. In particular, we explore the potential of a swarm of tabletop robots as an instance of programmable actuated fidgeting, as such robots are becoming increasingly available. Through ideation sessions among researchers and feedback from participants, we formulate the design space for SwarmFidget, where swarm robots are used to facilitate programmable actuated fidgeting. To gather user impressions, we conducted an exploratory study where we introduced the concept of SwarmFidget to twelve participants and had them experience and provide feedback on six example fidgeting interactions. Our study demonstrates the potential of SwarmFidget for facilitating fidgeting interactions and provides insights and guidelines for designing effective and engaging fidgeting interactions with swarm robots. We believe our work can inspire future research in the area of programmable actuated fidgeting and open up new opportunities for designing novel swarm robot-based fidgeting systems.
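
As a flavor of what "programmable" could mean here, the hypothetical sketch below scripts one ring-of-robots behavior that breathes in and out and contracts while the user presses on it, loosely mimicking a stress ball. The behavior and parameters are invented for illustration, not taken from the paper.

```python
import math

def ring_targets(n_robots, t, pressed, center=(0.0, 0.0)):
    # Base radius pulses slowly ("breathing"); pressing contracts the ring.
    radius = 0.10 + 0.02 * math.sin(t)
    if pressed:
        radius *= 0.5  # squeeze response
    return [
        (center[0] + radius * math.cos(2 * math.pi * i / n_robots),
         center[1] + radius * math.sin(2 * math.pi * i / n_robots))
        for i in range(n_robots)
    ]

# Each tick, targets would be sent to the swarm's motion controller.
for t in (0.0, 1.5):
    print(ring_targets(4, t, pressed=(t > 1.0)))
```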

Authors
Lawrence H. Kim
Simon Fraser University, Burnaby, British Columbia, Canada
Veronika Domova
Stanford University, Stanford, California, United States
Yuqi Yao
Stanford University, Stanford, California, United States
Parsa Rajabi
Simon Fraser University, Burnaby, British Columbia, Canada
Paper URL

https://doi.org/10.1145/3586183.3606746

AR-Enhanced Workouts: Exploring Visual Cues for At-Home Workout Videos in AR Environment
Abstract

In recent years, with growing health consciousness, at-home workouts have become increasingly popular for their convenience and safety. Most people choose to follow video guidance while exercising. However, our preliminary study revealed that fitness-minded people face challenges when watching exercise videos on handheld devices or fixed monitors, such as limited movement comprehension due to static camera angles and insufficient feedback. To address these issues, we reviewed popular workout videos, identified user requirements, and developed an augmented reality (AR) solution. Following a user-centered iterative design process, we proposed a design space of AR visual cues for workouts and implemented an AR-based application. Specifically, we captured users' exercise performance with pose-tracking technology and provided feedback via AR visual cues. Two user experiments showed that incorporating AR visual cues could improve movement comprehension and enable users to adjust their movements based on real-time feedback. Finally, we present several suggestions to inspire future designs that apply AR visual cues to sports training.
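
The feedback loop the abstract describes can be sketched as: compute a joint angle from tracked keypoints, compare it against the instructor's reference, and choose a visual cue. The joint coordinates, tolerance, and cue strings below are illustrative assumptions, not the paper's actual design.

```python
import math

def joint_angle(a, b, c):
    # Angle at joint b (degrees) formed by segments b->a and b->c.
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def cue(user_angle, reference_angle, tolerance=15.0):
    # Within tolerance: affirmative cue; otherwise a corrective arrow cue.
    delta = user_angle - reference_angle
    if abs(delta) <= tolerance:
        return "green outline: good form"
    return f"arrow cue: {'extend' if delta < 0 else 'bend'} arm ({delta:+.0f} deg)"

shoulder, elbow, wrist = (0.0, 1.4), (0.25, 1.2), (0.45, 1.35)  # tracked keypoints
print(cue(joint_angle(shoulder, elbow, wrist), reference_angle=160.0))
```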

Authors
Yihong Wu
Zhejiang University, Hangzhou, China
Lingyun Yu
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Jie Xu
Zhejiang University, Hangzhou, China
Dazhen Deng
Zhejiang University, Hangzhou, Zhejiang, China
Jiachen Wang
Zhejiang University, Hangzhou, Zhejiang, China
Xiao Xie
Zhejiang University, Hangzhou, Zhejiang, China
Hui Zhang
Zhejiang University, Hangzhou, Zhejiang, China
Yingcai Wu
Zhejiang University, Hangzhou, Zhejiang, China
Paper URL

https://doi.org/10.1145/3586183.3606796
