AR and VR

[A] Paper Room 10, 2021-05-13 17:00:00~2021-05-13 19:00:00 / [B] Paper Room 10, 2021-05-14 01:00:00~2021-05-14 03:00:00 / [C] Paper Room 10, 2021-05-14 09:00:00~2021-05-14 11:00:00

Conference name
CHI 2021
A Design Space Exploration of Worlds in Miniature
Abstract

Worlds-in-Miniature (WiMs) are interactive worlds within a world and combine the advantages of an input space, a cartographic map, and an overview+detail interface. They have been used across the extended virtuality spectrum for a variety of applications. Building on an analysis of examples of WiMs from the research literature we contribute a design space for WiMs based on seven design dimensions. Further, we expand upon existing definitions of WiMs to provide a definition that applies across the extended reality spectrum. We identify the design dimensions of size-scope-scale, abstraction, geometry, reference frame, links, multiples, and virtuality. Using our framework we describe existing Worlds-in-Miniature from the research literature and reveal unexplored research areas. Finally, we generate new examples of WiMs using our framework to fill some of these gaps. With our findings, we identify opportunities that can guide future research into WiMs.
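
The seven dimensions listed above lend themselves to a simple coding scheme. As a rough illustration only (the field names and example values below are our assumptions, not the paper's actual coding), a WiM example could be tagged like this:

```python
# Illustrative only: encoding the seven design dimensions named in the abstract
# as a record for tagging WiM examples. Field types and example values are
# assumptions, not the paper's actual coding scheme.
from dataclasses import dataclass

@dataclass
class WiMDesignPoint:
    size_scope_scale: str   # e.g. "tabletop miniature of a room-scale scene"
    abstraction: str        # e.g. "realistic" vs. "schematic"
    geometry: str           # e.g. "planar" vs. "volumetric"
    reference_frame: str    # e.g. "world-fixed", "body-fixed", "hand-held"
    links: str              # e.g. "bidirectional selection", "view-only"
    multiples: int          # number of simultaneous miniatures
    virtuality: str         # position on the extended-reality spectrum

# Example: a classic hand-held WiM used for remote selection.
classic_wim = WiMDesignPoint(
    size_scope_scale="tabletop miniature of the full scene",
    abstraction="realistic",
    geometry="volumetric",
    reference_frame="hand-held",
    links="bidirectional selection",
    multiples=1,
    virtuality="fully virtual",
)
print(classic_wim.reference_frame)
```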

Authors
Kurtis Danyluk
University of Calgary, Calgary, Alberta, Canada
Barrett Ens
Monash University, Melbourne, Australia
Bernhard Jenny
Monash University, Melbourne, Australia
Wesley Willett
University of Calgary, Calgary, Alberta, Canada
DOI

10.1145/3411764.3445098

Paper URL

https://doi.org/10.1145/3411764.3445098

Video
Armstrong: An Empirical Examination of Pointing at Non-Dominant Arm-Anchored UIs in Virtual Reality
Abstract

In virtual reality (VR) environments, asymmetric bimanual interaction techniques can increase users' input bandwidth by complementing their perceptual and motor systems (e.g., using the dominant hand to select 3D UI controls anchored around the non-dominant arm). However, it is unclear how to optimize the layout of such 3D UI controls for near-body and mid-air interactions. We evaluate the performance and limitations of non-dominant arm-anchored 3D UIs in VR environments through a bimanual pointing study. Results demonstrated that targets appearing closer to the skin, located around the wrist, or placed on the medial side of the forearm could be selected more quickly than targets farther away from the skin, located around the elbow, or on the lateral side of the forearm. Based on these results, we developed Armstrong guidelines, demonstrated through a Unity plugin to enable designers to create performance-optimized arm-anchored 3D UI layouts.
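
As a loose illustration of how the reported findings (targets closer to the skin, near the wrist, and on the medial side were faster) could inform layout tools, here is a hypothetical scoring heuristic. The weights, the linear form, and the function names are assumptions, not the published Armstrong guidelines or the Unity plugin's API:

```python
# Hypothetical scoring heuristic inspired by the reported findings: targets
# closer to the skin, near the wrist, and on the medial forearm were selected
# faster. Weights and the linear form are assumptions for illustration only.
def anchor_score(skin_offset_cm, wrist_distance_cm, medial=True,
                 w_skin=1.0, w_wrist=0.5, w_side=2.0):
    """Lower score = likely faster to acquire."""
    penalty = w_skin * skin_offset_cm + w_wrist * wrist_distance_cm
    if not medial:
        penalty += w_side
    return penalty

candidates = [
    {"name": "wrist, on skin, medial",    "skin": 0,  "wrist": 0,  "medial": True},
    {"name": "elbow, 10 cm off, lateral", "skin": 10, "wrist": 25, "medial": False},
]
best = min(candidates, key=lambda c: anchor_score(c["skin"], c["wrist"], c["medial"]))
print(best["name"])   # -> "wrist, on skin, medial"
```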

Authors
Zhen Li
University of Toronto, Toronto, Ontario, Canada
Joannes Chan
Chatham Labs, Toronto, Ontario, Canada
Joshua Walton
Facebook, Redmond, Washington, United States
Hrvoje Benko
Facebook, Redmond, Washington, United States
Daniel Wigdor
University of Toronto, Toronto, Ontario, Canada
Michael Glueck
Chatham Labs, Toronto, Ontario, Canada
DOI

10.1145/3411764.3445064

Paper URL

https://doi.org/10.1145/3411764.3445064

Video
JetController: High-speed Ungrounded 3-DoF Force Feedback Controllers using Air Propulsion Jets
Abstract

JetController is a novel haptic technology capable of supporting high-speed and persistent 3-DoF ungrounded force feedback. It uses high-speed pneumatic solenoid valves to modulate compressed air to achieve 20-50Hz of full impulses at 4.0-1.0N, and combines multiple air propulsion jets to generate 3-DoF force feedback. Compared to propeller-based approaches, JetController supports 10-30 times faster impulse frequency, and its handheld device is significantly lighter and more compact. JetController supports a wide range of haptic events in games and VR experiences, from firing automatic weapons in games like Halo (15Hz) to slicing fruits in Fruit Ninja (up to 45Hz). To evaluate JetController, we integrated our prototype with two popular VR games, Half-life: Alyx and Beat Saber, to support a variety of 3D interactions. Study results showed that JetController significantly improved realism, enjoyment, and overall experience compared to commercial vibrating controllers, and was preferred by most participants.
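
The abstract states that several air propulsion jets are combined to produce 3-DoF force feedback. A minimal sketch of one way to do that mapping is shown below, assuming six opposed nozzles aligned with the device axes; the nozzle layout and thrust cap are assumptions, not the actual JetController hardware:

```python
# Sketch: map a desired 3-DoF force to per-jet thrusts, assuming six opposed
# nozzles aligned with the device axes. The nozzle layout and the 4 N thrust
# cap are assumptions; only the idea of combining jets comes from the abstract.
import numpy as np

def jet_commands(desired_force, max_thrust=4.0):
    """Thrusts in newtons for jets ordered [+x, +y, +z, -x, -y, -z]."""
    f = np.asarray(desired_force, dtype=float)
    thrusts = np.concatenate([np.maximum(f, 0.0), np.maximum(-f, 0.0)])
    return np.clip(thrusts, 0.0, max_thrust)  # jets can only push, never pull

# A 1.5 N push along +x combined with 2 N along +z:
print(jet_commands([1.5, 0.0, 2.0]))

# For arbitrary (non-orthogonal) nozzle layouts, a non-negative least-squares
# solve over the nozzle direction matrix (e.g. scipy.optimize.nnls) would be
# the more general choice.
```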

Authors
Yu-Wei Wang
National Taiwan University, Taipei, Taiwan
Yu-Hsin Lin
National Taiwan University, Taipei, Taiwan
Pin-Sung Ku
National Taiwan University, Taipei, Taiwan
Yōko Miyatake
Ochanomizu University, Tokyo, Japan
Yi-Hsuan Mao
National Taiwan University, Taipei, Taiwan
Po-Yu Chen
National Taiwan University, Taipei, Taiwan
Chun-Miao Tseng
National Taiwan University, Taipei, Taiwan
Mike Y. Chen
National Taiwan University, Taipei, Taiwan
DOI

10.1145/3411764.3445549

Paper URL

https://doi.org/10.1145/3411764.3445549

Video
GamesBond: Bimanual Haptic Illusion of Physically Connected Objects for Immersive VR Using Grip Deformation
Abstract

Virtual Reality experiences, such as games and simulations, typically support the use of bimanual controllers to interact with virtual objects. To recreate the haptic sensation of holding objects of various shapes and behaviors with both hands, previous researchers have used mechanical linkages between the controllers that render adjustable stiffness. However, such a linkage cannot quickly adapt to simulate dynamic objects, nor can it be removed to support free movements. This paper introduces GamesBond, a pair of 4-DoF controllers without a physical linkage that nevertheless create the illusion of being connected as a single device, forming a virtual bond. The two controllers work together by dynamically displaying and physically rendering deformations of the hand grips, allowing users to perceive a single connected object between the hands, such as a jumping rope. With a user study and various applications, we show that GamesBond increases the realism, immersion, and enjoyment of bimanual interaction.

Award
Honorable Mention
Authors
Neung Ryu
KAIST, Daejeon, Korea, Republic of
Hye-Young Jo
KAIST, Daejeon, Korea, Republic of
Michel Pahud
Microsoft Research, Redmond, Washington, United States
Mike Sinclair
Microsoft, Redmond, Washington, United States
Andrea Bianchi
KAIST, Daejeon, Korea, Republic of
DOI

10.1145/3411764.3445727

Paper URL

https://doi.org/10.1145/3411764.3445727

Video
Juicy Haptic Design: Vibrotactile Embellishments Can Improve Player Experience in Games
Abstract

Game designers and researchers employ a sophisticated language for producing great player experiences with concepts such as juiciness, which refers to excessive positive feedback. However, much of their discourse excludes the role and value of haptic feedback. In this paper, we adapt terminology from game design to study haptic feedback. Specifically, we define haptic embellishments (HEs) as haptic feedback that reinforces information already provided through other means (e.g., via visual feedback), and juicy haptics as excessive positive haptic feedback intended to improve user experience in games and other interactive media. We report two empirical studies of users' experiences interacting with visuo-haptic content on their phones to 1) study participants' preferences for ten design principles for HEs and 2) measure the added value of juicy haptics, implemented as HEs, on player experience in a game. Results indicate that juicy haptics can enhance enjoyability, aesthetic appeal, immersion, and meaning.

Authors
Tanay Singhal
University of Waterloo, Waterloo, Ontario, Canada
Oliver Schneider
University of Waterloo, Waterloo, Ontario, Canada
DOI

10.1145/3411764.3445463

Paper URL

https://doi.org/10.1145/3411764.3445463

Video
Elevate: A Walkable Pin-Array for Large Shape-Changing Terrains
Abstract

Current head-mounted displays enable users to explore virtual worlds by simply walking through them (i.e., real-walking VR). This led researchers to create haptic displays that can also simulate different types of elevation shapes. However, existing shape-changing floors are limited by their tabletop scale or the coarse resolution of the terrains they can display due to the limited number of actuators and low vertical resolution. To tackle this challenge, we introduce Elevate, a dynamic and walkable pin-array floor on which users can experience not only large variations in shapes but also the details of the underlying terrain. Our system achieves this by packing 1200 pins arranged on a 1.80 x 0.60m platform, in which each pin can be actuated to one of ten height levels (resolution: 15mm/level). To demonstrate its applicability, we present our haptic floor combined with four walkable applications and a user study that reported increased realism and enjoyment.
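
Using the figures given in the abstract (1200 pins, ten height levels at 15 mm per level), a terrain heightmap can be quantized to per-pin level commands roughly as follows. The 60 × 20 pin grid is an assumption, since only the pin count and the platform size are stated:

```python
# Sketch: quantize a continuous terrain heightmap to the pin resolution given
# in the abstract (ten levels, 15 mm per level, 1200 pins). The 60 x 20 grid
# is an assumption; only the pin count and the 1.80 x 0.60 m platform size are
# stated.
import numpy as np

LEVELS, MM_PER_LEVEL = 10, 15
GRID_X, GRID_Y = 60, 20               # assumed layout; GRID_X * GRID_Y == 1200

def pin_levels(terrain_mm):
    """terrain_mm: (GRID_X, GRID_Y) array of desired heights in millimetres."""
    levels = np.rint(np.asarray(terrain_mm, dtype=float) / MM_PER_LEVEL)
    return np.clip(levels, 0, LEVELS - 1).astype(int)  # per-pin level command

# Example: a gentle ramp along the long axis of the floor.
ramp = np.tile(np.linspace(0, (LEVELS - 1) * MM_PER_LEVEL, GRID_X)[:, None],
               (1, GRID_Y))
print(pin_levels(ramp).max())         # the highest pins sit at level 9
```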

Authors
Seungwoo Je
KAIST, Daejeon, Korea, Republic of
Hyunseung Lim
KAIST, Daejeon, Korea, Republic of
Kongpyung Moon
KAIST, Daejeon, Korea, Republic of
Shan-Yuan Teng
University of Chicago, Chicago, Illinois, United States
Jas Brooks
University of Chicago, Chicago, Illinois, United States
Pedro Lopes
University of Chicago, Chicago, Illinois, United States
Andrea Bianchi
KAIST, Daejeon, Korea, Republic of
DOI

10.1145/3411764.3445454

Paper URL

https://doi.org/10.1145/3411764.3445454

Video
Locomotion Vault: the Extra Mile in Analyzing VR Locomotion Techniques
Abstract

Numerous techniques have been proposed for locomotion in virtual reality (VR). Several taxonomies consider a large number of attributes (e.g., hardware, accessibility) to characterize these techniques. However, finding the appropriate locomotion technique (LT) and identifying gaps for future designs in the high-dimensional space of attributes can be quite challenging. To aid analysis and innovation, we devised Locomotion Vault (https://locomotionvault.github.io/), a database and visualization of over 100 LTs from academia and industry. We propose similarity between LTs as a metric to aid navigation and visualization. We show that similarity based on attribute values correlates with expert similarity assessments (a method that does not scale). Our analysis also highlights an inherent trade-off between simulation sickness and accessibility across LTs. As such, Locomotion Vault is a tool that unifies information on LTs and enables their standardization and large-scale comparison, helping to make sense of the space of possibilities in VR locomotion.
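
The abstract reports that similarity computed from attribute values correlates with expert judgments. One plausible attribute-based similarity (an assumption for illustration, not necessarily the database's actual metric) is the fraction of matching categorical attributes:

```python
# Sketch of an attribute-based similarity between locomotion techniques (LTs):
# the fraction of matching categorical attributes (a Gower-style score). The
# attribute names and this particular metric are assumptions; the abstract only
# states that attribute-value similarity correlates with expert judgments.
def lt_similarity(a: dict, b: dict) -> float:
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(a[k] == b[k] for k in shared) / len(shared)

teleport = {"hardware": "controller", "metaphor": "discrete jump",
            "posture": "standing", "accessibility": "seated-compatible"}
arm_swing = {"hardware": "controller", "metaphor": "continuous walking",
             "posture": "standing", "accessibility": "requires arm motion"}
print(lt_similarity(teleport, arm_swing))   # -> 0.5
```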

Authors
Massimiliano Di Luca
University of Birmingham, Birmingham, United Kingdom
Hasti Seifi
University of Copenhagen, Copenhagen, Denmark
Simon Egan
University of Washington, Seattle, Washington, United States
Mar Gonzalez-Franco
Microsoft Research, Redmond, Washington, United States
DOI

10.1145/3411764.3445319

Paper URL

https://doi.org/10.1145/3411764.3445319

Video
Phonetroller: Visual Representations of Fingers for Precise Touch Input when using a Phone in VR
Abstract

Smartphone touch screens are potentially attractive for interaction in virtual reality (VR). However, the user cannot see the phone or their hands in a fully immersive VR setting, impeding their ability for precise touch input. We propose mounting a mirror above the phone screen such that the front-facing camera captures the thumbs on or near the screen. This enables the creation of semi-transparent overlays of thumb shadows and inference of fingertip hover points with deep learning, which help the user aim for targets on the phone. A study compares the effect of visual feedback on touch precision in a controlled task and qualitatively evaluates three example applications demonstrating the potential of the technique. The results show that the enabled style of feedback is effective for thumb-size targets, and that the VR experience can be enriched by using smartphones as VR controllers supporting precise touch input.
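
A minimal sketch of the overlay step described above, compositing a semi-transparent thumb-shadow mask onto the phone-screen texture rendered in VR. The segmentation model that would produce the mask, and the chosen opacity, are not specified in the abstract and are assumed here:

```python
# Sketch of the overlay step: alpha-blend a thumb-shadow mask (e.g. segmented
# from the mirrored front-camera image) onto the phone-screen texture rendered
# in VR. The segmentation itself is omitted; the 0.4 opacity is arbitrary.
import numpy as np

def overlay_thumb_shadow(screen_rgb, thumb_mask, opacity=0.4, shadow=(0.0, 0.0, 0.0)):
    """screen_rgb: (H, W, 3) floats in [0, 1]; thumb_mask: (H, W) floats in [0, 1]."""
    alpha = (thumb_mask * opacity)[..., None]
    return screen_rgb * (1.0 - alpha) + np.asarray(shadow, dtype=float) * alpha

screen = np.ones((480, 270, 3))                   # plain white screen texture
mask = np.zeros((480, 270))
mask[300:400, 60:120] = 1.0                       # fake thumb blob near the bottom
vr_texture = overlay_thumb_shadow(screen, mask)   # texture to show on the VR phone
```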

Authors
Fabrice Matulic
Preferred Networks Inc., Tokyo, Japan
Aditya Ganeshan
Preferred Networks Inc., Tokyo, Japan
Hiroshi Fujiwara
Preferred Networks Inc., Tokyo, Japan
Daniel Vogel
University of Waterloo, Waterloo, Ontario, Canada
DOI

10.1145/3411764.3445583

Paper URL

https://doi.org/10.1145/3411764.3445583

Video
Ninja Hands: Using Many Hands to Improve Target Selection in VR
Abstract

Selection and manipulation in virtual reality often happen using an avatar's hands. However, objects outside the immediate reach require effort to select. We develop a target selection technique called Ninja Hands. It maps the movement of a single real hand to many virtual hands, decreasing the distance to targets. We evaluate Ninja Hands in two studies. The first study shows that compared to a single hand, 4 and 8 hands are significantly faster for selecting targets. The second study complements this finding by using a larger target layout with many distractors. We find no decrease in selection time across 8, 27, and 64 hands, but an increase in the time spent deciding which hand to use. Thereby, net movement time still decreases significantly. In both studies, the physical motion exerted also decreases significantly with more hands. We discuss how these findings can inform future implementations of the Ninja Hands technique.
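
The core mapping, one tracked hand driving many offset virtual hands, can be sketched as follows. The grid layout, spacing, and nearest-hand selection rule are illustrative assumptions rather than the authors' implementation:

```python
# Sketch of the core mapping: one tracked hand position drives N virtual hands
# at fixed offsets, so some hand is usually near the target. The 2 x 2 x 2 grid,
# the spacing, and the nearest-hand selection rule are illustrative assumptions.
import numpy as np

def virtual_hand_positions(real_hand_pos, grid=(2, 2, 2), spacing=0.6):
    """Return an (N, 3) array of virtual hand positions for one real hand position."""
    nx, ny, nz = grid
    offsets = np.array([[i, j, k] for i in range(nx)
                                  for j in range(ny)
                                  for k in range(nz)], dtype=float)
    offsets -= offsets.mean(axis=0)          # centre the grid on the real hand
    return np.asarray(real_hand_pos, dtype=float) + offsets * spacing

def selecting_hand(hands, target_pos):
    """Index of the virtual hand closest to the target."""
    dists = np.linalg.norm(hands - np.asarray(target_pos, dtype=float), axis=1)
    return int(np.argmin(dists))

hands = virtual_hand_positions([0.0, 1.2, 0.3])          # 8 virtual hands
print(selecting_hand(hands, target_pos=[1.0, 1.5, 1.0]))
```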

Authors
Jonas Schjerlund
University of Copenhagen, Copenhagen, Denmark
Kasper Hornbæk
University of Copenhagen, Copenhagen, Denmark
Joanna Bergström
University of Copenhagen, Copenhagen, Denmark
DOI

10.1145/3411764.3445759

Paper URL

https://doi.org/10.1145/3411764.3445759

Video
Comparison of Different Types of Augmented Reality Visualizations for Instructions
Abstract

Augmented Reality (AR) is increasingly being used for providing guidance and supporting troubleshooting in industrial settings. While the general application of AR has been shown to provide clear benefits for physical tasks, it is important to understand how different visualization types influence users' performance during the execution of those tasks. Previous studies evaluating AR and user performance compared different media types or types of AR hardware rather than different types of visualization on the same hardware. This paper provides details of our comparative study in which we identified the influence of visualization types on the performance of complex machine set-up processes. Although our results show clear advantages to using concrete rather than abstract visualizations, we also find that abstract visualizations coupled with videos lead to user performance similar to that with concrete visualizations.

Authors
Florian Jasche
University of Siegen, Siegen, Germany
Sven Hoffmann
University of Siegen, Siegen, Germany
Thomas Ludwig
University of Siegen, Siegen, Germany
Volker Wulf
Institute of Information Systems and New Media, Siegen, Germany
DOI

10.1145/3411764.3445724

Paper URL

https://doi.org/10.1145/3411764.3445724

Video
vMirror: Enhancing the Interaction with Occluded or Distant Objects in VR with Virtual Mirrors
Abstract

Interacting with out-of-reach or occluded VR objects can be cumbersome. Although users can change their position and orientation, such as via teleporting, to help observe and select, doing so frequently may cause loss of spatial orientation or motion sickness. We present vMirror, an interactive widget leveraging the reflection of mirrors to observe and select distant or occluded objects. We first designed interaction techniques for placing mirrors and interacting with objects through mirrors. We then conducted a formative study to explore a semi-automated mirror placement method with manual adjustments. Next, we conducted a target-selection experiment to measure the effect of the mirror's orientation on users' performance. Results showed that vMirror can be as efficient as direct target selection for most mirror orientations. We further compared vMirror with the teleport technique in a virtual treasure-hunt game and measured participants' task performance and subjective experiences. Finally, we discuss the vMirror user experience and present future directions.
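
Selecting through a mirror ultimately relies on standard plane-reflection geometry: reflect the pointing ray across the mirror plane and test the reflected ray against the scene. The small sketch below shows only that geometry; the surrounding API shape is assumed:

```python
# Sketch of the geometry behind selecting through a virtual mirror: reflect the
# pointing ray across the mirror plane and intersect the reflected ray with the
# scene. Only the reflection math is standard; the API shape is an assumption.
import numpy as np

def reflect_ray(origin, direction, mirror_point, mirror_normal):
    n = np.asarray(mirror_normal, dtype=float)
    n = n / np.linalg.norm(n)
    o, d = np.asarray(origin, dtype=float), np.asarray(direction, dtype=float)
    # Mirror a point p across the plane through q with normal n:
    #   p' = p - 2 * ((p - q) . n) * n
    o_ref = o - 2.0 * np.dot(o - np.asarray(mirror_point, dtype=float), n) * n
    d_ref = d - 2.0 * np.dot(d, n) * n
    return o_ref, d_ref / np.linalg.norm(d_ref)

# Ray aimed at a wall-mounted mirror; the reflected ray points back into the room.
print(reflect_ray(origin=[0, 1.6, 0], direction=[0, 0, -1],
                  mirror_point=[0, 1.5, -2], mirror_normal=[0, 0, 1]))
```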

Authors
Nianlong Li
Institute of Software, Chinese Academy of Sciences, Beijing, China
Zhengquan Zhang
Xi'an Jiaotong University, Xi'an, China
Can Liu
City University of Hong Kong, Kowloon, Hong Kong
Zengyao Yang
Xi'an Jiaotong University, Xi'an, China
Yinan Fu
Xiamen University, Xiamen, China
Feng Tian
Institute of Software, Chinese Academy of Sciences, Beijing, China
Teng Han
Institute of Software, Chinese Academy of Sciences, Beijing, China
Mingming Fan
Rochester Institute of Technology, Rochester, New York, United States
DOI

10.1145/3411764.3445537

Paper URL

https://doi.org/10.1145/3411764.3445537

Video
HairTouch: Providing Stiffness, Roughness and Surface Height Differences Using Reconfigurable Brush Hairs on a VR Controller
要旨

Tactile feedback is widely used to enhance realism in virtual reality (VR). When touching virtual objects, stiffness and roughness are common and obvious factors perceived by users. Furthermore, when touching a surface with a complicated structure, differences not only in stiffness and roughness but also in surface height are crucial. To integrate these factors, we propose a pin-based handheld device, HairTouch, that provides stiffness differences, roughness differences, surface height differences, and their combinations. HairTouch consists of two pins, one for each of the two finger segments closest to the index fingertip. By controlling the brush hairs' length and bending direction to change their elasticity and tip direction, each pin renders various levels of stiffness and roughness. By further independently controlling the hairs' configuration and the pins' height, versatile stiffness, roughness, and surface height differences are achieved. We conducted a perception study to assess users' ability to distinguish stiffness and roughness on each of the segments. Based on the results, we performed a VR experience study to verify that the tactile feedback from HairTouch enhances VR realism.

Authors
Chi-Jung Lee
National Taiwan University, Taipei, Taiwan
Hsin-Ruey Tsai
National Chengchi University, Taipei, Taiwan
Bing-Yu Chen
National Taiwan University, Taipei, Taiwan
DOI

10.1145/3411764.3445285

Paper URL

https://doi.org/10.1145/3411764.3445285

Video
GuideBand: Intuitive 3D Multilevel Force Guidance on a Wristband in Virtual Reality
Abstract

For haptic guidance, vibrotactile feedback is a commonly used mechanism, but it requires users to interpret its complicated patterns, especially in 3D guidance, which is not intuitive and increases mental effort. Furthermore, for haptic guidance in virtual reality (VR), not only guidance performance but also realism should be considered. Since vibrotactile feedback interferes with and reduces VR realism, it may not be appropriate for VR haptic guidance. Therefore, we propose a wearable device, GuideBand, that provides intuitive 3D multilevel force guidance on the forearm, reproducing the effect of the forearm being pulled and guided by a virtual guide or telepresent person in VR. GuideBand uses three motors to pull a wristband at different force levels in 3D space. Such feedback usually requires much larger and heavier robotic arms or exoskeletons. We conducted a just-noticeable-difference study to understand users' ability to distinguish force levels. Based on the results, we performed a study verifying that, compared with state-of-the-art vibrotactile guidance, GuideBand is more intuitive, requires less mental effort, and achieves similar guidance performance. We further conducted a VR experience study to observe how users combine and complement visual and force guidance, and to confirm that GuideBand enhances realism in VR guidance.
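
As a rough sketch of how a desired 3D guidance force might be turned into discrete tensions on three tendons, the code below solves for per-motor pull and snaps it to force levels. The tendon directions, level count, and force per level are assumptions made for illustration, not GuideBand's actual hardware parameters:

```python
# Sketch: turn a desired 3D guidance force into discrete tension levels on
# three motor-driven tendons. Tendon directions, level count, and force per
# level are assumptions, not GuideBand's parameters.
import numpy as np

TENDON_DIRS = np.array([
    [ 0.00,  1.0, 0.3],   # roughly "up", angled slightly along the arm
    [ 0.87, -0.5, 0.3],   # "down-right"
    [-0.87, -0.5, 0.3],   # "down-left"
]).T                      # columns are pull directions, shape (3, 3)

def tendon_levels(desired_force, n_levels=3, newtons_per_level=1.0):
    tensions = np.linalg.solve(TENDON_DIRS, np.asarray(desired_force, dtype=float))
    tensions = np.clip(tensions, 0.0, None)              # tendons can only pull
    levels = np.rint(tensions / newtons_per_level)
    return np.clip(levels, 0, n_levels).astype(int)      # discrete per-motor command

print(tendon_levels([0.0, 2.0, 0.0]))   # an upward pull engages mostly the top tendon
```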

Authors
Hsin-Ruey Tsai
National Chengchi University, Taipei, Taiwan
Yuan-Chia Chang
Kyoto University, Kyoto, Japan
Tzu-Yun Wei
National Taiwan University, Taipei, Taiwan
Chih-An Tsao
National Taiwan University, Taipei, Taiwan
Xander Koo
Pomona College, Claremont, California, United States
Hao-Chuan Wang
UC Davis, Davis, California, United States
Bing-Yu Chen
National Taiwan University, Taipei, Taiwan
DOI

10.1145/3411764.3445262

Paper URL

https://doi.org/10.1145/3411764.3445262

Video