Augmented or Diminished Reality?

Conference
CHI 2022
Towards Understanding Diminished Reality
Abstract

Diminished reality (DR) refers to the concept of removing content from a user's visual environment. While its implementation is becoming feasible, it is still unclear how users perceive and interact in DR-enabled environments and what applications it benefits. To address this challenge, we first conduct a formative study to compare user perceptions of DR and mediated reality effects (e.g., changing the color or size of target elements) in four example scenarios. Participants preferred removing objects through opacity reduction (i.e., the standard DR implementation) and appreciated mechanisms for maintaining a contextual understanding of diminished items (e.g., outlining). In a second study, we explore the user experience of performing tasks within DR-enabled environments. Participants selected which objects to diminish and the magnitude of the effects when performing two separate tasks (video viewing, assembly). Participants were comfortable with decreased contextual understanding, particularly for less mobile tasks. Based on the results, we define guidelines for creating general DR-enabled environments.

Authors
Yi Fei Cheng
Swarthmore College, Swarthmore, Pennsylvania, United States
Hang Yin
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yukang Yan
Tsinghua University, Beijing, China
Jan Gugenheimer
Institut Polytechnique de Paris, Paris, France
David Lindlbauer
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517452

Video
Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality
Abstract

Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to contextual changes as people move around? In current operating systems, the majority of AR content defaults to staying at a fixed location until manually moved by the user. However, this approach puts the burden of user interface (UI) transition solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. Then we addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, fully-automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, user agency, and the impact of prediction errors.

Authors
Feiyu Lu
Virginia Tech, Blacksburg, Virginia, United States
Yan Xu
Facebook, Redmond, Washington, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517723

Video
Paracentral and near-peripheral visualizations: Towards attention-maintaining secondary information presentation on OHMDs during in-person social interactions
Abstract

Optical see-through Head-Mounted Displays (OST HMDs, OHMDs) are known to facilitate situational awareness while accessing secondary information. However, information displayed on OHMDs can cause attention shifts, which distract users from natural social interactions. We hypothesize that information displayed in paracentral and near-peripheral vision can be better perceived while the user is maintaining eye contact during face-to-face conversations. Leveraging this idea, we designed a circular progress bar to provide progress updates in paracentral and near-peripheral vision. We compared it with textual and linear progress bars under two conversation settings: a simulated one with a digital conversation partner and a realistic one with a real partner. Results show that a circular progress bar can effectively reduce notification distractions without losing eye contact and is more preferred by users. Our findings highlight the potential of utilizing the paracentral and near-peripheral vision for secondary information presentation on OHMDs.

Award
Honorable Mention
Authors
Nuwan Nanayakkarawasam Peru Kandage Janaka
National University of Singapore, Singapore, Singapore
Chloe Haigh
National University of Singapore, Singapore, Singapore
Hyeongcheol Kim
National University of Singapore, Singapore, Singapore
Shan Zhang
National University of Singapore, Singapore, Singapore
Shengdong Zhao
National University of Singapore, Singapore, Singapore
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502127

Video
Dually Noted: Layout-Aware Annotations with Smartphone Augmented Reality
Abstract

Sharing annotations encourages feedback, discussion, and knowledge passing among readers and can be beneficial for personal and public use. Prior augmented reality (AR) systems have expanded these benefits to both digital and printed documents. However, despite smartphone AR now being widely available, there is a lack of research about how to use AR effectively for interactive document annotation. We propose Dually Noted, a smartphone-based AR annotation system that recognizes the layout of structural elements in a printed document for real-time authoring and viewing of annotations. We conducted experience prototyping with eight users to elicit potential benefits and challenges within smartphone AR, and this informed the resulting Dually Noted system and annotation interactions with the document elements. AR annotation is often unwieldy, but during a 12-user empirical study our novel structural understanding component allows Dually Noted to improve precise highlighting and annotation interaction accuracy by 13%, increase interaction speed by 42%, and significantly lower cognitive load over a baseline method without document layout understanding. Qualitatively, participants commented that Dually Noted was a swift and portable annotation experience. Overall, our research provides new methods and insights for how to improve AR annotations for physical documents.

Authors
Jing Qian
Brown University, Providence, Rhode Island, United States
Qi Sun
New York University, New York, New York, United States
Curtis Wigington
Adobe Research, San Jose, California, United States
Han L. Han
Université Paris-Saclay, CNRS, Inria, Orsay, France
Tong Sun
Adobe Research, San Jose, California, United States
Jennifer Healey
Adobe Research, San Jose, California, United States
James Tompkin
Brown University, Providence, Rhode Island, United States
Jeff Huang
Brown University, Providence, Rhode Island, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502026

Video
Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
Abstract

This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, often research remains focused on individual explorations and key design strategies, and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field in the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.

Authors
Ryo Suzuki
University of Calgary, Calgary, Alberta, Canada
Adnan Karim
University of Calgary, Calgary, Alberta, Canada
Tian Xia
University of Calgary, Calgary, Alberta, Canada
Hooman Hedayati
University of Colorado Boulder, Boulder, Colorado, United States
Nicolai Marquardt
University College London, London, United Kingdom
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517719

Video