Video, XR, Perception, & Visualization

[A] Paper Room 14, 2021-05-11 17:00:00~2021-05-11 19:00:00 / [B] Paper Room 14, 2021-05-12 01:00:00~2021-05-12 03:00:00 / [C] Paper Room 14, 2021-05-12 09:00:00~2021-05-12 11:00:00

Conference Name
CHI 2021
Little Road Driving HUD: Heads-Up Display Complexity Influences Drivers’ Perceptions of Automated Vehicles
Abstract

Modern vehicles are using AI and increasingly sophisticated sensor suites to improve Advanced Driving Assistance Systems (ADAS) and support automated driving capabilities. Heads-Up Displays (HUDs) provide an opportunity to visually inform drivers about vehicle perception and interpretation of the driving environment. One approach to HUD design may be to reveal to drivers the vehicle’s full contextual understanding, though it is not clear if the benefits of additional information outweigh the drawbacks of added complexity, or if this balance holds across drivers. We designed and tested an Augmented Reality (AR) HUD in an online study (N=298), focusing on the influence of HUD visualizations on drivers’ situation awareness and perceptions. Results were nuanced: situation awareness declined with increasing driving context complexity, and contrary to expectation, also declined with the presence of a HUD compared to no HUD. Participants viewed two driving scenes with one of three HUD conditions. Significant differences were found by varying HUD complexity, which led us to explore different characterizations of complexity, including counts of scene items, item categories, and illuminated pixels. Our analysis finds that driving style interacts with driving context and HUD complexity, warranting further study.

Authors
Rebecca Currano
Stanford University, Stanford, California, United States
So Yeon Park
Stanford University, Stanford, California, United States
Dylan James Moore
Stanford University, Stanford, California, United States
Kent Lyons
Toyota Research Institute, Los Altos, California, United States
David Sirkin
Stanford University, Stanford, California, United States
DOI

10.1145/3411764.3445575

Paper URL

https://doi.org/10.1145/3411764.3445575

Video
From FOMO to JOMO: Examining the Fear and Joy of Missing Out and Presence in a 360° Video Viewing Experience
Abstract

Cinematic Virtual Reality (CVR), or 360° video, engages users in immersive viewing experiences. However, as users watch one part of the 360° view, they will necessarily miss out on events happening in other parts of the sphere. Consequently, fear of missing out (FOMO) is unavoidable. However, users can also experience the joy of missing out (JOMO). In a repeated measures, mixed methods design, we examined FOMO, JOMO, and sense of presence in two repeat viewings of a 360° film using a head-mounted display. We found that users experienced both FOMO and JOMO; FOMO was caused by users' awareness of parallel events in the spherical view. FOMO did not compromise viewers' sense of presence, and FOMO also decreased in the second viewing session, while JOMO remained constant. The findings suggest that FOMO and JOMO can be two integral qualities in an immersive video viewing experience and that FOMO may not be as negative a factor as previously thought.

Authors
Tanja Aitamurto
University of Illinois at Chicago, Chicago, Illinois, United States
Andrea Stevenson Won
Cornell University, Ithaca, New York, United States
Sukolsak Sakshuwong
Stanford University, Stanford, California, United States
Byungdoo Kim
Cornell University, Ithaca, New York, United States
Yasamin Sadeghi
University of California, Los Angeles, Los Angeles, California, United States
Krysten Stein
University of Illinois at Chicago, Chicago, Illinois, United States
Peter G. Royal
University of Illinois at Chicago, Chicago, Illinois, United States
Catherine Lynn Kircos
Evidation Health, San Mateo, California, United States
DOI

10.1145/3411764.3445183

Paper URL

https://doi.org/10.1145/3411764.3445183

Video
RCEA-360VR: Real-time, Continuous Emotion Annotation in 360° VR Videos for Collecting Precise Viewport-dependent Ground Truth Labels
Abstract

Precise emotion ground truth labels for 360° virtual reality (VR) video watching are essential for fine-grained predictions under varying viewing behavior. However, current annotation techniques either rely on post-stimulus discrete self-reports, or real-time, continuous emotion annotations (RCEA) but only for desktop/mobile settings. We present RCEA for 360° VR videos (RCEA-360VR), where we evaluate in a controlled study (N=32) the usability of two peripheral visualization techniques: HaloLight and DotSize. We furthermore develop a method that considers head movements when fusing labels. Using physiological, behavioral, and subjective measures, we show that (1) neither technique increases users' workload or sickness, nor breaks presence; (2) our continuous valence and arousal annotations are consistent with discrete within-VR and original stimuli ratings; and (3) users exhibit high similarity in viewing behavior, where fused ratings perfectly align with intended labels. Our work contributes usable and effective techniques for collecting fine-grained viewport-dependent emotion labels in 360° VR.

Authors
Tong Xue
Beijing Institute of Technology, Beijing, China
Abdallah El Ali
Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands
Tianyi Zhang
Centrum Wiskunde & Informatica, Amsterdam, Netherlands
Gangyi Ding
Beijing Institute of Technology, Beijing, China
Pablo Cesar
CWI, Amsterdam, Netherlands
DOI

10.1145/3411764.3445487

Paper URL

https://doi.org/10.1145/3411764.3445487

Video
Do You Really Need to Know Where 'That' Is? Enhancing Support for Referencing in Collaborative Mixed Reality Environments
Abstract

Mixed Reality has been shown to enhance remote guidance and is especially well-suited for physical tasks. Conversations during these tasks are heavily anchored around task objects and their spatial relationships in the real world, making referencing - the ability to refer to an object in a way that is understood by others - a crucial process that warrants explicit support in collaborative Mixed Reality systems. This paper presents a 2×2 mixed factorial experiment that explores the effects of providing spatial information and system-generated guidance to task objects. It also investigates the effects of such guidance on the remote collaborator's need for spatial information. Our results show that guidance increases performance and communication efficiency while reducing the need for spatial information, especially in unfamiliar environments. Our results also demonstrate a reduced need for remote experts to be in immersive environments, making guidance more scalable, and expertise more accessible.

Authors
Janet G. Johnson
UC San Diego, La Jolla, California, United States
Danilo Gasques
University of California San Diego, San Diego, California, United States
Tommy Sharkey
UC San Diego, La Jolla, California, United States
Evan Schmitz
University of Washington, Seattle, Washington, United States
Nadir Weibel
UC San Diego, La Jolla, California, United States
DOI

10.1145/3411764.3445246

Paper URL

https://doi.org/10.1145/3411764.3445246

Video
The Image of the Interface: How People Use Landmarks to Develop Spatial Memory of Commands in Graphical Interfaces
Abstract

Graphical User Interfaces present commands at particular locations, arranged in menus, toolbars, and ribbons. One hallmark of expertise with a GUI is that experts know the locations of commonly-used commands, such that they can find them quickly and without searching. Although GUIs have been studied for many years, little is still known about how this spatial location memory develops, or how designers can make interfaces more memorable. One of the main ways that people remember locations in the real world is through landmarks, so we carried out a study to investigate how users remember commands and navigate in four common applications (Word, Facebook, Reader, and Photoshop). Our study revealed that people strongly rely on landmarks that are readily available in the interface (e.g., layout, corners, and edges) to orient themselves and remember commands. We provide new evidence that landmarks can aid spatial memory and expertise development with an interface, and guidelines for designers to improve the memorability of future GUIs.

Award
Honorable Mention
Authors
Md. Sami Uddin
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Carl Gutwin
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
DOI

10.1145/3411764.3445050

Paper URL

https://doi.org/10.1145/3411764.3445050

Video
Understanding the Design and Effectiveness of Peripheral Breathing Guide Use During Information Work
Abstract

Peripheral breathing guides – tools designed to influence breathing while completing another primary task – have been proposed to provide physiological benefits during information work. While research has shown that guides can influence breathing rates under ideal conditions, there is little evidence that they can lead to underlying markers of physiological benefit under interrupted work conditions. Further, even if guides are effective during work tasks, it is unclear how personal and workplace factors affect peoples' willingness to adopt them for everyday use. In this paper, we present the results of a comparative, mixed-methods study of five different peripheral breathing guides. Our findings show that peripheral breathing guides are viable and can provide physiological markers of benefit during interrupted work. Further, we show that guides are effective – even when use is intermittent due to workplace distractions. Finally, we contribute guidelines to support the design of breathing guides for everyday information work.

Authors
Aaron Tabor
University of New Brunswick, Fredericton, New Brunswick, Canada
Scott Bateman
University of New Brunswick, Fredericton, New Brunswick, Canada
Erik J. Scheme
University of New Brunswick, Fredericton, New Brunswick, Canada
Book Sadprasid
University of New Brunswick, Fredericton, New Brunswick, Canada
m.c. schraefel
University of Southampton, Southampton, United Kingdom
DOI

10.1145/3411764.3445388

Paper URL

https://doi.org/10.1145/3411764.3445388

Video
Understanding User Identification in Virtual Reality through Behavioral Biometrics and the Effect of Body Normalization
Abstract

Virtual Reality (VR) is becoming increasingly popular in both the entertainment and professional domains. Behavioral biometrics have recently been investigated as a means to continuously and implicitly identify users in VR. Applications in VR can specifically benefit from this, for example, to adapt virtual environments and user interfaces as well as to authenticate users. In this work, we conduct a lab study (N=16) to explore how accurately users can be identified during two task-driven scenarios based on their spatial movement. We show that an identification accuracy of up to 90% is possible across sessions recorded on different days. Moreover, we investigate the role of users' physiology in behavioral biometrics by virtually altering and normalizing their body proportions. We find that body normalization in general increases the identification rate, in some cases by up to 38%; hence, it improves the performance of identification systems.

Authors
Jonathan Liebers
University of Duisburg-Essen, Essen, Germany
Uwe Gruenefeld
University of Duisburg-Essen, Essen, Germany
Lukas Mecke
Bundeswehr University Munich, Munich, Germany
Alia Saad
University of Duisburg-Essen, Essen, Germany
Jonas Auda
University of Duisburg-Essen, Essen, North Rhine-Westphalia, Germany
Florian Alt
Bundeswehr University Munich, Munich, Germany
Stefan Schneegass
University of Duisburg-Essen, Essen, Germany
Mark Abdelaziz
German University in Cairo, Cairo, Egypt
DOI

10.1145/3411764.3445528

Paper URL

https://doi.org/10.1145/3411764.3445528

Video
From Detectables to Inspectables: Understanding Qualitative Analysis of Audiovisual Data
Abstract

Audiovisual recordings of user studies and interviews provide important data in qualitative HCI research. Even when a textual transcription is available, researchers frequently turn to these recordings due to their rich information content. However, the temporal, unstructured nature of audiovisual recordings makes them less efficient to work with than text. Through interviews and a survey, we explored how HCI researchers work with audiovisual recordings. We investigated researchers' transcription and annotation practice, their overall analysis workflow, and the prevalence of direct analysis of audiovisual recordings. We found that a key task was locating and analyzing inspectables, interesting segments in recordings. Since locating inspectables can be time consuming, participants look for detectables, visual or auditory cues that indicate the presence of an inspectable. Based on our findings, we discuss the potential for automation in locating detectables in qualitative audiovisual analysis.

Award
Honorable Mention
Authors
Krishna Subramanian
RWTH Aachen University, Aachen, Germany
Johannes Maas
RWTH Aachen University, Aachen, Germany
Jan Borchers
RWTH Aachen University, Aachen, Germany
James Hollan
UC San Diego, La Jolla, California, United States
DOI

10.1145/3411764.3445458

Paper URL

https://doi.org/10.1145/3411764.3445458

Video
Physiological and Perceptual Responses to Athletic Avatars while Cycling in Virtual Reality
Abstract

Avatars in virtual reality (VR) enable embodied experiences and induce the Proteus effect - a shift in behavior and attitude to mimic one's digital representation. Previous work found that avatars associated with physical strength can decrease users' perceived exertion when performing physical tasks. However, it is unknown if an avatar's appearance can also influence the user's physiological response to exercises. Therefore, we conducted an experiment with 24 participants to investigate the effect of avatars' athleticism on heart rate and perceived exertion while cycling in VR following a standardized protocol. We found that the avatars' athleticism has a significant and systematic effect on users' heart rate and perceived exertion. We discuss potential moderators such as body ownership and users' level of fitness. Our work contributes to the emerging area of VR exercise systems.

Authors
Martin Kocur
University of Regensburg, Regensburg, Germany
Florian Habler
University of Regensburg, Regensburg, Germany
Valentin Schwind
Frankfurt University of Applied Sciences, Frankfurt, Germany
Paweł W. Woźniak
Utrecht University, Utrecht, Netherlands
Christian Wolff
University of Regensburg, Regensburg, Bavaria, Germany
Niels Henze
University of Regensburg, Regensburg, Germany
DOI

10.1145/3411764.3445160

Paper URL

https://doi.org/10.1145/3411764.3445160

Video
“Put it on the Top, I’ll Read it Later”: Investigating Users’ Desired Display Order for Smartphone Notifications
Abstract

Smartphone users do not deal with notifications strictly in the order they are displayed, but sometimes read them from the middle, suggesting a mismatch between current systems’ display order and users’ needs. We therefore used mixed methods to investigate 34 smartphone users’ desired notification display order and related it with users’ self-reported order of attendance. Classifying using these two orders as dimensions, we obtained seven types of notifications, which helped us not only highlight the distinct attributes but also understand the implied roles of these seven types of notifications, as well as the implied meaning of display orders. This is especially manifested in our identification of three main mismatches between the two orders. Qualitative findings reveal several meanings that participants attached to particular positions when arranging notifications. We offer design implications for notification systems, including calling for a two-dimensional notification layout to support the multi-purpose roles of smartphone notifications we identified.

Authors
Tzu-Chieh Lin
National Chiao Tung University, Hsinchu, Taiwan
Yu-Shao Su
National Chiao Tung University, Hsinchu, Taiwan
Emily Helen Yang
National Chiao Tung University, Hsinchu, Taiwan
Yun Han Chen
National Chiao Tung University, Hsinchu, Taiwan
Hao-Ping Lee
National Chiao Tung University, Hsinchu, Taiwan
Yung-Ju Chang
National Chiao Tung University, Hsinchu, Taiwan
DOI

10.1145/3411764.3445384

Paper URL

https://doi.org/10.1145/3411764.3445384

Video
SoniBand: Understanding the Effects of Metaphorical Movement Sonifications on Body Perception and Physical Activity
Abstract

Negative body perceptions are a major predictor of physical inactivity, a serious health concern. Sensory feedback can be used to alter such body perception; movement sonification, in particular, has been suggested to affect body perception and levels of physical activity (PA) in inactive people. We investigated how metaphorical sounds impact body perception and PA. We report two qualitative studies centered on performing different strengthening/flexibility exercises using SoniBand, a wearable that augments movement through different sounds. The first study involved physically active participants and served to obtain a nuanced understanding of the sonifications’ impact. The second, in the home of physically inactive participants, served to identify which effects could support PA adherence. Our findings show that movement sonification based on metaphors led to changes in body perception (e.g., feeling strong) and PA (e.g., repetitions) in both populations, but effects could differ according to the existing PA level. We discuss principles for metaphor-based sonification design to foster PA.

Authors
Judith Ley-Flores
Universidad Carlos III de Madrid, Leganes, Madrid, Spain
Laia Turmo Vidal
Uppsala University, Uppsala, Sweden
Nadia Berthouze
University College London, London, United Kingdom
Aneesha Singh
University College London, London, United Kingdom
Frederic Bevilacqua
STMS IRCAM-CNRS-Sorbonne Université, Paris, France
Ana Tajadura-Jiménez
Universidad Carlos III de Madrid / University College London, Madrid / London, Spain
DOI

10.1145/3411764.3445558

Paper URL

https://doi.org/10.1145/3411764.3445558

Video
Investigating the Impact of Real-World Environments on the Perception of 2D Visualizations in Augmented Reality
Abstract

In this work we report on two comprehensive user studies investigating the perception of Augmented Reality (AR) visualizations influenced by real-world backgrounds. Since AR is an emerging technology, it is important to also consider productive use cases, which is why we chose an exemplary and challenging Industry 4.0 environment. Our basic perceptual research focuses on both the visual complexity of backgrounds as well as the influence of a secondary task. In contrast to our expectation, data from our 34 study participants indicate that the background has far less influence on the perception of AR visualizations. Moreover, we observed a mismatch between measured and subjectively reported performance. We discuss the importance of the background and recommendations for visual real-world augmentations. Overall, our results suggest that AR can be used in many visually challenging environments without losing the ability to productively work with the visualizations shown.

Authors
Marc Satkowski
Technische Universität Dresden, Dresden, Germany
Raimund Dachselt
Technische Universität Dresden, Dresden, Germany
DOI

10.1145/3411764.3445330

Paper URL

https://doi.org/10.1145/3411764.3445330

Video