Modern vehicles use AI and increasingly sophisticated sensor suites to improve Advanced Driving Assistance Systems (ADAS) and support automated driving capabilities. Head-Up Displays (HUDs) provide an opportunity to visually inform drivers about the vehicle's perception and interpretation of the driving environment. One approach to HUD design is to reveal the vehicle's full contextual understanding to drivers, though it is not clear whether the benefits of additional information outweigh the drawbacks of added complexity, or whether this balance holds across drivers. We designed and tested an Augmented Reality (AR) HUD in an online study (N=298), focusing on the influence of HUD visualizations on drivers' situation awareness and perceptions. Participants viewed two driving scenes with one of three HUD conditions. Results were nuanced: situation awareness declined with increasing driving context complexity and, contrary to expectation, also declined with the presence of a HUD compared to no HUD. Significant differences were found by varying HUD complexity, which led us to explore different characterizations of complexity, including counts of scene items, item categories, and illuminated pixels. Our analysis finds that driving style interacts with driving context and HUD complexity, warranting further study.
https://doi.org/10.1145/3411764.3445575
Cinematic Virtual Reality (CVR), or 360° video, engages users in immersive viewing experiences. However, as users watch one part of the 360° view, they necessarily miss events happening in other parts of the sphere, making fear of missing out (FOMO) unavoidable; users can, however, also experience the joy of missing out (JOMO). In a repeated measures, mixed methods design, we examined FOMO, JOMO, and sense of presence across two viewings of a 360° film using a head-mounted display. We found that users experienced both FOMO, caused by their awareness of parallel events in the spherical view, and JOMO. FOMO did not compromise viewers' sense of presence, and FOMO also decreased in the second viewing session, while JOMO remained constant. The findings suggest that FOMO and JOMO can be two integral qualities of an immersive video viewing experience and that FOMO may not be as negative a factor as previously thought.
https://doi.org/10.1145/3411764.3445183
Precise emotion ground truth labels for 360° virtual reality (VR) video watching are essential for fine-grained predictions under varying viewing behavior. However, current annotation techniques either rely on post-stimulus discrete self-reports, or on real-time, continuous emotion annotations (RCEA), but only for desktop/mobile settings. We present RCEA for 360° VR videos (RCEA-360VR), where we evaluate in a controlled study (N=32) the usability of two peripheral visualization techniques: HaloLight and DotSize. We furthermore develop a method that considers head movements when fusing labels. Using physiological, behavioral, and subjective measures, we show that (1) both techniques do not increase users' workload or sickness, nor break presence, (2) our continuous valence and arousal annotations are consistent with discrete within-VR and original stimuli ratings, and (3) users exhibit high similarity in viewing behavior, where fused ratings perfectly align with intended labels. Our work contributes usable and effective techniques for collecting fine-grained viewport-dependent emotion labels in 360° VR.
https://doi.org/10.1145/3411764.3445487
Mixed Reality has been shown to enhance remote guidance and is especially well-suited for physical tasks. Conversations during these tasks are heavily anchored around task objects and their spatial relationships in the real world, making referencing – the ability to refer to an object in a way that is understood by others – a crucial process that warrants explicit support in collaborative Mixed Reality systems. This paper presents a 2×2 mixed factorial experiment that explores the effects of providing spatial information and system-generated guidance to task objects. It also investigates the effects of such guidance on the remote collaborator's need for spatial information. Our results show that guidance increases performance and communication efficiency while reducing the need for spatial information, especially in unfamiliar environments. Our results also demonstrate a reduced need for remote experts to be in immersive environments, making guidance more scalable, and expertise more accessible.
https://doi.org/10.1145/3411764.3445246
Graphical User Interfaces present commands at particular locations, arranged in menus, toolbars, and ribbons. One hallmark of expertise with a GUI is that experts know the locations of commonly used commands and can find them quickly, without searching. Although GUIs have been studied for many years, little is known about how this spatial location memory develops or how designers can make interfaces more memorable. One of the main ways that people remember locations in the real world is through landmarks, so we carried out a study to investigate how users remember commands and navigate in four common applications (Word, Facebook, Reader, and Photoshop). Our study revealed that people rely strongly on landmarks that are readily available in the interface (e.g., layout, corners, and edges) to orient themselves and remember commands. We provide new evidence that landmarks can aid spatial memory and expertise development with an interface, as well as guidelines for designers to improve the memorability of future GUIs.
https://doi.org/10.1145/3411764.3445050
Peripheral breathing guides – tools designed to influence breathing while completing another primary task – have been proposed to provide physiological benefits during information work. While research has shown that guides can influence breathing rates under ideal conditions, there is little evidence that they can lead to underlying markers of physiological benefit under interrupted work conditions. Further, even if guides are effective during work tasks, it is unclear how personal and workplace factors affect people's willingness to adopt them for everyday use. In this paper, we present the results of a comparative, mixed-methods study of five different peripheral breathing guides. Our findings show that peripheral breathing guides are viable and can provide physiological markers of benefit during interrupted work. Further, we show that guides are effective – even when use is intermittent due to workplace distractions. Finally, we contribute guidelines to support the design of breathing guides for everyday information work.
https://doi.org/10.1145/3411764.3445388
Virtual Reality (VR) is becoming increasingly popular in both the entertainment and professional domains. Behavioral biometrics have recently been investigated as a means to continuously and implicitly identify users in VR. Applications in VR can specifically benefit from this, for example, to adapt virtual environments and user interfaces as well as to authenticate users. In this work, we conduct a lab study (N=16) to explore how accurately users can be identified during two task-driven scenarios based on their spatial movement. We show that an identification accuracy of up to 90% is possible across sessions recorded on different days. Moreover, we investigate the role of users' physiology in behavioral biometrics by virtually altering and normalizing their body proportions. We find that body normalization in general increases the identification rate, in some cases by up to 38%; hence, it improves the performance of identification systems.
https://doi.org/10.1145/3411764.3445528
Audiovisual recordings of user studies and interviews provide important data in qualitative HCI research. Even when a textual transcription is available, researchers frequently turn to these recordings due to their rich information content. However, the temporal, unstructured nature of audiovisual recordings makes them less efficient to work with than text. Through interviews and a survey, we explored how HCI researchers work with audiovisual recordings. We investigated researchers' transcription and annotation practice, their overall analysis workflow, and the prevalence of direct analysis of audiovisual recordings. We found that a key task was locating and analyzing inspectables: interesting segments in recordings. Since locating inspectables can be time-consuming, participants look for detectables: visual or auditory cues that indicate the presence of an inspectable. Based on our findings, we discuss the potential for automation in locating detectables in qualitative audiovisual analysis.
https://doi.org/10.1145/3411764.3445458
Avatars in virtual reality (VR) enable embodied experiences and induce the Proteus effect – a shift in behavior and attitude to mimic one's digital representation. Previous work found that avatars associated with physical strength can decrease users' perceived exertion when performing physical tasks. However, it is unknown whether an avatar's appearance can also influence the user's physiological response to exercises. Therefore, we conducted an experiment with 24 participants to investigate the effect of avatars' athleticism on heart rate and perceived exertion while cycling in VR following a standardized protocol. We found that the avatars' athleticism has a significant and systematic effect on users' heart rate and perceived exertion. We discuss potential moderators such as body ownership and users' level of fitness. Our work contributes to the emerging area of VR exercise systems.
https://doi.org/10.1145/3411764.3445160
Smartphone users do not deal with notifications strictly in the order they are displayed, but sometimes read them from the middle, suggesting a mismatch between current systems' display order and users' needs. We therefore used mixed methods to investigate 34 smartphone users' desired notification display order and related it to users' self-reported order of attendance. Classifying notifications using these two orders as dimensions, we obtained seven types of notifications, which helped us not only highlight their distinct attributes but also understand their implied roles, as well as the implied meaning of display orders. This is especially manifested in our identification of three main mismatches between the two orders. Qualitative findings reveal several meanings that participants attached to particular positions when arranging notifications. We offer design implications for notification systems, including a call for a two-dimensional notification layout to support the multi-purpose roles of smartphone notifications we identified.
https://doi.org/10.1145/3411764.3445384
Negative body perceptions are a major predictor of physical inactivity, a serious health concern. Sensory feedback can be used to alter such body perception; movement sonification, in particular, has been suggested to affect body perception and levels of physical activity (PA) in inactive people. We investigated how metaphorical sounds impact body perception and PA. We report two qualitative studies centered on performing different strengthening/flexibility exercises using SoniBand, a wearable that augments movement through different sounds. The first study involved physically active participants and served to obtain a nuanced understanding of the sonifications’ impact. The second, in the home of physically inactive participants, served to identify which effects could support PA adherence. Our findings show that movement sonification based on metaphors led to changes in body perception (e.g., feeling strong) and PA (e.g., repetitions) in both populations, but effects could differ according to the existing PA-level. We discuss principles for metaphor-based sonification design to foster PA.
https://doi.org/10.1145/3411764.3445558
In this work we report on two comprehensive user studies investigating how real-world backgrounds influence the perception of Augmented Reality (AR) visualizations. Since AR is an emerging technology, it is important to also consider productive use cases, which is why we chose an exemplary and challenging Industry 4.0 environment. Our basic perceptual research focuses on both the visual complexity of backgrounds and the influence of a secondary task. Contrary to our expectation, data from our 34 study participants indicate that the background has far less influence on the perception of AR visualizations than anticipated. Moreover, we observed a mismatch between measured and subjectively reported performance. We discuss the importance of the background and give recommendations for visual real-world augmentations. Overall, our results suggest that AR can be used in many visually challenging environments without losing the ability to work productively with the visualizations shown.
https://doi.org/10.1145/3411764.3445330