With the increasing spread of AR head-mounted displays suitable for everyday use, interaction with information becomes ubiquitous, even while walking. However, this requires constant shifts of attention between walking and interacting with virtual information to perform both tasks adequately. Accordingly, we as a community need a thorough understanding of the mutual influences of walking and interacting with digital information to design safe yet effective interactions. Thus, we systematically investigate the effects of different AR anchors (hand, head, torso) and task difficulties on user experience and performance. We engage participants (n=26) in a dual-task paradigm involving a visual working memory task while walking. We assess the impact of dual-tasking on both virtual and walking performance, as well as subjective evaluations of mental and physical load. Our results show that head-anchored AR content affected walking the least while allowing for fast and accurate virtual task interaction, whereas hand-anchored content increased reaction times and workload.
Augmented Reality (AR) promises to enhance daily office activities involving numerous textual documents, slides, and spreadsheets by expanding workspaces and enabling more direct interaction. However, there is a lack of systematic understanding of how knowledge workers can manage multiple documents and organize, explore, and compare them in AR environments. Therefore, we conducted a user-centered design study (N = 21) using predefined spatial document layouts in AR to elicit interaction techniques, resulting in 790 observation notes. Thematic analysis identified various interaction methods for aggregating, distributing, transforming, inspecting, and navigating document collections. Based on these findings, we propose a design space and distill design implications for AR document arrangement systems, such as enabling body-anchored storage, facilitating layout spreading and compressing, and designing interactions for layout transformation. To demonstrate their use, we developed a rapid prototyping system and illustrate three envisioned scenarios. With this, we aim to inspire the design of future immersive offices.
Text entry for extended reality (XR) is far from perfect, and a variety of text entry techniques (TETs) have been proposed to fit various contexts of use. However, comparing TETs remains challenging due to the lack of a consolidated collection of techniques and limited understanding of how the interaction attributes of a technique (e.g., presence of visual feedback) impact user performance. To address these gaps, this paper examines the current landscape of XR TETs by creating a database of 176 different techniques. We analyze this database to highlight trends in the design of these techniques, the metrics used to evaluate them, and how various interaction attributes impact these metrics. We discuss implications for future techniques and present TEXT: Text Entry for XR Trove, an interactive online tool for navigating our database.
Virtual reality (VR) allows users to embody avatars. According to the Proteus effect, an avatar's visual appearance can influence users' behavior and perception. Recent work suggests that athletic avatars decrease perceptual and physiological responses during VR exercise. However, such effects can fail to occur when users do not experience avatar ownership and identification. While customized avatars increase body ownership and identification, it is unclear whether they strengthen the Proteus effect. We conducted a study with 24 participants to determine the effects of athletic and non-athletic avatars that were either customized or randomly assigned. We developed a customization editor that lets users create their own avatars. We found that customized avatars reduced perceived exertion. We also found that athletic avatars decreased heart rate while holding weights, but only when they were customized. These results indicate that customized avatars can positively influence users during physical exertion. We discuss the use of avatar customization in VR exercise systems.
This paper examines the levels of exclusion encountered by disabled and older users of consumer-level VR and AR technology and identifies strategies that people with diverse access needs develop to circumvent barriers to use. First, we estimate exclusion rates for a selection of nine immersive VR and AR experiences, computed using population statistics data for the United Kingdom (UK). We then present an empirical lab-based study evaluating the usability of the same VR and AR experiences. The study involved 60 UK-based participants with varying access needs, and its results were used to calculate empirical exclusion rates. Both the estimated and empirical exclusion rates reveal high levels of exclusion, reaching 100% for the more complex experiences in the study. However, multiple participants overcame usability barriers and completed experiences through provided assistance and self-initiated adaptations, suggesting that future VR and AR can become more inclusive if designed to counter these barriers.
Automated Urban Air Mobility (UAM) can improve passenger transportation and reduce congestion, but its success depends on passenger trust. While initial research addresses passengers' information needs, questions remain about how to simulate air taxi flights and how these simulations impact users and interface requirements.
We conducted a between-subjects study (N=40) examining the influence of motion fidelity in virtual-reality-simulated air taxi flights on user effects and interface design. Our study compared simulations with and without motion cues using a 3-degrees-of-freedom motion chair. To optimize the interface design across six objectives, such as trust and mental demand, we used multi-objective Bayesian optimization to identify the most effective design trade-offs.
Our results indicate that motion fidelity decreases users' trust, understanding, and acceptance, highlighting the need to consider motion fidelity in future UAM studies that aim for realism. However, we found minimal evidence for either differences or equivalence between the optimized interface designs, suggesting a need for personalized interface designs.
Mid-air text entry in mixed reality (MR) headsets has shown promise but remains less efficient than traditional input methods. While research has focused on improving typing performance, the mechanics of mid-air gesture typing, especially eye-hand coordination, are less understood. This paper investigates the visuomotor coordination of mid-air gesture keyboards through a user study (n=16) comparing gesture typing on a tablet and in mid-air. Through an expert task, we demonstrate that users can achieve comparable text input performance. Our in-depth analysis reveals significant differences in eye-hand coordination patterns between gesture typing on a tablet and in mid-air. Mid-air gesture typing demands almost all of the user's visual attention on the keyboard area and more consistent eye-hand synchronization to compensate for the increased motor and cognitive demands in the absence of physical boundaries. These insights provide important implications for the design of more efficient text input methods.