This study session has ended. Thank you for participating.
Augmented Reality (AR) and Virtual Reality (VR) devices are becoming easier to access and use, but the barrier to entry for creating AR/VR applications remains high. Although the recent spike in HCI research on novel AR/VR tools is promising, we lack insight into how AR/VR creators use today's state-of-the-art authoring tools and the types of challenges they face. We interviewed 21 AR/VR creators, whom we grouped into hobbyists, domain experts, and professional designers. Despite their varied motivations and skillsets, they described similar challenges in designing and building AR/VR applications. We synthesize eight key barriers that AR/VR creators face today, ranging from prototyping initial experiences, to dealing with "the many unknowns" during implementation, to difficulties in testing applications. Based on our analysis, we discuss the importance of considering end-user developers as a growing population of AR/VR creators, how learning opportunities can be built into AR/VR tools, and the need for AR/VR toolchains that integrate debugging and testing.
Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction in a digitally augmented physical world. Consequently, a major part of the interaction with such devices will happen on the go, calling for techniques that let users interact while walking. In this paper, we explore lateral shifts of the walking path as a hands-free input modality. The available input options are visualized as lanes on the ground, parallel to the user's walking path. Users select an option by shifting their walking path sideways into the respective lane. We contribute the results of a controlled experiment with 18 participants, confirming the viability of our approach for fast, accurate, and joyful interactions. Further, based on the findings of the controlled experiment, we present three example applications.
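The paper does not publish code, but the core mapping it describes (lateral offset to a lane, confirmed after a dwell) is simple to sketch. The following is a minimal illustration, assuming a fixed lane width, a dwell-time threshold, and lanes on one side of the path only; all names and parameter values are placeholders, not taken from the paper.

```python
import time

LANE_WIDTH_M = 0.5   # assumed width of each on-ground option lane
DWELL_S = 1.0        # assumed dwell time before a lane counts as selected


class LaneSelector:
    """Maps the user's tracked lateral offset from the original walking
    path to one of several option lanes."""

    def __init__(self, num_options):
        self.num_options = num_options
        self._current = None      # lane the user is currently in
        self._entered_at = None   # when the user entered that lane

    def update(self, lateral_offset_m, now=None):
        """Feed one tracking sample; returns a lane index (1..num_options)
        once the user has dwelt in a non-zero lane for DWELL_S seconds,
        else None. Lane 0 is the original walking path."""
        now = time.monotonic() if now is None else now
        # Quantize the offset into a lane index; for simplicity, lanes lie
        # only on one side of the path (positive offsets).
        lane = min(self.num_options, max(0, round(lateral_offset_m / LANE_WIDTH_M)))
        if lane != self._current:
            # Entered a new lane: restart the dwell timer.
            self._current, self._entered_at = lane, now
            return None
        if lane != 0 and now - self._entered_at >= DWELL_S:
            return lane  # selection confirmed
        return None


# Usage: call update() each frame with the tracked lateral offset.
selector = LaneSelector(num_options=3)
choice = selector.update(lateral_offset_m=0.9)  # None until the dwell elapses
```

The dwell threshold here is a guess at one plausible way to avoid accidental selections while the user naturally sways; the actual confirmation mechanism used in the study may differ.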
Auditory headsets capable of actively or passively intermixing real and virtual sounds are, in part, acoustically transparent. This paper explores the consequences of acoustic transparency, both for the perception of virtual audio content given the presence of a real-world auditory backdrop, and more broadly for facilitating a wearable, personal, private, always-available soundspace. We experimentally compare passive acoustically transparent and active noise-cancelling orientation-tracked auditory headsets across a range of content types, both indoors and outdoors, for validity. Our results show differences in presence, realness, and externalization for select content types. Via interviews and a survey, we discuss attitudes toward acoustic transparency (e.g., being perceived as safer) and the potential shifts in audio usage that adoption might precipitate, and reflect on how such headsets and experiences fit within the area of Mixed Reality.
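"Orientation-tracked" here means the headset keeps virtual sources anchored in the world as the head turns. As a rough illustration of that idea only (the paper describes a perception study, not this code), here is a minimal sketch using an assumed equal-power pan law; the function name and conventions are invented for this example.

```python
import math


def stereo_gains(source_azimuth_deg, head_yaw_deg):
    """Return (left, right) gains for a world-anchored virtual source,
    given its world azimuth and the tracked head yaw (both in degrees,
    clockwise from north)."""
    # Azimuth of the source relative to where the head is pointing,
    # wrapped into [-180, 180).
    rel = math.radians((source_azimuth_deg - head_yaw_deg + 180) % 360 - 180)
    # Equal-power panning, clamped at hard left/right beyond +/-90 degrees.
    pan = max(-1.0, min(1.0, rel / (math.pi / 2)))  # -1 = hard left, +1 = hard right
    angle = (pan + 1) * math.pi / 4                 # 0 .. pi/2
    return math.cos(angle), math.sin(angle)


# Example: a source due north (0 deg) heard while facing east (90 deg)
# ends up fully on the listener's left.
left, right = stereo_gains(0.0, 90.0)
assert left > right
```

Real systems would use head-related transfer functions rather than simple panning, but the world-anchoring step (subtracting the tracked yaw) is the same.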
The accessibility of tools for modelling artifacts is one of the core driving factors for the adoption of personal fabrication. Consequently, model repositories like Thingiverse have become important tools in (novice) makers' processes: they allow makers to shorten or even omit the design process, offloading much of the effort to other parties. However, steps like the measurement of surrounding constraints (e.g., clearance), which exist only inside the user's environment, cannot be similarly outsourced. We propose Mix&Match, a mixed-reality system that allows users to browse model repositories, preview the models in situ, and adapt them to their environment in a simple and immediate fashion. Mix&Match aims to provide users with CSG operations that can be based on both virtual and real geometry. We present interaction patterns and scenarios for Mix&Match, arguing for the combination of mixed reality and model repositories. This enables almost modelling-free personal fabrication for both novice and expert makers.
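To make the idea of CSG over virtual and real geometry concrete, here is a minimal sketch of one such operation using the trimesh Python library: subtracting a mesh reconstructed from the environment from a downloaded model. This is not the Mix&Match implementation; the file names are placeholders, the clearance step is a crude stand-in for a true mesh offset, and trimesh's difference() requires a boolean backend (e.g., manifold3d) to be installed.

```python
import trimesh

# A model fetched from a repository such as Thingiverse (placeholder path).
downloaded = trimesh.load("bracket.stl")

# Real geometry, e.g., a pipe reconstructed by the headset's environment
# scan and exported as a mesh (placeholder path).
scanned_pipe = trimesh.load("scanned_pipe.stl")

# Approximate a clearance margin by inflating the scanned mesh ~2% about
# its centroid; a proper implementation would compute a real mesh offset.
clearance = scanned_pipe.copy()
center = clearance.centroid
clearance.apply_translation(-center)
clearance.apply_scale(1.02)
clearance.apply_translation(center)

# CSG difference: carve the (inflated) real pipe out of the downloaded
# bracket so the printed part fits around it.
result = downloaded.difference(clearance)
result.export("bracket_fitted.stl")
```

In an in-situ system like the one described, the "scanned" operand would come from the headset's reconstruction of the room rather than from a file, which is exactly the measurement step the abstract argues cannot be outsourced to the repository.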