This study session has ended. Thank you for participating.
Personal fabrication is made more accessible through repositories like Thingiverse, as they replace modeling with retrieval. However, they require users to translate spatial requirements to keywords, which paints an incomplete picture of physical artifacts: proportions or morphology are non-trivially encoded through text only. We explore a vision of in-situ spatial search for (future) physical artifacts, and present ShapeFindAR, a mixed-reality tool to search for 3D models using in-situ sketches blended with textual queries. With ShapeFindAR, users search for geometry, and not necessarily precise labels, while coupling the search process to the physical environment (e.g., by sketching in-situ, extracting search terms from objects present, or tracing them). We developed ShapeFindAR for HoloLens 2, connected to a database of 3D-printable artifacts. We specify in-situ spatial search, describe its advantages, and present walkthroughs using ShapeFindAR, which highlight novel ways for users to articulate their wishes, without requiring complex modeling tools or profound domain knowledge.
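The abstract does not specify how ShapeFindAR combines sketched geometry with textual terms during retrieval; the minimal Python sketch below illustrates one plausible blended ranking, assuming each database entry carries a precomputed shape descriptor and a set of text labels. The names, the cosine-based similarity, and the weighting are illustrative assumptions, not the tool's actual implementation.

```python
# Hypothetical blended sketch + text query; ShapeFindAR's real retrieval
# pipeline is not described in the abstract, so every name and weight here
# is an illustrative assumption.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Candidate:
    name: str
    descriptor: np.ndarray                      # assumed precomputed shape signature
    labels: set[str] = field(default_factory=set)


def shape_similarity(sketch: np.ndarray, model: np.ndarray) -> float:
    """Cosine similarity between the in-situ sketch descriptor and a stored model descriptor."""
    denom = np.linalg.norm(sketch) * np.linalg.norm(model)
    return float(sketch @ model / denom) if denom else 0.0


def blended_score(sketch: np.ndarray, query_terms: set[str],
                  candidate: Candidate, text_weight: float = 0.3) -> float:
    """Combine geometric similarity with simple keyword overlap (weights are illustrative)."""
    geometry = shape_similarity(sketch, candidate.descriptor)
    text = len(query_terms & candidate.labels) / len(query_terms) if query_terms else 0.0
    return (1 - text_weight) * geometry + text_weight * text


def search(sketch: np.ndarray, query_terms: set[str],
           database: list[Candidate], top_k: int = 5) -> list[Candidate]:
    """Rank the database by the blended score and return the best matches."""
    ranked = sorted(database,
                    key=lambda c: blended_score(sketch, query_terms, c),
                    reverse=True)
    return ranked[:top_k]
```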
In this paper, we investigate the use of shape-change for interaction with sound zones. A core challenge in designing interaction with sound zone systems is to support users' understanding of the unique spatial properties of sound zones. Shape-changing interfaces present new opportunities for addressing this challenge, and we report a structured investigation of their potential. We leveraged the knowledge of 12 sound experts to define a set of basic shapes and movements. We then constructed a prototype and conducted an elicitation study with 17 novice users, investigating the experience of these shapes and movements. Our findings show that physical visualizations of sound zones can be useful in supporting users' experience of sound zones. We present a framework of four basic pattern categories that prompt different sound zone experiences and outline further research directions for shape-change in supporting sound zone interaction.
Sound zone technology allows multiple simultaneous sound experiences for multiple people in the same room without interference. However, given the inherently invisible and intangible nature of sound zones, it is unclear how to communicate the position and size of sound zones to users. This paper compares two visualisation techniques, absolute visualisation and relational visualisation, against a baseline condition without visualisations. In a within-subject experiment (N=33), we evaluated these techniques for effectiveness and efficiency across four representative tasks. Our findings show that the absolute and relational visualisation techniques increase effectiveness in multi-user tasks but not in single-user tasks. Efficiency improved for all tasks when visualisations were used. We discuss the potential of visualisations for sound zones and highlight future research opportunities for sound zone interaction.
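The abstract reports effectiveness and efficiency per condition and task but does not describe the analysis; the short sketch below shows one conventional way such metrics could be aggregated, assuming a long-format table of trials with success and completion-time columns. The column names, data layout, and pandas-based aggregation are assumptions for illustration, not the authors' analysis pipeline.

```python
# Illustrative aggregation of effectiveness (success rate) and efficiency
# (task completion time) per condition and task; data layout and column
# names are assumed, not taken from the study.
import pandas as pd

# One row per participant x condition x task trial.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "condition":   ["baseline", "absolute", "relational"] * 2,
    "task":        ["multi-user"] * 6,
    "success":     [0, 1, 1, 1, 1, 1],          # 1 = task solved correctly
    "time_s":      [42.0, 28.5, 30.1, 39.2, 25.7, 27.3],
})

summary = (trials
           .groupby(["condition", "task"])
           .agg(effectiveness=("success", "mean"),   # proportion of successful trials
                efficiency=("time_s", "mean"))       # mean completion time in seconds
           .reset_index())
print(summary)
```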
Social Media (SM) has shown that we adapt our communication and disclosure behaviors to available technological opportunities. Head-mounted Augmented Reality (AR) will soon allow us to effortlessly display the information we have disclosed, no longer isolated from our physical presence (e.g., on a smartphone) but visually attached to the human body. In this work, we explore how the medium (AR vs. smartphone), our role (being augmented vs. augmenting), and characteristics of information types (e.g., level of intimacy, self-disclosed vs. non-self-disclosed) impact users' comfort when displaying personal information. Conducting an online survey (N=148), we found that AR technology and being augmented negatively impacted this comfort. Additionally, we report that AR mitigated the effects of information characteristics compared to their effects on smartphones. In light of our results, we argue that information augmentation should be built on consent and openness, focusing more on the comfort of the augmented than on technological possibilities.