The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment
Rebecca Kleinberger (Northeastern University, Boston, Massachusetts, United States), Jennifer Cunha (Parrot Kindergarten, Jupiter, Florida, United States), Megha M. Vemuri (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States), Ilyena Hirskyj-Douglas (The University of Glasgow, Glasgow, United Kingdom)
Over 20 million parrots are kept as pets in the US, often lacking appropriate stimuli to meet their high social, cognitive, and emotional needs. After reviewing bird perception and agency literature, we developed an approach to allow parrots to engage in video-calling other parrots. Following a pilot experiment and expert survey, we ran a three-month study with 18 pet birds to evaluate the potential value and usability of a parrot-parrot video-calling system. We assessed the system in terms of perception, agency, engagement, and overall perceived benefits. With 147 bird-triggered calls, our results show that 1) every bird used the system, 2) most birds exhibited high motivation and intentionality, and 3) all caretakers reported perceived benefits, some arguably life-transformative, such as learning to forage or even to fly by watching others. We report on individual insights and propose considerations regarding ethics and the potential of parrot video-calling for enrichment.
Surface I/O: Creating Devices with Functional Surface Geometry for Haptics and User Input
Yuran Ding (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Craig Shultz (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Surface I/O is a novel interface approach that functionalizes the exterior surface of devices to provide haptic and touch sensing without dedicated mechanical components. Achieving this requires a unique combination of surface features spanning the macro-scale (5 cm–1 mm), meso-scale (1 mm–200 μm), and micro-scale (<200 μm). This approach simplifies interface creation, allowing designers to iterate on form geometry, haptic feeling, and sensing functionality without the limitations of mechanical mechanisms. We believe this can contribute to the concept of "invisible ubiquitous interactivity at scale", where the simplicity and easy implementation of the technique allows it to blend with objects around us. While we prototyped our designs using 3D printers and laser cutters, our technique is applicable to mass production methods, including injection molding and stamping, enabling passive goods with new levels of interactivity.
Horse as Teacher: How human-horse interaction informs human-robot interaction
Eakta Jain (University of Florida, Gainesville, Florida, United States), Christina Gardner-McCune (University of Florida, Gainesville, Florida, United States)
Robots are entering our lives and workplaces as companions and teammates. Though much research has been done on how to interact with robots, teach robots and improve task performance, an open frontier for HCI/HRI research is how to establish a working relationship with a robot in the first place. Studies that explore the early stages of human-robot interaction are an emerging area of research. Simultaneously, there is resurging interest in how human-animal interaction could inform human-robot interaction. We present a first examination of early stage human-horse interaction through the lens of human-robot interaction, thus connecting these two areas. Following Strauss’ approach, we conduct a thematic analysis of data from three sources gathered over a year of field work: observations, interviews and journal entries. We contribute design guidelines based on our analyses and findings.
Affective Profile Pictures: Exploring the Effects of Changing Facial Expressions in Profile Pictures on Text-Based Communication
Chi-Lan Yang (The University of Tokyo, Tokyo, Japan), Shigeo Yoshida (The University of Tokyo, Tokyo, Japan), Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan), Takuji Narumi (The University of Tokyo, Tokyo, Japan), Naomi Yamashita (NTT, Keihanna, Japan)
When receiving text messages from unacquainted colleagues in fully remote workplaces, insufficient mutual understanding and limited social cues can lead people to misinterpret the tone of a message, which further influences their impression of remote colleagues. Emojis are commonly used to support expressive communication; however, people seldom use emojis before they become acquainted with each other. Hence, we explored how changing facial expressions in profile pictures could serve as an alternative channel for communicating socio-emotional cues. Through an online controlled experiment with 186 participants, we established that changing the facial expressions in profile pictures can influence receivers' impressions of the sender and the perceived valence of neutral messages. Furthermore, pairing incongruent profile pictures with positive messages negatively affected the interpretation of message valence, but had little effect on negative messages. We discuss the implications of affective profile pictures for supporting text-based communication.
Short-Form Videos Degrade Our Capacity to Retain Intentions: Effect of Context Switching On Prospective Memory
Francesco Chiossi (LMU Munich, Munich, Germany), Luke Haliburton (LMU Munich, Munich, Germany), Changkun Ou (LMU Munich, Munich, Germany), Andreas Martin Butz (LMU Munich, Munich, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany)
Social media platforms use short, highly engaging videos to catch users' attention. While the short-form video feeds popularized by TikTok are rapidly spreading to other platforms, we do not yet understand their impact on cognitive functions. We conducted a between-subjects experiment (N=60) investigating the impact of engaging with TikTok, Twitter, and YouTube while performing a Prospective Memory task (i.e., executing a previously planned action). The study required participants to remember intentions over interruptions. We found that the TikTok condition significantly degraded users' performance in this task. As none of the other conditions (Twitter, YouTube, no activity) had a similar effect, our results indicate that the combination of short videos and rapid context switching impairs intention recall and execution. We contribute a quantified understanding of the effect of social media feed format on Prospective Memory and outline consequences for media technology designers who seek not to harm users' memory and wellbeing.
Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study
Perttu Hämäläinen (Aalto University, Espoo, Finland), Mikke Tavast (Aalto University, Espoo, Finland), Anton Kunnari (University of Helsinki, Helsinki, Finland)
Collecting data is one of the bottlenecks of Human-Computer Interaction (HCI) research. Motivated by this, we explore the potential of large language models (LLMs) in generating synthetic user research data. We use OpenAI’s GPT-3 model to generate open-ended questionnaire responses about experiencing video games as art, a topic not tractable with traditional computational user models. We test whether synthetic responses can be distinguished from real responses, analyze errors of synthetic data, and investigate content similarities between synthetic and real data. We conclude that GPT-3 can, in this context, yield believable accounts of HCI experiences. Given the low cost and high speed of LLM data generation, synthetic data should be useful in ideating and piloting new experiments, although any findings must obviously always be validated with real data. The results also raise concerns: if employed by malicious users of crowdsourcing services, LLMs may make crowdsourcing of self-report data fundamentally unreliable.
Reality Rifts: Wonder-ful Interfaces by Disrupting Perceptual Causality
Lung-Pan Cheng (National Taiwan University, Taipei, Taiwan), Yi Chen (National Taiwan University, Taipei, Taiwan), Yi-Hao Peng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Christian Holz (ETH Zürich, Zurich, Switzerland)
Reality Rifts are interfaces between physical and virtual reality, where incoherent observations of physical behavior lead users to imagine comprehensive and plausible end-to-end dynamics. Reality Rifts emerge in interactive physical systems that lack one or more components central to their operation, yet where the physical end-to-end interaction persists with plausible outcomes. Even in the presence of a Reality Rift, users can still interact with a system—much like they would with the unaltered and complete counterpart—leading them to implicitly infer the existence and imagine the behavior of the missing components from observable phenomena and outcomes. Dynamic systems with Reality Rifts thus trigger doubt, curiosity, and rumination—a sense of wonder. In this paper, we explore how interactive systems can elicit and guide the user's imagination by integrating Reality Rifts. We outline the design process for opening a Reality Rift in interactive physical systems, describe the resulting design space, and explore it through six characteristic prototypes. To understand to what extent and with which qualities these prototypes indeed induce a sense of wonder during an interaction, we evaluated Reality Rifts in a field deployment with 50 participants. We discuss participants' behavior and derive factors for the implementation of future wonder-ful experiences.
CiteSee: Augmenting Citations in Scientific Papers with Persistent and Personalized Historical Context
Joseph Chee Chang (Allen Institute for AI, Seattle, Washington, United States), Amy X. Zhang (University of Washington, Seattle, Washington, United States), Jonathan Bragg (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Andrew Head (University of Pennsylvania, Philadelphia, Pennsylvania, United States), Kyle Lo (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Doug Downey (Allen Institute for Artificial Intelligence, Seattle, Washington, United States), Daniel S. Weld (Allen Institute for Artificial Intelligence, Seattle, Washington, United States)
When reading a scholarly article, inline citations help researchers contextualize the current article and discover relevant prior work. However, it can be challenging to prioritize and make sense of the hundreds of citations encountered during literature reviews. This paper introduces CiteSee, a paper reading tool that leverages a user's publishing, reading, and saving activities to provide personalized visual augmentations and context around citations. First, CiteSee connects the current paper to familiar contexts by surfacing known citations a user had cited or opened. Second, CiteSee helps users prioritize their exploration by highlighting relevant but unknown citations based on saving and reading history. We conducted a lab study that suggests CiteSee is significantly more effective for paper discovery than three baselines. A field deployment study shows CiteSee helps participants keep track of their explorations and leads to better situational awareness and increased paper discovery via inline citation when conducting real-world literature reviews.
Predicting Gaze-based Target Selection in Augmented Reality Headsets based on Eye and Head Endpoint Distributions
Yushi Wei (Xi'an Jiaotong-Liverpool University, Suzhou, China), Rongkai Shi (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China), Difeng Yu (University of Melbourne, Melbourne, Victoria, Australia), Yihong Wang (Xi'an Jiaotong-Liverpool University, Suzhou, China), Yue Li (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China), Lingyun Yu (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China), Hai-Ning Liang (Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China)
Target selection is a fundamental task in interactive Augmented Reality (AR) systems. Predicting the intended target of selection in such systems can provide users with a smooth, low-friction interaction experience. Our work aims to predict gaze-based target selection in AR headsets with eye and head endpoint distributions, which describe the probability distribution of eye and head 3D orientation when a user triggers a selection input. We first conducted a user study to collect users’ eye and head behavior in a gaze-based pointing selection task with two confirmation mechanisms (air tap and blinking). Based on the study results, we then built two models: a unimodal model using only eye endpoints and a multimodal model using both eye and head endpoints. Results from a second user study showed that the pointing accuracy is improved by approximately 32% after integrating our models into gaze-based selection techniques.
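To make the idea of endpoint-distribution-based prediction concrete, here is a minimal sketch that scores candidate targets by the Gaussian likelihood of an observed eye (and optionally head) endpoint. The target structure, fitted parameters, and fusion weight are our assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of endpoint-distribution target prediction.
# Per-target Gaussian parameters (mu, cov) are assumed to be fitted from
# calibration data such as that collected in the paper's first study.
import numpy as np
from scipy.stats import multivariate_normal

def predict_target(eye_endpoint, head_endpoint, targets, head_weight=0.5):
    best_target, best_score = None, -np.inf
    for t in targets:
        # Unimodal model: likelihood of the eye endpoint alone.
        score = multivariate_normal.logpdf(eye_endpoint, t["eye_mu"], t["eye_cov"])
        # Multimodal model: add weighted head-endpoint evidence if available.
        if head_endpoint is not None:
            score += head_weight * multivariate_normal.logpdf(
                head_endpoint, t["head_mu"], t["head_cov"])
        if score > best_score:
            best_target, best_score = t, score
    return best_target
```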
Ice-Breaking Technology: Robots and Computers Can Foster Meaningful Connections between Strangers through In-Person Conversations
Alex Wuqi Zhang (University of Chicago, Chicago, Illinois, United States), Ting-Han Lin (University of Chicago, Chicago, Illinois, United States), Xuan Zhao (Stanford University, Stanford, California, United States), Sarah Sebo (University of Chicago, Chicago, Illinois, United States)
Despite the clear benefits that social connection offers to well-being, strangers in close physical proximity regularly ignore each other due to their tendency to underestimate the positive consequences of social connection. In a between-subjects study (N = 49 pairs, 98 participants), we investigated the effectiveness of a humanoid robot, a computer screen, and a poster at stimulating meaningful, face-to-face conversations between two strangers by posing progressively deeper questions. We found that the humanoid robot facilitator was able to elicit the greatest compliance with the deep conversation questions. Additionally, participants in conversations facilitated by either the humanoid robot or the computer screen reported greater happiness and connection to their conversation partner than those in conversations facilitated by a poster. These results suggest that technology-enabled conversation facilitators can be useful in breaking the ice between strangers, ultimately helping them develop closer connections through face-to-face conversations and thereby enhance their overall well-being.
Vergence Matching: Inferring Attention to Objects in 3D Environments for Gaze-Assisted Selection
Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom), Christopher Clarke (University of Bath, Bath, United Kingdom), Joshua Newn (Lancaster University, Lancaster, Lancashire, United Kingdom), Mathias N. Lystbæk (Aarhus University, Aarhus, Denmark), Ken Pfeuffer (Aarhus University, Aarhus, Denmark), Hans Gellersen (Lancaster University, Lancaster, United Kingdom)
Gaze pointing is the de facto standard to infer attention and interact in 3D environments but is constrained by motor and sensor limitations. To circumvent these limitations, we propose a vergence-based motion correlation method to detect visual attention toward very small targets. Smooth depth movements relative to the user are induced on 3D objects, which cause slow vergence eye movements when looked upon. Using the principle of motion correlation, the depth movements of the object and vergence eye movements are matched to determine which object the user is focussing on. In two user studies, we demonstrate how the technique can reliably infer gaze attention on very small targets, systematically explore how different stimulus motions affect attention detection, and show how the technique can be extended to multi-target selection. Finally, we provide example applications using the concept and design guidelines for small target and accuracy-independent attention detection in 3D environments.
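A minimal sketch of the underlying motion-correlation principle follows, assuming vergence is approximated by the depth of the gaze convergence point and using an illustrative correlation threshold rather than the paper's values.

```python
# Hypothetical sketch: match induced object depth motion against the user's
# vergence signal over a sliding window and pick the best-correlated object.
import numpy as np

def match_by_vergence(object_depths, vergence_signal, threshold=0.8):
    """object_depths: {object_id: depth samples over the window};
    vergence_signal: vergence depth samples over the same window."""
    v = np.asarray(vergence_signal, dtype=float)
    best_id, best_r = None, threshold
    for obj_id, depths in object_depths.items():
        d = np.asarray(depths, dtype=float)
        r = np.corrcoef(d, v)[0, 1]  # Pearson correlation of the two signals
        if r > best_r:
            best_id, best_r = obj_id, r
    return best_id  # None when no object's motion matches strongly enough
```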
Meeting Your Virtual Twin: Effects of Photorealism and Personalization on Embodiment, Self-Identification and Perception of Self-Avatars in Virtual Reality
Anca Salagean (University of Bath, Bath, United Kingdom), Eleanor Crellin (University of Bath, Bath, United Kingdom), Martin Parsons (University of Bath, Bath, United Kingdom), Darren Cosker (Microsoft Research, Cambridge, United Kingdom), Danaë Stanton Fraser (University of Bath, Bath, United Kingdom)
Embodying virtual twins – photorealistic and personalized avatars – will soon be easily achievable in consumer-grade VR. For the first time, we explored how photorealism and personalization impact self-identification, as well as embodiment, avatar perception and presence. Twenty participants were individually scanned and, in a two-hour session, embodied four avatars (high photorealism personalized, low photorealism personalized, high photorealism generic, low photorealism generic). Questionnaire responses revealed stronger mid-immersion body ownership for the high photorealism personalized avatars compared to all other avatar types, and stronger embodiment for high photorealism compared to low photorealism avatars and for personalized compared to generic avatars. In a self-other face distinction task, participants took significantly longer to pause the face morphing videos of high photorealism personalized avatars, suggesting a stronger self-identification bias with these avatars. Photorealism and personalization were perceptually positive features; how employing these avatars in VR applications impacts users over time requires longitudinal investigation.
A Fitts' Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces
Uta Wagner (Aarhus University, Aarhus N, Denmark), Mathias N. Lystbæk (Aarhus University, Aarhus, Denmark), Pavel Manakhov (Aarhus University, Aarhus, Denmark), Jens Emil Sloth Grønbæk (Aarhus University, Aarhus, Denmark), Ken Pfeuffer (Aarhus University, Aarhus, Denmark), Hans Gellersen (Lancaster University, Lancaster, United Kingdom)
Gaze-Hand Alignment has recently been proposed for multimodal selection in 3D. The technique takes advantage of gaze for target pre-selection, as it naturally precedes manual input. Selection is then completed when manual input aligns with gaze on the target, without the need for an additional click method. In this work, we evaluate two alignment techniques, Gaze&Finger and Gaze&Handray, which combine gaze with image-plane pointing and raycasting respectively, in comparison with hands-only baselines and Gaze&Pinch as an established multimodal technique. We used a Fitts' Law study design with targets presented at different depths in the visual scene to assess the effect of parallax on performance. The alignment techniques outperformed their respective hands-only baselines. Gaze&Finger is efficient when targets are close to the image plane but less performant with increasing target depth due to parallax.
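For context, pointing studies of this kind typically model movement time MT with the Shannon formulation of Fitts' Law, where D is the distance to the target and W its width; the paper's exact parameterization for targets at different depths may differ.

```latex
MT = a + b \cdot ID, \qquad ID = \log_2\left(\frac{D}{W} + 1\right)
```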
Marking Material Interactions with Computer Vision
Peter Gyory (University of Colorado Boulder, Boulder, Colorado, United States), S. Sandra Bae (University of Colorado Boulder, Boulder, Colorado, United States), Ruhan Yang (University of Colorado Boulder, Boulder, Colorado, United States), Ellen Yi-Luen Do (University of Colorado Boulder, Boulder, Colorado, United States), Clement Zheng (National University of Singapore, Singapore, Singapore)
The electronics-centered approach to physical computing presents challenges when designers build tangible interactive systems due to its inherent emphasis on circuitry and electronic components. To explore an alternative physical computing approach we have developed a computer vision (CV) based system that uses a webcam, computer, and printed fiducial markers to create functional tangible interfaces. Through a series of design studios, we probed how designers build tangible interfaces with this CV-driven approach. In this paper, we apply the annotated portfolio method to reflect on the fifteen outcomes from these studios. We observed that CV markers offer versatile materiality for tangible interactions, afford the use of democratic materials for interface construction, and engage designers in embodied debugging with their own vision as a proxy for CV. By sharing our insights, we inform other designers and educators who seek alternative ways to facilitate physical computing and tangible interaction design.
Memory Manipulations in Extended Reality
Elise Bonnail (Institut Polytechnique de Paris, Paris, France), Wen-Jie Tseng (Institut Polytechnique de Paris, Paris, France), Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom), Eric Lecolinet (Institut Polytechnique de Paris, Paris, France), Samuel Huron (Télécom Paris, Institut Polytechnique de Paris, Palaiseau, Île-de-France, France), Jan Gugenheimer (TU Darmstadt, Darmstadt, Germany)
Human memory has notable limitations (e.g., forgetting) which have necessitated a variety of memory aids (e.g., calendars). As we grow closer to mass adoption of everyday Extended Reality (XR), which frequently leverages perceptual limitations (e.g., redirected walking), it becomes pertinent to consider how XR could leverage memory limitations (forgetting, distorting, persistence) to induce memory manipulations. As memories highly impact our self-perception, social interactions, and behaviors, there is a pressing need to understand XR Memory Manipulations (XRMMs). We ran three speculative design workshops (n=12), with XR and memory researchers creating 48 XRMM scenarios. Through thematic analysis, we define XRMMs, present a framework of their core components, and reveal three classes (at encoding, pre-retrieval, at retrieval). Each class differs in terms of technology (AR, VR) and impact on memory (influencing quality of memories, inducing forgetting, distorting memories). We raise ethical concerns and discuss opportunities of perceptual and memory manipulations in XR.
“I normally wouldn't talk with strangers”: Introducing a Socio-Spatial Interface for Fostering Togetherness Between Strangers
Ge Guo (Cornell University, Ithaca, New York, United States), Gilly Leshed (Cornell University, Ithaca, New York, United States), Keith Evan Green (Cornell University, Ithaca, New York, United States)
Interacting with strangers can be beneficial but also challenging. Fortunately, these challenges can lead to design opportunities. In this paper, we present the design and evaluation of a socio-spatial interface, SocialStools, that leverages the human propensity for embodied interaction to foster togetherness between strangers. SocialStools is an installation of three responsive stools on caster wheels that generate sound and imagery in the near environment as three strangers sit on them, move them, and rotate them relative to each other. In our study with 12 groups of three strangers, we found a sense of togetherness emerged through interaction, evidenced by different patterns of socio-spatial movements, verbal communication, non-verbal behavior, and interview responses. We present our findings, articulate reasons for the cultivation of togetherness, consider the unique social affordances of our spatial interface in shifting attention during interpersonal communication, and provide design implications. This research contributes insights toward designing cyber-physical interfaces that foster interaction and togetherness among strangers at a time when cultivating togetherness is especially critical.
Humorous Robotic Behavior as a New Approach to Mitigating Social Awkwardness
Viva Sarah Press (Reichman University, Herzliya, Israel), Hadas Erel (Reichman University, Herzliya, Israel)
Social awkwardness is a frequent challenge to healthy social interactions and can dramatically impact how people feel, communicate, and behave. It is known that humor can invoke positive feelings and enable people to modify their perspective on a situation. We explored whether a non-humanoid robotic object performing humorous behavior can reduce social awkwardness between two strangers. The robot was peripherally incorporated into the interaction to preserve the natural social flow. We compared the impact of humorous and non-humorous robotic gestures on the human-human interaction. Objective and subjective measures indicate that despite being peripheral to the human-human interaction, the humorous robotic gestures significantly reduced the intensity of awkwardness between the strangers. Our findings suggest humorous robotic behavior can be used to enhance interpersonal relationships hindered by awkwardness while still preserving natural human-human interaction.
BubbleTex: Designing Heterogenous Wettable Areas for Carbonation Bubble Patterns on Surfaces
Harpreet Sareen (The University of Tokyo, Tokyo, Japan), Yibo Fu (The New School, New York, New York, United States), Nour Boulahcen (Telecom ParisTech, Paris, France), Yasuaki Kakehi (The University of Tokyo, Tokyo, Japan)
Materials are a key part of our daily experiences. Recently, researchers have been devising new ways to utilize materials directly from our physical world for the design of objects and interactions. We present a new fabrication technique that enables control of the positions and sizes of CO2 bubbles within carbonated liquids. Instead of soap bubbles, boiling water, or droplets, we show the creation of patterns, images, and text through sessile bubbles that exhibit a lifetime of several days. Surfaces with mixed-wettability regions are created on glass and plastic using ceramic coatings or plasma projection, yielding patterns that are barely visible to the human eye. Different regions react to liquids differently: when carbonated liquid is poured onto the surface, bubbles nucleate in the hydrophobic regions, adhere strongly to the surface, and can be controlled in size from 0.5 mm to 6.5 mm. Bubbles initially pop or become buoyant during CO2 supersaturation, then stabilize at their positions within minutes. Technical evaluation shows stabilization under various conditions. Our design software allows users to import images and convert them into parametric pixelated forms suited to fabrication, so that bubbles nucleate at the required positions. Various applications demonstrate aspects that may be harnessed for a wide range of uses in daily life. Through this work, we enable the use of carbonation bubbles as a new design material for designers and researchers.
Characteristics of Deep and Skim Reading on Smartphones vs. Desktop: A Comparative Study
Xiuge Chen (The University of Melbourne, Melbourne, Victoria, Australia), Namrata Srivastava (Monash University, Melbourne, Victoria, Australia), Rajiv Jain (Adobe Research, College Park, Maryland, United States), Jennifer Healey (Adobe Research, San Jose, California, United States), Tilman Dingler (University of Melbourne, Melbourne, Victoria, Australia)
Deep reading fosters text comprehension, memory, and critical thinking. The growing prevalence of digital reading on mobile interfaces raises concerns that deep reading is being replaced by skimming and sifting through information, but this is currently unmeasured. Traditionally, reading quality is assessed using comprehension tests, which require readers to explicitly answer a set of carefully composed questions. To quantify and understand reading behaviour in natural settings and at scale, however, implicit measures of deep versus skim reading are needed across desktop and mobile devices, the most prominent digital reading platforms. In this paper, we present an approach to systematically induce deep and skim reading and subsequently train classifiers that discriminate the two reading styles based on eye movement patterns and interaction data. Based on a user study with 29 participants, we created models that detect deep reading on both devices with up to 0.82 AUC. We present the characteristics of deep reading and discuss how our models can be used to measure the effect of reading UI design and to monitor long-term changes in reading behaviour.
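As a hedged illustration of this kind of classification setup, the sketch below trains a classifier on session-level features and evaluates it by AUC; the feature names, model choice, and synthetic stand-in data are ours, not the study's.

```python
# Illustrative sketch only: deep vs. skim reading classification from
# session-level features (e.g., mean fixation duration, saccade amplitude,
# regression rate, scroll speed). Synthetic data stands in for real gaze logs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))     # 200 sessions x 4 gaze/interaction features
y = rng.integers(0, 2, size=200)  # 1 = deep reading, 0 = skim reading
clf = RandomForestClassifier(n_estimators=100, random_state=0)
auc = cross_val_score(clf, X, y, scoring="roc_auc", cv=5).mean()
print(f"cross-validated AUC: {auc:.2f}")  # ~0.5 on random data, by construction
```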
Exploring Memory-Oriented Interactions with Digital Photos In and Across Time: A Field Study of Chronoscope
Amy Yo Sue Chen (Simon Fraser University, Surrey, British Columbia, Canada), William Odom (Simon Fraser University, Surrey, British Columbia, Canada), Carman Neustaedter (Simon Fraser University, Surrey, British Columbia, Canada), Ce Zhong (Simon Fraser University, Surrey, British Columbia, Canada), Henry Lin (Simon Fraser University, Surrey, British Columbia, Canada)
We describe a field study of Chronoscope, a tangible photo viewer that lets people revisit and explore their digital photos with the support of temporal metadata. Chronoscope offers different temporal modalities for organizing one's personal digital photo archive, and for exploring possible connections in and across time, among photos and memories. We deployed four Chronoscopes in four households for three months to understand participants' experiences over time. Our goals were to investigate the reflective potential of temporal modalities as an alternative design approach for supporting memory-oriented photo exploration, and to empirically explore conceptual propositions related to slow technology. Findings revealed that Chronoscope catalyzed a range of reflective experiences on participants' life histories and life stories. It opened up alternative ways of considering time and the potential longevity of personal photo archives. We conclude with implications that present opportunities for future HCI research and practice.
The Walking Talking Stick: Understanding Automated Note-Taking in Walking Meetings
Luke Haliburton (LMU Munich, Munich, Germany), Natalia Bartłomiejczyk (Lodz University of Technology, Lodz, Poland), Albrecht Schmidt (LMU Munich, Munich, Germany), Paweł W. Woźniak (Chalmers University of Technology, Gothenburg, Sweden), Jasmin Niess (University of St. Gallen, St. Gallen, Switzerland)
While walking meetings offer a healthy alternative to sit-down meetings, they also pose practical challenges. Taking notes is difficult while walking, which limits the potential of walking meetings. To address this, we designed the Walking Talking Stick, a tangible device with integrated voice recording, transcription, and a physical highlighting button to facilitate note-taking during walking meetings. We investigated our system in a three-condition between-subjects user study with thirty pairs of participants (N=60) who conducted 15-minute outdoor walking meetings. Participants used either clip-on microphones, the prototype without the button, or the prototype with the highlighting button. We found that the tangible device increased task focus, and the physical highlighting button facilitated turn-taking and resulted in more useful notes. Our work demonstrates how interactive artifacts can incentivize users to hold meetings in motion and enhance conversation dynamics. We contribute insights for future systems that support conducting work tasks in mobile environments.
Exploring Long-Term Mediated Relations with a Shape-Changing Thing: A Field Study of coMorphing Stool
Ce Zhong (Simon Fraser University, Surrey, British Columbia, Canada), Ron Wakkary (Simon Fraser University, Surrey, British Columbia, Canada), William Odom (Simon Fraser University, Surrey, British Columbia, Canada), Mikael Wiberg (Umeå University, Umeå, Sweden), Amy Yo Sue Chen (Simon Fraser University, Surrey, British Columbia, Canada), Doenja Oogjes (Simon Fraser University, Vancouver, British Columbia, Canada), Jordan White (Simon Fraser University, Surrey, British Columbia, Canada), MinYoung Yoo (Simon Fraser University, Surrey, British Columbia, Canada)
This paper presents a long-term field study of the coMorphing stool: a computational thing that can change shape in response to the surrounding light. We deployed 5 coMorphing stools to 5 participants’ homes over 9 months. As co-speculators, the participants reflected on their mediated relations with the coMorphing stool. Findings suggest that they perceived the subtle transformations of the coMorphing stool in the early days of the deployment. After becoming familiar with these features, they interpreted their daily entanglements with the coMorphing stool in diverse personalized ways. Over time, the co-speculators accepted the coMorphing stool as part of their homes. These findings contribute new empirical insights to the shape-changing research field in HCI and enrich discussions on higher-level concepts in postphenomenology. Reflecting on these experiences promotes further HCI explorations on computational things.
The “Conversation” about Loss: Understanding How Chatbot Technology Was Used in Supporting People in Grief
Anna Xygkou (University of Kent, Canterbury, United Kingdom), Panote Siriaraya (Kyoto Institute of Technology, Kyoto, Japan), Alexandra Covaci (University of Kent, Canterbury, United Kingdom), Holly Gwen Prigerson (Weill Cornell Medicine, New York, New York, United States), Robert Neimeyer (University of Memphis, Memphis, Tennessee, United States), Chee Siang Ang (University of Kent, Canterbury, Kent, United Kingdom), Wan-Jou She (Nara Institute of Science and Technology, Ikoma City, Nara, Japan)
While conversational agents have traditionally been used for simple tasks such as scheduling meetings and customer service support, recent advancements have led researchers to examine their use in complex social situations, such as providing emotional support and companionship. For mourners, who can be vulnerable to loneliness and disruption of self-identity, such technology offers a unique way to help them cope with grief. In this study, we explore the potential benefits and risks of such a practice through semi-structured interviews with 10 mourners who actively used chatbots at different phases of their loss. Our findings indicate seven ways in which chatbots were used to help people cope with grief, including taking the role of a listener and acting as a simulation of the deceased, a romantic partner, a friend, or an emotion coach. We then highlight how interacting with the chatbots impacted mourners' grief experience, and conclude the paper with further research opportunities.
Here and Now: Creating Improvisational Dance Movements with a Mixed Reality Mirror
Qiushi Zhou (University of Melbourne, Melbourne, Victoria, Australia), Louise Grebel (Université Paris-Saclay, Orsay, France), Andrew Irlitti (University of Melbourne, Melbourne, Australia), Julie Ann Minaai (The University of Melbourne, Southbank, Australia), Jorge Goncalves (University of Melbourne, Melbourne, Australia), Eduardo Velloso (University of Melbourne, Melbourne, Victoria, Australia)
This paper explores using mixed reality (MR) mirrors for supporting improvisational dance making. Motivated by the prevalence of mirrors in dance studios and inspired by Forsythe’s Improvisation Technologies, we conducted workshops with 13 dancers and choreographers to inform the design of future MR visualisation and annotation tools for dance. The workshops involved using a prototype MR mirror as a technology probe that reveals the spatial and temporal relationships between the reflected dancing body and its surroundings during improvisation; speed dating group interviews around future design ideas; follow-up surveys and extended interviews with a digital media dance artist and a dance educator. Our findings highlight how the MR mirror enriches dancers' temporal and spatial perception, creates multi-layered presence, and affords appropriation by dancers. We also discuss the unique place of MR mirrors in the theoretical context of dance and in the history of movement visualisation, and distil lessons for broader HCI research.
Morphing Identity: Exploring Self-Other Identity Continuum through Interpersonal Facial Morphing Experience
Kye Shimizu (Sony Computer Science Laboratories, Inc., Tokyo, Japan), Santa Naruse (Sony Computer Science Laboratories, Inc., Tokyo, Japan), Jun Nishida (Sony Computer Science Laboratories, Inc., Tokyo, Japan), Shunichi Kasahara (Sony Computer Science Laboratories, Inc., Tokyo, Japan)
We explored continuous changes in self-other identity by designing an interpersonal facial morphing experience where the facial images of two users are blended and then swapped over time. Both users' facial images are displayed side by side, with each user controlling their own morphing facial images, allowing us to create and investigate a multifaceted interpersonal experience. To explore this with diverse social relationships, we conducted qualitative and quantitative investigations through public exhibitions. We found that there is a window of self-identification as well as a variety of interpersonal experiences in the facial morphing process. From these insights, we synthesized a Self-Other Continuum represented by a sense of agency and facial identity. This continuum has implications in terms of the social and subjective aspects of interpersonal communication, which enables further scenario design and could complement findings from research on interactive devices for remote communication.
Speech-Augmented Cone-of-Vision for Exploratory Data Analysis
Riccardo Bovo (Imperial College London, London, United Kingdom), Daniele Giunchi (University College London, London, United Kingdom), Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom), Joshua Newn (Lancaster University, Lancaster, Lancashire, United Kingdom), Hans Gellersen (Aarhus University, Aarhus, Denmark), Enrico Costanza (UCL Interaction Centre, London, United Kingdom), Thomas Heinis (Imperial College, London, United Kingdom)
Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than the approximation provided by head direction. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.
Quantified Canine: Inferring Dog Personality From Wearables
Lakmal Meegahapola (Idiap Research Institute, Martigny, Switzerland), Marios Constantinides (Nokia Bell Labs, Cambridge, United Kingdom), Zoran Radivojevic (Nokia Bell Labs, Cambridge, United Kingdom), Hongwei Li (Nokia Bell Labs, Cambridge, United Kingdom), Daniele Quercia (Nokia Bell Labs, Cambridge, United Kingdom), Michael S. Eggleston (Nokia Bell Labs, Murray Hill, New Jersey, United States)
Being able to assess dog personality can be used, for example, to match shelter dogs with future owners and to personalize dog activities. Such an assessment typically relies on experts or psychological scales administered to dog owners, both of which are costly. To tackle that challenge, we built a device called "Patchkeeper" that can be strapped on the pet's chest and measures activity through an accelerometer and a gyroscope. In an in-the-wild deployment involving 12 healthy dogs, we collected 1300 hours of sensor activity data and dog personality test results from two validated questionnaires. By matching these two datasets, we trained ten machine learning classifiers that predicted dog personality from activity data, achieving AUCs between 0.63 and 0.90, suggesting the value of tracking psychological signals of pets using wearable technologies.
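As an illustration of the sensing side of such a pipeline, here is a sketch of windowed accelerometer feature extraction; the sampling rate, window length, and feature set are assumptions for illustration, not details from the paper.

```python
# Hypothetical feature extraction from chest-worn accelerometer data.
import numpy as np

def window_features(acc, fs=50, window_s=60):
    """acc: (N, 3) array of accelerometer samples at fs Hz.
    Returns one feature row per non-overlapping window."""
    w = fs * window_s
    rows = []
    for start in range(0, len(acc) - w + 1, w):
        seg = acc[start:start + w]
        mag = np.linalg.norm(seg, axis=1)            # overall movement intensity
        rows.append([mag.mean(), mag.std(), mag.max(),
                     np.abs(np.diff(mag)).mean()])   # proxy for jerkiness
    return np.array(rows)
```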
Lyric App Framework: A Web-based Framework for Developing Interactive Lyric-driven Musical Applications
Jun Kato (National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan), Masataka Goto (National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan)
Lyric videos have become a popular medium to convey lyrical content to listeners, but they present the same content whenever they are played and cannot adapt to listeners' preferences. Lyric apps, as we name them, are a new form of lyric-driven visual art that can render different lyrical content depending on user interaction and address the limitations of static media. To open up this novel design space for programmers and musicians, we present Lyric App Framework, a web-based framework for building interactive graphical applications that play musical pieces and show lyrics synchronized with playback. We designed the framework to provide a streamlined development experience for building production-ready lyric apps with creative coding libraries of choice. We held programming contests twice and collected 52 examples of lyric apps, enabling us to reveal eight representative categories, confirm the framework's effectiveness, and report lessons learned.
Emotion AI at Work: Implications for Workplace Surveillance, Emotional Labor, and Emotional Privacy
Kat Roemmich (University of Michigan, Ann Arbor, Michigan, United States), Florian Schaub (University of Michigan, Ann Arbor, Michigan, United States), Nazanin Andalibi (University of Michigan, Ann Arbor, Michigan, United States)
Workplaces are increasingly adopting emotion AI, promising benefits to organizations. However, little is known about the perceptions and experiences of workers subject to emotion AI in the workplace. Our interview study with (n=15) US adult workers addresses this gap, finding that (1) participants viewed emotion AI as a deep violation of workers' privacy over their sensitive emotional information; (2) emotion AI may function to enforce workers' compliance with emotional labor expectations, and workers may engage in emotional labor as a mechanism to preserve privacy over their emotions; and (3) workers may be exposed to a wide range of harms as a consequence of emotion AI in the workplace. Findings reveal the need to recognize and define an individual right to what we introduce as emotional privacy, as well as raise important research and policy questions on how to protect and preserve emotional privacy within and beyond the workplace.
Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input
Baosheng James Hou (Lancaster University, Lancaster, United Kingdom), Joshua Newn (Lancaster University, Lancaster, Lancashire, United Kingdom), Ludwig Sidenmark (Lancaster University, Lancaster, United Kingdom), Anam Ahmad Khan (National University of Science and Technology, Islamabad, Pakistan), Per Bækgaard (Technical University of Denmark, Kgs. Lyngby, Denmark), Hans Gellersen (Lancaster University, Lancaster, United Kingdom)
Head movement is widely used as a uniform type of input for human-computer interaction. However, there are fundamental differences between head movements coupled with gaze in support of our visual system, and head movements performed as gestural expression. Both Head-Gaze and Head Gestures are of utility for interaction but differ in their affordances. To facilitate the treatment of Head-Gaze and Head Gestures as separate types of input, we developed HeadBoost as a novel classifier, achieving high accuracy in classifying gaze-driven versus gestural head movement (F1-Score: 0.89). We demonstrate the utility of the classifier with three applications: gestural input while avoiding unintentional input by Head-Gaze; target selection with Head-Gaze while avoiding Midas Touch by head gestures; and switching of cursor control between Head-Gaze for fast positioning and Head Gesture for refinement. The classification of Head-Gaze and Head Gesture allows for seamless head-based interaction while avoiding false activation.
Breaking Out of the Ivory Tower: A Large-scale Analysis of Patent Citations to HCI Research
Hancheng Cao (Stanford University, Stanford, California, United States), Yujie Lu (NLP, Santa Barbara, California, United States), Yuting Deng (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Daniel McFarland (Stanford University, Stanford, California, United States), Michael S. Bernstein (Stanford University, Stanford, California, United States)
What is the impact of human-computer interaction research on industry? While it is impossible to track all research impact pathways, the growing literature on translational research impact measurement offers patent citations as one measure of how industry recognizes and draws on research in its inventions. In this paper, we perform a large-scale measurement study primarily of 70,000 patent citations to premier HCI research venues, tracing how HCI research is cited in United States patents over the last 30 years. We observe that 20.1% of papers from these venues, including 60–80% of papers at UIST and 13% of papers in a broader dataset of SIGCHI-sponsored venues overall, are cited by patents, far greater than premier venues in science overall (9.7%) and NLP (11%). However, the time lag between a patent and its paper citations is long (10.5 years) and getting longer, suggesting that HCI research and practice may not be efficiently connected.
transPAF: Rendering Omnidirectional Impact Feedback with Dynamic Point of Application of Force All Round a Controller
Hong-Xian Chen (National Chengchi University, Taipei, Taiwan), Shih-Kang Chiu (National Chengchi University, Taipei, Taiwan), Chi-Ching Wen (National Chengchi University, Taipei, Taiwan), Hsin-Ruey Tsai (National Chengchi University, Taipei, Taiwan)
Impact is a common form of feedback on virtual reality (VR) controllers. It applies at different points of application of force (PAFs) and in different directions across varied scenarios, e.g., using a sword or pickaxe, stabbing or slashing with a sword, or balls flying into a racket from different directions. Therefore, rendering dynamic PAF and force direction is essential. We propose transPAF to render omnidirectional impact feedback with dynamic PAF all round the controller. transPAF consists of a controller, a semicircular track, a linear track, and an impactor, all of which are rotatable. The impactor can move to any position on a sphere around the controller and rotate in any direction, so dynamic PAF and force direction are achieved independently of each other. We conducted a just-noticeable difference (JND) study to understand users' ability to distinguish position and direction separately, and a VR study to verify that feedback with dynamic PAF and force direction enhances VR realism.
PEARL: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis
Weizhou Luo (Technische Universität Dresden, Dresden, Germany), Zhongyuan Yu (TU Dresden, Dresden, Germany), Rufat Rzayev (Technische Universität Dresden, Dresden, Germany), Marc Satkowski (Technische Universität Dresden, Dresden, Germany), Stefan Gumhold (TU Dresden, Dresden, Germany), Matthew McGinity (Technische Universität Dresden, Dresden, Germany), Raimund Dachselt (Technische Universität Dresden, Dresden, Germany)
This paper presents PEARL, a mixed-reality approach for the analysis of human movement data in situ. As the physical environment shapes human motion and behavior, the analysis of such motion can benefit from the direct inclusion of the environment in the analytical process. We present methods for exploring movement data in relation to surrounding regions of interest, such as objects, furniture, and architectural elements. We introduce concepts for selecting and filtering data through direct interaction with the environment, and a suite of visualizations for revealing aggregated and emergent spatial and temporal relations. More sophisticated analysis is supported through complex queries comprising multiple regions of interest. To illustrate the potential of PEARL, we developed an Augmented Reality-based prototype and conducted expert review sessions and scenario walkthroughs in a simulated exhibition. Our contribution lays the foundation for leveraging the physical environment in the in-situ analysis of movement data.
Negotiating Experience and Communicating Information Through Abstract Metaphor
Courtney N. Reed (Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany), Paul Strohmeier (Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany), Andrew P. McPherson (Queen Mary University of London, London, United Kingdom)
An implicit assumption in metaphor use is that it requires grounding in a familiar concept, prominently seen in the popular Desktop Metaphor. In human-to-human communication, however, abstract metaphors, without such grounding, are often used with great success. To understand when and why metaphors work, we present a case study of metaphor use in voice teaching. Voice educators must teach about subjective, sensory experiences and rely on abstract metaphor to express information about unseen and intangible processes inside the body. We present a thematic analysis of metaphor use by 12 voice teachers. We found that metaphor works not because of strong grounding in the familiar, but because of its ambiguity and flexibility, allowing shared understanding between individual lived experiences. We summarise our findings in a model of metaphor-based communication. This model can be used as an analysis tool within the existing taxonomies of metaphor in user interaction for better understanding why metaphor works in HCI. It can also be used as a design resource for thinking about metaphor use and abstracting metaphor strategies from both novel and existing designs.
Gaming for Post-Work Recovery: The Role of Immersion
Jon Mella (University College London, London, United Kingdom), Ioanna Iacovides (University of York, York, United Kingdom), Anna L. Cox (University College London, London, United Kingdom)
Playing digital games can be an effective means of recovering from daily work strain. However, limited research has examined which player experiences contribute to this process, limiting the ability of players to select games and play them in a manner which helps them recover effectively. Hence, this paper reports a mixed-methods survey study investigating how a recent post-work recovery episode was impacted by immersion: a player experience which has been implicated in theoretical accounts relating games and recovery. We found that particular dimensions of immersion, such as cognitive involvement, support specific post-work recovery needs. Moreover, participants report not only experiencing benefits in a passive manner, but actively optimising their levels of immersion to achieve recovery. This study extends previous research by improving our understanding of how digital games support post-work recovery and by demonstrating that immersion is key in determining the restorative potential of digital games.
“What if everyone is able to program?” – Exploring the Role of Software Development in Science Fiction
Kevin Krings (University of Siegen, Siegen, Germany), Nino S. Bohn (University of Siegen, Siegen, Germany), Nora Anna Luise Hille (University of Siegen, Siegen, Germany), Thomas Ludwig (University of Siegen, Siegen, Germany)
For decades, research around emerging technologies has been inspired by science fiction and vice versa. While so far almost only the technologies themselves have been considered, we explore the underlying software development and programming approaches. We therefore conducted a detailed media content analysis of twenty-seven movies, examining the role of software development in science fiction by identifying and investigating new approaches to programming and how software development is conceptualized and portrayed within science fiction scenes. With an additional analysis of eighteen design fiction stories exploring the scenario "What if everyone is able to program?", we envision potential impacts of the democratization of software development on business and society. Our study opens new discussions and perspectives by investigating the current vision of the future of programming, and it uncovers new approaches to software development that can serve as a starting point for further research in the HCI community.
Drifting Off in Paradise: Why People Sleep in Virtual Reality
Michael Yin (University of British Columbia, Vancouver, British Columbia, Canada), Robert Xiao (University of British Columbia, Vancouver, British Columbia, Canada)
Sleep is important for humans, and past research has considered methods of improving sleep through technologies such as virtual reality (VR). However, there has been limited research on how such VR technology may affect the experiential and practical aspects of sleep, especially outside of a clinical lab setting. We consider this research gap through the lens of individuals who voluntarily engage in the practice of sleeping in VR. Semi-structured interviews with 14 participants who have slept in VR reveal insights regarding the motivations, actions, and experiential factors that uniquely define this practice. We find that participant motives can be largely categorized through either the experiential or social affordances of VR. We tie these motives into findings regarding the unique customs of sleeping in VR, involving set-up both within the physical and virtual space. Finally, we identify current and future challenges for sleeping in VR, and propose prospective design directions.
Z-Ring: Single-point Bio-impedance Sensing for Gesture, Touch, Object and User Recognition
Anandghan Waghmare (University of Washington, Seattle, Washington, United States), Youssef Ben Taleb (University of Washington, Seattle, Washington, United States), Ishan Chatterjee (University of Washington, Seattle, Washington, United States), Arjun Narendra (University of Washington, Seattle, Washington, United States), Shwetak Patel (University of Washington, Seattle, Washington, United States)
We present Z-Ring, a wearable ring that enables gesture input, object detection, user identification, and interaction with passive user interface (UI) elements using a single sensing modality and a single point of instrumentation on the finger. Z-Ring uses active electrical field sensing to detect changes in the hand's electrical impedance caused by finger motions or contact with external surfaces. We develop a diverse set of interactions and evaluate them with 21 users. We demonstrate: (1) single- and two-handed gesture recognition with up to 93% accuracy; (2) tangible input with a set of passive touch UI elements, including buttons, a continuous 1D slider, and a continuous 2D trackpad, with 91.8% accuracy, <4.4 cm MAE, and <4.1 cm MAE, respectively; (3) object recognition across six household objects with 94.5% accuracy; and (4) user identification among 14 users with 99% accuracy. Z-Ring's sensing methodology uses only a single co-located electrode pair for both receiving and sensing, lending itself well to future miniaturization for use in on-the-go scenarios.
Feel the Force, See the Force: Exploring Visual-tactile Associations of Deformable Surfaces with Colours and Shapes
Cameron Steer (University of Bath, Bath, United Kingdom), Teodora Dinca (University of Bath, Bath, United Kingdom), Crescent Jicol (University of Bath, Bath, United Kingdom), Michael J. Proulx (University of Bath, Bath, United Kingdom), Jason Alexander (University of Bath, Bath, United Kingdom)
Deformable interfaces provide unique interaction potential for force input, for example, when users physically push into a soft display surface. However, there remains limited understanding of which visual-tactile design elements signify the presence and stiffness of such deformable force-input components. In this paper, we explore how people associate surface stiffness with colours, graphical shapes, and physical shapes. We conducted a cross-modal correspondence (CC) study in which 30 participants associated different surface stiffnesses with colours and shapes. Our findings evidence CCs between stiffness levels and a subset of the 2D/3D shapes and colours used in the study. We distil our findings into three design recommendations: (1) lighter colours should be used to indicate soft surfaces, and darker colours should indicate stiff surfaces; (2) rounded shapes should be used to indicate soft surfaces, while less-curved shapes should be used to indicate stiffer surfaces; and (3) longer 2D drop-shadows should be used to indicate softer surfaces, while shorter drop-shadows should be used to indicate stiffer surfaces.
WESPER: Zero-shot and Realtime Whisper to Normal Voice Conversion for Whisper-based Speech Interactions
Jun Rekimoto (The University of Tokyo, Tokyo, Japan)
Recognizing whispered speech and converting it to normal speech creates many possibilities for speech interaction. Because the sound pressure of whispered speech is significantly lower than that of normal speech, it can be used as a semi-silent speech interaction in public places without being audible to others. Converting whispers to normal speech also improves the speech quality for people with speech or hearing impairments. However, conventional speech conversion techniques either do not provide sufficient conversion quality or require speaker-dependent datasets consisting of pairs of whispered and normal speech utterances. To address these problems, we propose WESPER, a zero-shot, real-time whisper-to-normal speech conversion mechanism based on self-supervised learning. WESPER consists of a speech-to-unit encoder, which generates hidden speech units common to both whispered and normal speech, and a unit-to-speech (UTS) decoder, which reconstructs speech from the encoded speech units. Unlike existing methods, this conversion is user-independent and does not require a paired dataset of whispered and normal speech. The UTS decoder can reconstruct speech in any target speaker's voice from speech units, and it requires only an unlabeled target speaker's speech data. We confirmed that the quality of the speech converted from a whisper was improved while preserving its natural prosody. Additionally, we confirmed the effectiveness of the proposed approach for speech reconstruction for people with speech or hearing disabilities.
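The encoder/decoder split described here can be sketched structurally. The following is a toy stand-in, assuming a convolutional feature extractor, a nearest-codebook quantizer, and a GRU decoder; the layer choices and sizes are assumptions, not WESPER's released architecture, and a neural vocoder (omitted) would turn the decoder output into audio.

```python
# Structural sketch of a speech-to-unit encoder and unit-to-speech decoder;
# all architectural details are assumptions, not WESPER's released model.
import torch
import torch.nn as nn

class SpeechToUnitEncoder(nn.Module):
    """Maps a waveform to discrete speech-unit IDs intended to be shared
    by whispered and normal speech."""
    def __init__(self, n_units=100, dim=256):
        super().__init__()
        self.feat = nn.Conv1d(1, dim, kernel_size=320, stride=160)  # ~10 ms hop
        self.codebook = nn.Embedding(n_units, dim)

    def forward(self, wav):                               # wav: (B, samples)
        h = self.feat(wav.unsqueeze(1)).transpose(1, 2)   # (B, T, dim)
        cb = self.codebook.weight.unsqueeze(0).expand(h.size(0), -1, -1)
        return torch.cdist(h, cb).argmin(dim=-1)          # nearest unit IDs

class UnitToSpeechDecoder(nn.Module):
    """Reconstructs a mel-style spectrogram from unit IDs; a vocoder (not
    shown) would render this as audio in the target speaker's voice."""
    def __init__(self, n_units=100, dim=256, n_mels=80):
        super().__init__()
        self.emb = nn.Embedding(n_units, dim)
        self.net = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_mels)

    def forward(self, units):                             # units: (B, T)
        h, _ = self.net(self.emb(units))
        return self.out(h)                                # (B, T, n_mels)

units = SpeechToUnitEncoder()(torch.randn(1, 16000))      # 1 s of "whisper"
mel = UnitToSpeechDecoder()(units)
```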
UEyes: Understanding Visual Saliency across User Interface Types
Yue Jiang (Aalto University, Espoo, Finland)Luis A. Leiva (University of Luxembourg, Esch-sur-Alzette, Luxembourg)Hamed Rezazadegan Tavakoli (Nokia Technologies, Espoo, Finland)Paul R. B. Houssel (University of Luxembourg, Esch-sur-Alzette, Luxembourg)Julia Kylmälä (Aalto University, Espoo, Finland)Antti Oulasvirta (Aalto University, Helsinki, Finland)
While user interfaces (UIs) display elements such as images and text in a grid-based layout, UI types differ significantly in the number of elements and how they are displayed. For example, webpage designs rely heavily on images and text, whereas desktop UIs tend to feature numerous small images. To examine how such differences affect the way users look at UIs, we collected and analyzed a large eye-tracking-based dataset, UEyes (62 participants and 1,980 UI screenshots), covering four major UI types: webpage, desktop UI, mobile UI, and poster. We analyze differences across UI types in biases related to factors such as color, location, and gaze direction. We also compare state-of-the-art predictive models and propose improvements for better capturing typical tendencies across UI types. Both the dataset and the models are publicly available.
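The abstract does not list the evaluation metrics, but a standard way to score saliency predictions against eye-tracking data is Normalized Scanpath Saliency (NSS): the mean z-scored predicted saliency at the observed fixation locations. A minimal sketch, under the caveat that the paper's exact protocol may differ:

```python
# Normalized Scanpath Saliency (NSS); a common saliency metric, though the
# paper's exact evaluation protocol may differ from this sketch.
import numpy as np

def nss(saliency_map: np.ndarray, fixations: np.ndarray) -> float:
    """saliency_map: (H, W) prediction; fixations: (N, 2) row/col pixel
    coordinates of recorded fixations on the same screenshot."""
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    rows = fixations[:, 0].astype(int)
    cols = fixations[:, 1].astype(int)
    return float(z[rows, cols].mean())  # higher = better agreement
```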
Message Ritual: A Posthuman Account of Living with Lamp
Nina Rajcic (Monash University, Melbourne, Victoria, Australia)Jon McCormack (Monash University, Melbourne, Victoria, Australia)
As we become increasingly entangled with digital technologies, the boundary between human and machine is progressively blurring. Adopting a performative, posthumanist perspective resolves this ambiguity by proposing that such boundaries are not predetermined; rather, they are enacted within a certain material configuration. Using this approach, dubbed 'Entanglement HCI', this paper presents Message Ritual: a novel, integrated AI system that encourages the re-framing of memory through machine-generated poetics. Embodied within a domestic table lamp, the system listens in on conversations occurring within the home, drawing out key topics and phrases of the day and reconstituting them through machine-generated poetry, delivered to household members via SMS upon waking each morning. Participants across four households were asked to live with the lamp over a two-week period. We present a diffractive analysis exploring how the lamp "becomes with" participants and discuss the implications of this method for future HCI research.
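The daily loop described (listen, extract the day's topics, versify, deliver by SMS at dawn) could be skeletonized as below. generate_poem is a stub for a generative language model and the SMS delivery is left abstract; none of this is the authors' implementation.

```python
# Skeleton of the described daily pipeline; generate_poem() is a stub and
# SMS delivery is omitted. Hypothetical, not the authors' system.
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "to", "of", "i", "you", "it", "we"}

def key_phrases(transcripts: list[str], k: int = 5) -> list[str]:
    """Pick the k most frequent non-stopword words heard during the day."""
    words = [w for t in transcripts for w in t.lower().split()
             if w.isalpha() and w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(k)]

def generate_poem(topics: list[str]) -> str:  # stub for a generative model
    return "Of " + ", ".join(topics) + ", the day still speaks."

def morning_message(transcripts: list[str]) -> str:
    return generate_poem(key_phrases(transcripts))

# Each morning: send morning_message(...) to every household member by SMS.
```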
ProxSituated Visualization: An Extended Model of Situated Visualization using Proxies for Physical Referents
Kadek Ananta Satriadi (University of South Australia, Adelaide, Australia)Andrew Cunningham (University of South Australia, Adelaide, Australia)Ross T. Smith (University of South Australia, Adelaide, Australia)Tim Dwyer (Monash University, Melbourne, Australia)Adam Mark Drogemuller (University of South Australia, Adelaide, Australia)Bruce H. Thomas (University of South Australia, Mawson Lakes, South Australia, Australia)
Existing situated visualization models assume the user is able to directly interact with the objects and spaces to which the data refers (known as physical referents). We review a growing body of work exploring scenarios where the user interacts with a proxy representation of the physical referent rather than immediately with the object itself. This introduces a complex mixture of immediate situatedness and proxies of situatedness that goes beyond the expressiveness of current models. We propose an extended model of situated visualization that encompasses Immediate Situated Visualization and ProxSituated (Proxy of Situated) Visualization. Our model describes a set of key entities involved in proxSituated scenarios and important relationships between them. From this model, we derive design dimensions and apply them to existing situated visualization work. The resulting design space allows us to describe and evaluate existing scenarios, as well as to creatively generate new conceptual scenarios.
No Pie in the (Digital) Sky: Co-Imagining the Food Metaverse
Alexandra Covaci (University of Kent, Canterbury, Kent, United Kingdom)Khawla Alhasan (University of Kent, Canterbury, Kent, United Kingdom)Mayank Loonker (University of Kent, Canterbury, United Kingdom)Bernardine Farrell (University of Kent, Canterbury, United Kingdom)Luma Tabbaa (University of Kent, Canterbury, Kent, United Kingdom)Sophia Ppali (University of Kent, Canterbury, United Kingdom)Chee Siang Ang (University of Kent, Canterbury, Kent, United Kingdom)
Human behaviour and habits co-evolve with technology, and the metaverse is poised to become a key player in reshaping how we live our everyday lives. Given the importance of food in our daily lives, we ask: how will our relationships with food be transformed by the metaverse, and what are the promises and pitfalls of this technology? To answer this, we present a co-design study that reveals the important elements people value in their daily interactions with food. We then present a speculative catalogue of novel metaverse food experiences, and insights from discussing these ideas with food designers, anthropologists and metaverse experts. Our work aims to provide designers with inspirations for building a metaverse that: provides inclusive opportunities for the future of food; helps re-discover the forgotten or lost knowledge about food; facilitates the exploration, excitement and joy of eating; and reinvigorates the ways that food can soothe and heal.
The Impact of Navigation Aids on Search Performance and Object Recall in Wide-Area Augmented Reality
Radha Kumaran (University of California, Santa Barbara, Santa Barbara, California, United States)You-Jin Kim (University of California, Santa Barbara, Santa Barbara, California, United States)Anne Milner (University of California, Santa Barbara, Santa Barbara, California, United States)Tom Bullock (University of California, Santa Barbara, Santa Barbara, California, United States)Barry Giesbrecht (University of California, Santa Barbara, Santa Barbara, California, United States)Tobias Höllerer (University of California, Santa Barbara, Santa Barbara, California, United States)
Head-worn augmented reality (AR) is a hotly pursued and increasingly feasible contender paradigm for replacing or complementing smartphones and watches for continual information consumption. Here, we compare three different AR navigation aids (on-screen compass, on-screen radar and in-world vertical arrows) in a wide-area outdoor user study (n=24) where participants search for hidden virtual target items amongst physical and virtual objects. We analyzed participants’ search task performance, movements, eye-gaze, survey responses and object recall. There were two key findings. First, all navigational aids enhanced search performance relative to a control condition, with some benefit and strongest user preference for in-world arrows. Second, users recalled fewer physical objects than virtual objects in the environment, suggesting reduced awareness of the physical environment. Together, these findings suggest that while navigational aids presented in AR can enhance search task performance, users may pay less attention to the physical environment, which could have undesirable side-effects.
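Of the three aids, the on-screen compass reduces to a heading-relative bearing computation. A minimal sketch of that math, not the study's implementation:

```python
# Where to point an on-screen compass needle, given the user's 2D position,
# facing direction, and a target location. Illustrative only.
import math

def compass_angle(user_xy, heading_rad, target_xy):
    """Bearing from the user to the target, relative to the user's facing
    direction, wrapped to (-pi, pi]."""
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    rel = math.atan2(dy, dx) - heading_rad
    return math.atan2(math.sin(rel), math.cos(rel))  # angle wrapping

# e.g., rotate the needle by compass_angle((0, 0), 0.0, (3, 4)) radians
```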
Embodying Physics-Aware Avatars in Virtual Reality
Yujie Tao (Stanford University, Stanford, California, United States)Cheng Yao Wang (Cornell University, Ithaca, New York, United States)Andrew D. Wilson (Microsoft Research, Redmond, Washington, United States)Eyal Ofek (Microsoft Research, Redmond, Washington, United States)Mar Gonzalez-Franco (Microsoft Research, Redmond, Washington, United States)
Embodiment toward an avatar in virtual reality (VR) is generally stronger when there is a high degree of alignment between the user's and self-avatar's motion. However, one-to-one mapping between the two is not always ideal when the user interacts with the virtual environment. On these occasions, the user input often leads to unnatural behavior without physical realism (e.g., objects penetrating the virtual body, or the body remaining unmoved when hit). We investigate how adding physics correction to self-avatar motion impacts embodiment. A physics-aware self-avatar preserves the physical meaning of the movement but introduces discrepancies between the user's and self-avatar's motion, and this motion contingency is a determining factor for embodiment. To understand its impact, we conducted an in-lab study (n = 20) where participants interacted with obstacles on their upper bodies in VR with and without physics correction. Our results showed that, rather than compromising embodiment, a physics-responsive self-avatar improved embodiment compared to the no-physics condition in both active and passive interactions.
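A toy version of the kind of correction at stake: if the tracked hand penetrates a spherical obstacle, project it back to the obstacle's surface before driving the self-avatar. The study's system would involve a full physics solver; this sketch only illustrates the tracked-versus-corrected discrepancy.

```python
# Minimal penetration correction against a sphere obstacle; the corrected
# position, not the raw tracked one, drives the self-avatar. Illustrative
# stand-in for a full physics solver.
import numpy as np

def correct_hand(tracked, center, radius):
    offset = tracked - center
    dist = np.linalg.norm(offset)
    if dist >= radius or dist == 0.0:
        return tracked                          # no penetration: pass through
    return center + offset * (radius / dist)    # push back to the surface

# A tracked hand 0.2 m inside a 0.3 m obstacle ends up on its surface:
hand = correct_hand(np.array([0.0, 1.0, 0.3]),
                    np.array([0.0, 1.0, 0.5]), 0.3)
```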
Crafting Interactive Circuits on Glazed Ceramic Ware
Clement Zheng (National University of Singapore, Singapore, Singapore)Bo Han (National University of Singapore, Singapore, Singapore)Xin Liu (National University of Singapore, Singapore, Singapore)Laura Devendorf (University of Colorado Boulder, Boulder, Colorado, United States)Hans Tan (National University of Singapore, Singapore, Singapore)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)
Glazed ceramic is a versatile material that we use every day. In this paper, we present a new approach that instruments existing glazed ceramic ware with interactive electronic circuits. We informed this work by collaborating with a ceramics designer and connected his craft practice to our experience in physical computing. From this partnership, we developed a systematic approach that begins with the subtractive fabrication of traces on glazed ceramic surfaces via the resist-blasting technique, followed by applying conductive ink into the inlaid traces. We capture and detail this approach through an annotated flowchart for others to refer to, as well as externalize the material insights we uncovered through ceramic and circuit swatches. We then demonstrate a range of interactive home applications built with this approach. Finally, we reflect on the process we took and discuss the importance of collaborating with craftspeople for material-driven research within HCI.
JumpMod: Haptic Backpack that Modifies Users’ Perceived Jump
Romain Nith (University of Chicago, Chicago, Illinois, United States)Jacob Serfaty (University of Chicago, Chicago, Illinois, United States)Samuel G. Shatzkin (University of Chicago, Chicago, Illinois, United States)Alan Shen (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
Vertical force-feedback is extremely rare in mainstream interactive experiences. This is because existing haptic devices capable of forces strong enough to modify a user's jump require grounding (e.g., motion platforms or pulleys) or cumbersome actuators (e.g., large propellers attached to or held by the user). To enable interactive experiences to feature jump-based haptics without sacrificing wearability, we propose JumpMod, an untethered backpack that modifies one's sense of jumping. JumpMod achieves this by moving a weight up/down along the user's back, which modifies perceived jump momentum, creating accelerated & decelerated jump sensations. In our second study, we empirically found that our device can render five effects: jumping higher, landing harder/softer, and being pulled higher/lower. Based on these, we designed four jumping experiences for VR & sports. Finally, in our third study, we found that participants preferred wearing our device in an interactive context, such as one of our jump-based VR applications.
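The actuation principle is Newton's third law: accelerating the backpack's movable mass downward during ascent adds an equal upward reaction force on the wearer, and vice versa. A back-of-envelope sketch with assumed numbers, not the paper's hardware specification:

```python
# Reaction force from shifting the backpack mass; all numbers are assumptions.
m_weight = 2.0   # kg, movable mass
travel = 0.20    # m, usable rail length along the user's back
t_burst = 0.10   # s, duration of one actuation burst

a = 2 * travel / t_burst**2   # constant acceleration covering the rail: 40 m/s^2
f = m_weight * a              # upward reaction on the wearer: 80 N
print(f"{a:.0f} m/s^2 over {travel} m -> {f:.0f} N on the wearer")
```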
"I Am a Mirror Dweller": Probing the Unique Strategies Users Take to Communicate in the Context of Mirrors in Social Virtual Reality
Kexue Fu (Hongshen Honors School, Chongqing, China)Yixin Chen (University of Aberdeen, Aberdeen, United Kingdom)Jiaxun Cao (Duke Kunshan University, Kunshan, Jiangsu, China)Xin Tong (Duke Kunshan University, Kunshan, Jiangsu, China)RAY LC (City University of Hong Kong, Hong Kong, Hong Kong)
Increasingly popular social virtual reality (VR) platforms like VRChat have created new ways for people to interact with each other, generating dedicated user communities with unique idioms of socializing in an alternative world. In VRChat, users frequently gather in front of mirrors en masse during online interactions. Understanding how user communities deal with the mirror's unique interactions can generate insights for supporting communication in social VR. In this study, we investigated the mirror's synergistic effect with avatars on user behaviors and conversational performance. Qualitative findings indicate that avatar-mediated communication through mirrors provides functions like ensuring synchronization of incarnations, increasing immersion, and enhancing idealized embodiment to express bolder behaviors anonymously. Quantitative studies show that while mirrors improve self-perception, they have a potentially adverse effect on conversational performance, similar to the role of self-viewing in video conferencing. Studying how users interact with mirrors in an immersive environment allows us to explore how digital environments affect spatialized interactions when transported from physical to digital domains.
Supporting Piggybacked Co-Located Leisure Activities via Augmented Reality
Samantha Reig (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Erica Principe Cruz (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Melissa Powers (New York University, New York, New York, United States)Jennifer He (Stanford University, Stanford, California, United States)Timothy Chong (University of Washington, Seattle, Washington, United States)Yu Jiang Tham (Snap Inc., Seattle, Washington, United States)Sven Kratz (Snap Inc., Seattle, Washington, United States)Ava Robinson (Northwestern University, Evanston, Illinois, United States)Brian A. Smith (Columbia University, New York, New York, United States)Rajan Vaish (Snap Inc., Santa Monica, California, United States)Andrés Monroy-Hernández (Princeton University, Princeton, New Jersey, United States)
Technology, especially the smartphone, is villainized for taking meaning and time away from in-person interactions and secluding people into "digital bubbles". We believe this is not an intrinsic property of digital gadgets, but evidence of a lack of imagination in technology design. Leveraging augmented reality (AR) toward this end allows us to create experiences for multiple people, their pets, and their environments. In this work, we explore the design of AR technology that "piggybacks" on everyday leisure to foster co-located interactions among close ties (with other people and pets). We designed, developed, and deployed three such AR applications, and evaluated them through a 41-participant and 19-pet user study. We gained key insights about the ability of AR to spur and enrich interaction in new channels, the importance of customization, and the challenges of designing for the physical aspects of AR devices (e.g., holding smartphones). These insights guide design implications for the novel research space of co-located AR.