List of Notable Papers

Up to the top 30 papers are shown for each category.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

9
Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
Pat Pataranutaporn (Massachusetts Institute of Technology, Boston, Massachusetts, United States), Chayapatr Archiwaranguprok (University of the Thai Chamber of Commerce, Bangkok, Thailand), Samantha W. T. Chan (MIT Media Lab, Cambridge, Massachusetts, United States), Elizabeth Loftus (UC Irvine, Irvine, California, United States), Pattie Maes (MIT Media Lab, Cambridge, Massachusetts, United States)
AI is increasingly used to enhance images and videos, both intentionally and unintentionally. As AI editing tools become more integrated into smartphones, users can modify or animate photos into realistic videos. This study examines the impact of AI-altered visuals on false memories—recollections of events that didn’t occur or deviate from reality. In a pre-registered study, 200 participants were divided into four conditions of 50 each. Participants viewed original images, completed a filler task, then saw stimuli corresponding to their assigned condition: unedited images, AI-edited images, AI-generated videos, or AI-generated videos of AI-edited images. AI-edited visuals significantly increased false recollections, with AI-generated videos of AI-edited images having the strongest effect (2.05x compared to control). Confidence in false memories was also highest for this condition (1.19x compared to control). We discuss potential applications in HCI, such as therapeutic memory reframing, and challenges in ethical, legal, political, and societal domains.
6
Sprayable Sound: Exploring the Experiential and Design Potential of Physically Spraying Sound Interaction
Jongik Jeon (KAIST, Daejeon, Korea, Republic of), Chang Hee Lee (KAIST (Korea Advanced Institute of Science and Technology), Daejeon, Korea, Republic of)
Perfume and fragrance have captivated people for centuries across different cultures. Inspired by the ephemeral nature of sprayable olfactory interactions and experiences, we explore the potential of applying a similar interaction principle to the auditory modality. In this paper, we present SoundMist, a sonic interaction method that enables users to generate ephemeral auditory presences by physically dispersing a liquid into the air, much like the fading phenomenon of fragrance. We conducted a study to understand the experiential factors inherent in sprayable sound interaction and held an ideation workshop to identify potential design spaces or opportunities that this interaction could shape. Our findings, derived from thematic analysis, suggest that physically sprayable sound interaction can induce experiences related to four key factors—materiality of sound produced by dispersed liquid particles, different sounds entangled with each liquid, illusive perception of temporally floating sound, and enjoyment derived from blending different sounds—and can be applied to artistic practices, safety indications, multisensory approaches, and emotional interfaces.
6
"It Brought the Model to Life": Exploring the Embodiment of Multimodal I3Ms for People who are Blind or have Low Vision
Samuel Reinders (Monash University, Melbourne, Australia), Matthew Butler (Monash University, Melbourne, Australia), Kim Marriott (Monash University, Melbourne, Australia)
3D-printed models are increasingly used to provide people who are blind or have low vision (BLV) with access to maps, educational materials, and museum exhibits. Recent research has explored interactive 3D-printed models (I3Ms) that integrate touch gestures, conversational dialogue, and haptic vibratory feedback to create more engaging interfaces. Prior research with sighted people has found that imbuing machines with human-like behaviours, i.e., embodying them, can make them appear more lifelike, increasing social perception and presence. Such embodiment can increase engagement and trust. This work presents the first exploration into the design of embodied I3Ms and their impact on BLV engagement and trust. In a controlled study with 12 BLV participants, we found that I3Ms using specific embodiment design factors, such as haptic vibratory and embodied personified voices, led to an increased sense of liveliness and embodiment, as well as engagement, but had mixed impact on trust.
5
EchoBreath: Continuous Respiratory Behavior Recognition in the Wild via Acoustic Sensing on Smart Glasses
Kaiyi Guo (Shanghai Jiao Tong University, Shanghai, China), Qian Zhang (Shanghai Jiao Tong University, Shanghai, China), Dong Wang (Shanghai Jiao Tong University, Shanghai, China)
Monitoring the occurrence count of abnormal respiratory symptoms helps provide critical support for respiratory health. While this is necessary, there is still no unobtrusive and reliable method that can be used effectively in real-world settings. In this paper, we present EchoBreath, a sensing system that combines passive and active acoustics to monitor abnormal respiratory symptoms. EchoBreath uses a speaker and microphone under the frame of the glasses to emit ultrasonic waves and capture both passive sounds and echo profiles, which can effectively distinguish between subject-aware behaviors and background noise. Furthermore, a lightweight neural network with a 'Null' class and open-set filtering mechanisms substantially improves real-world applicability by filtering out unrelated activities. Our experiments, involving 25 participants, demonstrate that EchoBreath can recognize 6 typical respiratory symptoms in a laboratory setting with an accuracy of 93.1%. Additionally, a semi-in-the-wild study with 10 participants further validates that EchoBreath can continuously monitor respiratory abnormalities under real-world conditions. We believe that EchoBreath can serve as an unobtrusive and reliable way to monitor abnormal respiratory symptoms.
5
Toward Affective Empathy via Personalized Analogy Generation: A Case Study on Microaggression
Hyojin Ju (POSTECH, Pohang, Korea, Republic of), Jungeun Lee (POSTECH, Pohang, Korea, Republic of), Seungwon Yang (POSTECH, Pohang-si, Korea, Republic of), Jungseul Ok (POSTECH, Pohang, Korea, Republic of), Inseok Hwang (POSTECH, Pohang, Korea, Republic of)
The importance of empathy cannot be overstated in modern societies where people of diverse backgrounds increasingly interact. The HCI community has strived to foster affective empathy through immersive technologies. Many previous techniques are built on the premise that presenting the same experience as-is may help evoke the same emotion, which faces limitations, however, when emotional responses differ greatly across individuals. In this paper, we present a novel concept of generating a personalized experience based on a large language model (LLM) to facilitate affective empathy between individuals despite their differences. As a case study to showcase its effectiveness, we developed EmoSync, an LLM-based agent that generates personalized analogical microaggression situations, helping users personally resonate with a specific microaggression situation experienced by another person. EmoSync was designed and evaluated through a three-phase user study with more than 100 participants. We comprehensively discuss implications, limitations, and possible applications.
5
Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation
Jessica He (IBM Research, Yorktown Heights, New York, United States), Stephanie Houde (IBM Research, Cambridge, Massachusetts, United States), Justin D. Weisz (IBM Research AI, Yorktown Heights, New York, United States)
AI systems powered by large language models can act as capable assistants for writing and editing. In these tasks, the AI system acts as a co-creative partner, making novel contributions to an artifact-under-creation alongside its human partner(s). One question that arises in these scenarios is the extent to which AI should be credited for its contributions. We examined knowledge workers' views of attribution through a survey study (N=155) and found that they assigned different levels of credit across different contribution types, amounts, and initiative. Compared to a human partner, we observed a consistent pattern in which AI was assigned less credit for equivalent contributions. Participants felt that disclosing AI involvement was important and used a variety of criteria to make attribution judgments, including the quality of contributions, personal values, and technology considerations. Our results motivate and inform new approaches for crediting AI contributions to co-created work.
5
"It Brought Me Joy": Opportunities for Spatial Browsing in Desktop Screen Readers
Arnavi Chheda-Kothary (University of Washington, Seattle, Washington, United States), Ather Sharif (University of Washington, Seattle, Washington, United States), David Angel Rios (Columbia University, New York, New York, United States), Brian A. Smith (Columbia University, New York, New York, United States)
Blind or low-vision (BLV) screen-reader users have a significantly limited experience interacting with desktop websites compared to non-BLV, i.e., sighted users. This digital divide is exacerbated by the inability to browse the web spatially—an affordance that leverages spatial reasoning, which sighted users often rely on. In this work, we investigate the value of and opportunities for BLV screen-reader users to browse websites spatially (e.g., understanding page layouts). We additionally explore at-scale website layout understanding as a feature of desktop screen readers. We created a technology probe, WebNExt, to facilitate our investigation. Specifically, we conducted a lab study with eight participants and a five-day field study with four participants to evaluate spatial browsing using WebNExt. Our findings show that participants found spatial browsing intuitive and fulfilling, strengthening their connection to the design of web pages. Furthermore, participants envisioned spatial browsing as a step toward reducing the digital divide.
5
Beyond Vacuuming: How Can We Exploit Domestic Robots’ Idle Time?
Yoshiaki Shiokawa (University of Bath, Bath, United Kingdom), Winnie Chen (University of Bath, Bath, United Kingdom), Aditya Shekhar Nittala (University of Calgary, Calgary, Alberta, Canada), Jason Alexander (University of Bath, Bath, United Kingdom), Adwait Sharma (University of Bath, Bath, United Kingdom)
We are increasingly adopting domestic robots (e.g., Roomba) that provide relief from mundane household tasks. However, these robots typically spend only a short time executing their specific task and remain idle for long periods. They typically possess advanced mobility and sensing capabilities, and therefore have significant potential applications beyond their designed use. Our work explores this untapped potential of domestic robots in ubiquitous computing, focusing on how they can improve and support modern lifestyles. We conducted two studies: an online survey (n=50) to understand current usage patterns of these robots within homes and an exploratory study (n=12) with HCI and HRI experts. Our thematic analysis revealed 12 key dimensions for developing interactions with domestic robots and outlined over 100 use cases, illustrating how these robots can offer proactive assistance and provide privacy. Finally, we implemented a proof-of-concept prototype to demonstrate the feasibility of reappropriating domestic robots for diverse ubiquitous computing applications.
5
What Comes After Noticing?: Reflections on Noticing Solar Energy and What Came Next
Angella Mackey (Amsterdam University of Applied Sciences, Amsterdam, Netherlands), David NG McCallum (Rotterdam University of Applied Science, Rotterdam, Netherlands), Oscar Tomico (Eindhoven University of Technology, Eindhoven, Netherlands), Martijn de Waal (Amsterdam University of Applied Sciences, Amsterdam, Netherlands)
Many design researchers have been exploring what it means to take a more-than-human design approach in their practice. In particular, the technique of “noticing” has been explored as a way of intentionally opening a designer’s awareness to more-than-human worlds. In this paper we present autoethnographic accounts of our own efforts to notice solar energy. Through two studies we reflect on the transformative potential of noticing the more-than-human, and the difficulties in trying to sustain this change in oneself and one’s practice. We propose that noticing can lead to activating exiled capacities within the noticer, relational abilities that lie dormant in each of us. We also propose that emphasising sense-fullness in and through design can be helpful in the face of broader psychological or societal boundaries that block paths towards more relational ways of living with non-humans.
5
Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives
Meredith Ringel Morris (Google DeepMind, Seattle, Washington, United States), Jed R. Brubaker (University of Colorado Boulder, Boulder, Colorado, United States)
As AI systems quickly improve in both breadth and depth of performance, they lend themselves to creating increasingly powerful and realistic agents, including the possibility of agents modeled on specific people. We anticipate that within our lifetimes it may become common practice for people to create custom AI agents to interact with loved ones and/or the broader world after death; indeed, the past year has seen a boom in startups purporting to offer such services. We call these generative ghosts since such agents will be capable of generating novel content rather than merely parroting content produced by their creator while living. In this paper, we reflect on the history of technologies for AI afterlives, including current early attempts by individual enthusiasts and startup companies to create generative ghosts. We then introduce a novel design space detailing potential implementations of generative ghosts. We use this analytic framework to ground a discussion of the practical and ethical implications of various approaches to designing generative ghosts, including potential positive and negative impacts on individuals and society. Based on these considerations, we lay out a research agenda for the AI and HCI research communities to better understand the risk/benefit landscape of this novel technology to ultimately empower people who wish to create and interact with AI afterlives to do so in a beneficial manner.
4
Everything to Gain: Combining Area Cursors with increased Control-Display Gain for Fast and Accurate Touchless Input
Kieran Waugh (University of Glasgow, Glasgow, Scotland, United Kingdom), Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom), Euan Freeman (University of Glasgow, Glasgow, United Kingdom)
Touchless displays often use mid-air gestures to control on-screen cursors for pointer interactions. Area cursors can simplify touchless cursor input by implicitly targeting nearby widgets without the cursor entering the target. However, for displays with dense target layouts, the cursor still has to arrive close to the widget, meaning the benefits of area cursors for time-to-target and effort are diminished. Through two experiments, we demonstrate for the first time that fine-tuning the mapping between hand and cursor movements (control-display gain, CDG) can address the deficiencies of area cursors and improve the performance of touchless interaction. Across several display sizes and target densities (representative of the myriad public displays used in retail, transport, museums, etc.), our findings show that the forgiving nature of an area cursor compensates for the imprecision of a high CDG, helping users interact more effectively with smaller and more controlled hand/arm movements.
4
Customizing Emotional Support: How Do Individuals Construct and Interact With LLM-Powered Chatbots
Xi Zheng (City University of Hong Kong, Hong Kong, China), Zhuoyang Li (City University of Hong Kong, Hong Kong, China), Xinning Gui (The Pennsylvania State University, University Park, Pennsylvania, United States), Yuhan Luo (City University of Hong Kong, Hong Kong, China)
Personalized support is essential to fulfill individuals’ emotional needs and sustain their mental well-being. Large language models (LLMs), with great customization flexibility, hold promises to enable individuals to create their own emotional support agents. In this work, we developed ChatLab, where users could construct LLM-powered chatbots with additional interaction features including voices and avatars. Using a Research through Design approach, we conducted a week-long field study followed by interviews and design activities (N = 22), which uncovered how participants created diverse chatbot personas for emotional reliance, confronting stressors, connecting to intellectual discourse, reflecting mirrored selves, etc. We found that participants actively enriched the personas they constructed, shaping the dynamics between themselves and the chatbot to foster open and honest conversations. They also suggested other customizable features, such as integrating online activities and adjustable memory settings. Based on these findings, we discuss opportunities for enhancing personalized emotional support through emerging AI technologies.
4
SpeechCompass: Enhancing Mobile Captioning with Diarization and Directional Guidance via Multi-Microphone Localization
Artem Dementyev (Google Inc., Mountain View, California, United States), Dimitri Kanevsky (Google, Mountain View, California, United States), Samuel Yang (Google, Mountain View, California, United States), Mathieu Parvaix (Google Research, Mountain View, California, United States), Chiong Lai (Google, Mountain View, California, United States), Alex Olwal (Google Inc., Mountain View, California, United States)
Speech-to-text capabilities on mobile devices have proven helpful for hearing and speech accessibility, language translation, note-taking, and meeting transcripts. However, our foundational large-scale survey (n=263) shows that the inability to distinguish and indicate speaker direction makes them challenging in group conversations. SpeechCompass addresses this limitation through real-time, multi-microphone speech localization, where the direction of speech allows visual separation and guidance (e.g., arrows) in the user interface. We introduce efficient real-time audio localization algorithms and custom sound perception hardware, running on a low-power microcontroller with four integrated microphones, which we characterize in technical evaluations. Informed by a large-scale survey (n=494), we conducted an in-person study of group conversations with eight frequent users of mobile speech-to-text, who provided feedback on five visualization styles. The value of diarization and visualizing localization was consistent across participants, with everyone agreeing on the value and potential of directional guidance for group conversations.
4
Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception
Martin Feick (DFKI and Saarland University, Saarland Informatics Campus, Saarbrücken, Germany), Xuxin Tang (Computer Science Department, Blacksburg, Virginia, United States), Raul Garcia-Martin (Universidad Carlos III de Madrid, Leganes, Madrid, Spain), Alexandru Luchianov (MIT CSAIL, Cambridge, Massachusetts, United States), Roderick Wei Xiao Huang (MIT CSAIL, Cambridge, Massachusetts, United States), Chang Xiao (Adobe Research, San Jose, California, United States), Alexa Siu (Adobe Research, San Jose, California, United States), Mustafa Doga Dogan (Adobe Research, Basel, Switzerland)
Hybrid paper interfaces leverage augmented reality to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, virtual content can be embedded through direct links (e.g., QR codes); however, this impacts the aesthetics of the paper print and limits the available visual content space. To address this problem, we present Imprinto, an infrared inkjet watermarking technique that allows for invisible content embedding using only off-the-shelf IR inks and a camera. Imprinto was established through a psychophysical experiment, studying how much IR ink can be used while remaining invisible to users regardless of background color. We demonstrate that we can detect invisible IR content through our machine learning pipeline, and we developed an authoring tool that optimizes the amount of IR ink on the color regions of an input document for machine and human detectability. Finally, we demonstrate several applications, including augmenting paper documents and objects.
4
Towards Understanding Interactive Sonic Gastronomy with Chefs and Diners
Hongyue Wang (Monash University, Melbourne, Australia), Jialin Deng (Department of Human-Centred Computing, Monash University, Melbourne, Victoria, Australia), Linjia He (Monash University, Melbourne, Australia), Nathalie Overdevest (Monash University, Clayton, VIC, Australia), Ryan Wee (Monash University, Melbourne, Victoria, Australia), Yan Wang (Monash University, Melbourne, Australia), Phoebe O. Toups Dugas (Monash University, Melbourne, Australia), Don Samitha Elvitigala (Monash University, Melbourne, Australia), Florian ‘Floyd’ Mueller (Monash University, Melbourne, VIC, Australia)
With advancements in interactive technologies, research in human-food interaction (HFI) has begun to employ interactive sound to enrich the dining experience. However, chefs' creative use of this sonic interactivity as a new "ingredient" in their culinary practices remains underexplored. In response, we conducted an empirical study with six pairs of chefs and diners utilizing SoniCream, an ice cream cone that plays digital sounds while being consumed. Through exploration, creation, collaboration, and reflection, we identified four themes concerning culinary creativity, dining experience, interactive sonic gastronomy deployment, and chef-diner interplay. Building on the discussions at the intersection of these themes, we derived four design implications for creating interactive systems that could support chefs' culinary creativity, thereby enriching dining experiences. Ultimately, our work aims to help interaction designers fully incorporate chefs' perspectives into HFI research.
4
Understanding and Supporting Peer Review Using AI-reframed Positive Summary
Chi-Lan Yang (The University of Tokyo, Tokyo, Japan), Alarith Uhde (The University of Tokyo, Tokyo, Japan), Naomi Yamashita (NTT, Keihanna, Japan), Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan)
While peer review enhances writing and research quality, harsh feedback can frustrate and demotivate authors. Hence, it is essential to explore how critiques should be delivered to motivate authors and enable them to keep iterating their work. In this study, we explored the impact of appending an automatically generated positive summary to the peer reviews of a writing task, alongside varying levels of overall evaluations (high vs. low), on authors’ feedback reception, revision outcomes, and motivation to revise. Through a 2x2 online experiment with 137 participants, we found that adding an AI-reframed positive summary to otherwise harsh feedback increased authors’ critique acceptance, whereas low overall evaluations of their work led to increased revision efforts. We discuss the implications of using AI in peer feedback, focusing on how AI-driven critiques can influence critique acceptance and support research communities in fostering productive and friendly peer feedback practices.
4
Xstrings: 3D Printing Cable-Driven Mechanism for Actuation, Deformation, and Manipulation
Jiaji Li (MIT, Cambridge, Massachusetts, United States), Shuyue Feng (Zhejiang University, Hangzhou, China), Maxine Perroni-Scharf (MIT, Cambridge, Massachusetts, United States), Yujia Liu (Tsinghua University, Beijing, China), Emily Guan (Pratt Institute, Brooklyn, New York, United States), Guanyun Wang (Zhejiang University, Hangzhou, China), Stefanie Mueller (MIT CSAIL, Cambridge, Massachusetts, United States)
In this paper, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we present a design tool that allows users to embed cable-driven mechanisms into object geometries based on their desired interactions by automatically placing joints and cables inside the object. To assess our system, we investigate the effect of printing parameters on the strength of Xstrings objects and the extent to which the interactions are repeatable without cable breakage. We demonstrate the application potential of Xstrings through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
4
InsightBridge: Enhancing Empathizing with Users through Real-Time Information Synthesis and Visual Communication
Junze Li (The Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Yue Zhang (Shenzhen University, Shenzhen, China), Chengbo Zheng (Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Dingdong Liu (The Hong Kong University of Science and Technology, Hong Kong, China), Zeyu Huang (The Hong Kong University of Science and Technology, New Territories, Hong Kong), Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong)
User-centered design requires researchers to deeply understand target users throughout the design process. However, during early-stage user interviews, researchers may misinterpret users due to time constraints, incorrect assumptions, and communication barriers. To address this challenge, we introduce InsightBridge, a tool that supports real-time, AI-assisted information synthesis and visual-based verification. InsightBridge automatically organizes relevant information from ongoing interview conversations into an empathy map. It further allows researchers to specify elements to generate visual abstracts depicting the selected information, and then review these visuals with users and refine them as needed. We evaluated the effectiveness of InsightBridge through a within-subject study (N=32) from both the researchers’ and users’ perspectives. Our findings indicate that InsightBridge can assist researchers in note-taking and organization, as well as timely visual checking, thereby enhancing mutual understanding with users. Additionally, users’ discussions of the visuals prompt them to recall overlooked details and scenarios, leading to more insightful ideas.
4
How Can Interactive Technology Help Us to Experience Joy With(in) the Forest? Towards a Taxonomy of Tech for Joyful Human-Forest Interactions
Ferran Altarriba Bertran (Tampere University, Tampere, Finland), Oğuz 'Oz' Buruk (Tampere University, Tampere, Finland), Jordi Márquez Puig (Universitat de Girona, Salt, Girona, Spain), Juho Hamari (Tampere University, Tampere, Finland)
This paper presents intermediate-level knowledge in the form of a taxonomy that highlights 12 different ways in which interactive tech might support forest-related experiences that are joyful for humans. It can inspire and provide direction for designs that aim to enrich the experiential texture of forests. The taxonomy stemmed from a reflexive analysis of 104 speculative ideas produced during a year-long co-design process, where we co-experienced and creatively engaged with a diverse range of forests and forest-related activities alongside 250+ forest-goers with varied backgrounds and sensitivities. Given the breadth of forests and populations involved, our work foregrounds a rich set of design directions that set an actionable early frame for creating tech that supports joyful human-forest interplays – one that we hope will be extended and consolidated in future research, ours and others'.
4
An Approach to Elicit Human-Understandable Robot Expressions to Support Human-Robot Interaction
Jan Leusmann (LMU Munich, Munich, Germany), Steeven Villa (LMU Munich, Munich, Germany), Thomas Liang (University of Illinois Urbana-Champaign, Champaign, Illinois, United States), Chao Wang (Honda Research Institute Europe, Offenbach/Main, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany), Sven Mayer (LMU Munich, Munich, Germany)
Understanding the intentions of robots is essential for natural and seamless human-robot collaboration. Ensuring that robots have means for non-verbal communication is a basis for intuitive and implicit interaction. For this, we describe an approach to elicit and design human-understandable robot expressions. We outline the approach in the context of non-humanoid robots. We paired human mimicking and enactment with research from gesture elicitation in two phases: first, to elicit expressions, and second, to ensure they are understandable. We present an example application through two studies (N=16 & N=260) of our approach to elicit expressions for a simple 6-DoF robotic arm. We show that the approach enabled us to design robot expressions that signal curiosity and interest in getting attention. Our main contribution is an approach to generate and validate understandable expressions for robots, enabling more natural human-robot interaction.
4
Exploring Mobile Touch Interaction with Large Language Models
Tim Zindulka (University of Bayreuth, Bayreuth, Germany), Jannek Maximilian Sekowski (University of Bayreuth, Bayreuth, Germany), Florian Lehmann (University of Bayreuth, Bayreuth, Germany), Daniel Buschek (University of Bayreuth, Bayreuth, Germany)
Interacting with Large Language Models (LLMs) for text editing on mobile devices currently requires users to break out of their writing environment and switch to a conversational AI interface. In this paper, we propose to control the LLM via touch gestures performed directly on the text. We first chart a design space that covers fundamental touch input and text transformations. In this space, we then concretely explore two control mappings: spread-to-generate and pinch-to-shorten, with visual feedback loops. We evaluate this concept in a user study (N=14) that compares three feedback designs: no visualisation, text length indicator, and length + word indicator. The results demonstrate that touch-based control of LLMs is both feasible and user-friendly, with the length + word indicator proving most effective for managing text generation. This work lays the foundation for further research into gesture-based interaction with LLMs on touch devices.
4
Ego vs. Exo and Active vs. Passive: Investigating the Individual and Combined Effects of Viewpoint and Navigation on Spatial Immersion and Understanding in Immersive Storytelling
Tao Lu (Georgia Institute of Technology, Atlanta, Georgia, United States), Qian Zhu (The Hong Kong University of Science and Technology, Hong Kong, China), Tiffany S. Ma (Georgia Institute of Technology, Atlanta, Georgia, United States), Wong Kam-Kwai (The Hong Kong University of Science and Technology, Hong Kong, China), Anlan Xie (Georgia Institute of Technology, Atlanta, Georgia, United States), Alex Endert (Georgia Institute of Technology, Atlanta, Georgia, United States), Yalong Yang (Georgia Institute of Technology, Atlanta, Georgia, United States)
Visual storytelling combines visuals and narratives to communicate important insights. While web-based visual storytelling is well-established, leveraging the next generation of digital technologies for visual storytelling, specifically immersive technologies, remains underexplored. We investigated the impact of the story viewpoint (from the audience's perspective) and navigation (when progressing through the story) on spatial immersion and understanding. First, we collected web-based 3D stories and elicited design considerations from three VR developers. We then adapted four selected web-based stories to an immersive format. Finally, we conducted a user study (N=24) to examine egocentric and exocentric viewpoints, active and passive navigation, and the combinations they form. Our results indicated significantly higher preferences for egocentric+active (higher agency and engagement) and exocentric+passive (higher focus on content). We also found a marginal significance of viewpoints on story understanding and a strong significance of navigation on spatial immersion.
4
SqueezeMe: Creating Soft Inductive Pressure Sensors with Ferromagnetic Elastomers
Thomas Preindl (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy)Andreas Pointner (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy)Nimal Jagadeesh Kumar (University of Sussex, Brighton, United Kingdom)Nitzan Cohen (Free University of Bozen-Bolzano, Bolzano, Italy)Niko Münzenrieder (Free University of Bozen Bolzano, Bozen-Bolzano, Italy)Michael Haller (Free University of Bozen-Bolzano, Bolzano, Italy)
We introduce SqueezeMe, a soft and flexible inductive pressure sensor with high sensitivity made from ferromagnetic elastomers for wearable and embedded applications. Constructed with silicone polymers and ferromagnetic particles, this biocompatible sensor responds to pressure and deformation by varying inductance through ferromagnetic particle density changes, enabling precise measurements. We detail the fabrication process and demonstrate how silicones with varying Shore hardness and different ferromagnetic fillers affect the sensor's sensitivity. Applications like weight, air pressure, and pulse measurements showcase the sensor’s versatility for integration into soft robotics and flexible electronics.
4
Attracting Fingers with Waves: Potential Fields Using Active Lateral Forces Enhance Touch Interactions
Zhaochong Cai (Delft University of Technology, Delft, Netherlands)David Abbink (Delft University of Technology, Delft, Netherlands)Michael Wiertlewski (Delft University of Technology, Delft, Netherlands)
Touchscreens and touchpads offer intuitive interfaces but provide limited tactile feedback, usually just mechanical vibrations. These devices lack continuous feedback to guide users’ fingers toward specific directions. Recent innovations in surface haptic devices, however, leverage ultrasonic traveling waves to create active lateral forces on a bare fingertip. This paper investigates the effects and design possibilities of active force feedback in touch interactions by rendering artificial potential fields on a touchpad. Three user studies revealed that: (1) users perceived attractive and repulsive fields as bumps and holes with similar detection thresholds; (2) step-wise force fields improved targeting by 22.9% compared to friction-only methods; and (3) active force fields effectively communicated directional cues to the users. Several applications were tested, with user feedback favoring this approach for its enhanced tactile experience, added enjoyment, realism, and ease of use.
4
FlexiVol: a Volumetric Display with an Elastic Diffuser to Enable Reach-Through Interaction
Elodie Bouzbib (Universidad Publica de Navarra, Pamplona, Spain)Iosune Sarasate Azcona (Universidad Pública de Navarra, Pamplona, Spain)Unai Javier Fernández (Universidad Pública de Navarra, Pamplona, Spain)Ivan Fernández (Universidad Pública de Navarra, Pamplona, Navarra, Spain)Manuel Lopez-Amo (Universidad Pública de Navarra, Pamplona, Spain)Iñigo Ezcurdia (Public University of Navarra, Pamplona, Spain)Asier Marzo (Universidad Publica de Navarra, Pamplona, Navarre, Spain)
Volumetric displays render true 3D graphics without forcing users to wear headsets or glasses. However, the optical diffusers that volumetric displays employ are rigid and thus do not allow for direct interaction. FlexiVol employs elastic diffusers to allow users to reach inside the display volume to have direct interaction with true 3D content. We explored various diffuser materials in terms of visual and mechanical properties. We correct the distortions of the volumetric graphics projected on elastic oscillating diffusers and propose a design space for FlexiVol, enabling various gestures and actions through direct interaction techniques. A user study suggests that selection, docking and tracing tasks can be performed faster and more precisely using direct interaction when compared to indirect interaction with a 3D mouse. Finally, applications such as a virtual pet or landscape editing highlight the advantages of a volumetric display that supports direct interaction.
4
BIT: Battery-free, IC-less and Wireless Smart Textile Interface and Sensing System
Weiye Xu (Tsinghua University, Beijing, China)Tony Li (Stony Brook University, Stony Brook, New York, United States)Yuntao Wang (Tsinghua University, Beijing, China)Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada)Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
The development of smart textile interfaces is hindered by the inclusion of rigid hardware components and batteries within the fabric, which pose challenges in terms of manufacturability, usability, and environmental concerns related to electronic waste. To mitigate these issues, we propose a smart textile interface and its wireless sensing system to eliminate the need for ICs, batteries, and connectors embedded in textiles. Our technique is based on the integration of multi-resonant circuits in smart textile interfaces, utilizing near-field electromagnetic coupling between two coils to facilitate wireless power transfer and data acquisition from the smart textile interface. A key aspect of our system is the development of a mathematical model that accurately represents the equivalent circuit of the sensing system. Using this model, we developed a novel algorithm to accurately estimate sensor signals based on changes in system impedance. Through simulation-based experiments and a user study, we demonstrate that our technique effectively supports multiple textile sensors of various types.
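As general background (a textbook relation, not the paper's specific equivalent-circuit model or algorithm), each sensor in a multi-resonant design behaves like an LC tank whose resonant frequency shifts as the textile sensor changes its inductance or capacitance, which is what the reader coil can detect through impedance changes. A minimal sketch:

```python
import math

def resonant_frequency(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC tank: f0 = 1 / (2*pi*sqrt(L*C)).

    A deformation of the textile sensor that alters L or C shifts f0,
    which a coupled reader coil observes as an impedance change.
    """
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Example (hypothetical values): a 10 uH coil with a 100 pF capacitor
f0 = resonant_frequency(10e-6, 100e-12)
print(f"{f0 / 1e6:.2f} MHz")  # → 5.03 MHz
```

Assigning each textile sensor a distinct resonant frequency is what lets a single reader coil distinguish multiple sensors over one wireless link.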
4
Trusting Tracking: Perceptions of Non-Verbal Communication Tracking in Videoconferencing
Carlota Vazquez Gonzalez (King's College London, London, United Kingdom)Timothy Neate (King's College London, London, United Kingdom)Rita Borgo (Kings College London, London, England, United Kingdom)
Videoconferencing is integral to modern work and living. Recently, technologists have sought to leverage data captured -- e.g. from cameras and microphones -- to augment communication. This might mean capturing communication information about verbal (e.g. speech, chat messages), or non-verbal exchanges (e.g. body language, gestures, tone of voice) and using this to mediate -- and potentially improve -- communication. However, such tracking has implications for user experience and raises wider concerns (e.g. privacy). To design tools which account for user needs and preferences, this study investigates perspectives on communication tracking through a global survey and interviews, exploring how daily behaviours and the impact of specific features influence user perspectives. We examine user preferences on non-verbal communication tracking, preferred methods of how this information is conveyed and to whom this should be communicated. Our findings aim to guide the development of non-verbal communication tools which augment videoconferencing that prioritise user needs.
3
ViFeed: Promoting Slow Eating and Food Awareness through Strategic Video Manipulation during Screen-Based Dining
Yang Chen (National University of Singapore, Singapore, Singapore)Felicia Fang-Yi Tan (National University of Singapore, Singapore, Singapore)Zhuoyu Wang (National University of Singapore, Singapore, Singapore)Xing Liu (Hangzhou Holographic Intelligence Institute, Hangzhou, China)Jiayi Zhang (National University of Singapore, Singapore, Singapore)Yun Huang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Shengdong Zhao (City University of Hong Kong, Hong Kong, China)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)
Given the widespread presence of screens during meals, we challenge the notion that digital engagement is inherently incompatible with mindful eating. We demonstrate how the strategic design of digital content can enhance two core aspects of mindful eating: slow eating and food awareness. Our research unfolded in three sequential studies: (1) Zoom Eating Study: Contrary to the assumption that video-watching leads to distraction and overeating, this study revealed that subtle video speed manipulations can promote slower eating (by 15.31%) and controlled food intake (by 9.65%) while maintaining meal satiation and satisfaction. (2) Co-design workshop: Informed the development of ViFeed, a video playback system strategically incorporating subtle speed adjustments and glanceable visual cues. (3) Field Study: A week-long deployment of ViFeed in daily eating demonstrated its efficacy in fostering food awareness, food appreciation, and sustained engagement. By bridging the gap between ideal mindfulness practices and screen-based behaviors, this work offers insights for designing digital-wellbeing interventions that align with, rather than against, existing habits.
3
"A Tool for Freedom": Co-Designing Mobility Aid Improvements Using Personal Fabrication and Physical Interface Modules with Primarily Young Adults
Jerry Cao (University of Washington, Seattle, Washington, United States)Krish Jain (University of Washington, Seattle, Washington, United States)Julie Zhang (University of Washington, Seattle, Washington, United States)Yuecheng Peng (University of Washington, Seattle, Washington, United States)Shwetak Patel (University of Washington, Seattle, Washington, United States)Jennifer Mankoff (University of Washington, Seattle, Washington, United States)
Mobility aids (e.g., canes, crutches, and wheelchairs) are crucial for people with mobility disabilities; however, pervasive dissatisfaction with these aids keeps usage rates low. Through semi-structured interviews with 17 mobility aid users, mostly under the age of 30, we identified specific sources of dissatisfaction among younger users of mobility aids, uncovered community-based solutions for these dissatisfactions, and explored ways these younger users wanted to improve mobility aids. We found that users sought customizable, reconfigurable, multifunctional, and more aesthetically pleasing mobility aids. Participants' feedback guided our prototyping of tools/accessories, such as laser cut decorative skins, hot-swappable physical interface modules, and modular canes with custom 3D-printed handles. These prototypes were then the focus of additional co-design sessions where six returning participants offered suggestions for improvements and provided feedback on their usefulness and usability. Our findings highlight that many mobility aid users have the desire, ability, and need to customize and improve their aids in different ways compared to older adults. We propose various solutions and design guidelines to facilitate the modifications of mobility aids.
3
From Alien to Ally: Exploring Non-Verbal Communication with Non-Anthropomorphic Avatars in a Collaborative Escape-Room
Federico Espositi (Politecnico di Milano, Milan, Italy)Maurizio Vetere (Politecnico di Milano, Milan, Italy)Andrea Bonarini (Politecnico di Milano, Milan, Italy)
Despite the spread of technologies in the physical world and the normalization of virtual experiences, non-verbal communication with radically non-anthropomorphic avatars remains an underexplored frontier. We present an interaction system in which two participants must learn to communicate with each other non-verbally through a digital filter that morphs their appearance. In a collaborative escape room, the Visitor must teach a non-anthropomorphic physical robot to play, while the Controller, in a different location, embodies the robot with an altered perception of the environment and the Visitor’s companion in VR. This study addresses the design of the activity, the robot, and the virtual environment, with a focus on how the Visitor’s morphology is translated in VR. Results show that participants were able to develop emergent and effective communication strategies, with the Controller naturally embodying its avatar’s narrative, making this system a promising testbed for future research on human-technology interaction, entertainment, and embodiment.
3
corobos: A Design for Mobile Robots Enabling Cooperative Transitions between Table and Wall Surfaces
Changyo Han (The University of Tokyo, Tokyo, Japan)Yosuke Nakagawa (The University of Tokyo, Tokyo, Japan)Takeshi Naemura (The University of Tokyo, Bunkyo-ku, Tokyo, Japan)
Swarm User Interfaces allow dynamic arrangement of user environments through the use of multiple mobile robots, but their operational range is typically confined to a single plane due to constraints imposed by their two-wheel propulsion systems. We present corobos, a proof-of-concept design that enables these robots to cooperatively transition between table (horizontal) and wall (vertical) surfaces seamlessly, without human intervention. Each robot is equipped with a uniquely designed slope structure that facilitates smooth rotation when another robot pushes it toward a target surface. Notably, this design relies solely on passive mechanical elements, eliminating the need for additional active electrical components. We investigated the design parameters of this structure and evaluated its transition success rate through experiments. Furthermore, we demonstrate various application examples to showcase the potential of corobos in enhancing user environments.
3
NightLight: Passively Mapping Nighttime Sidewalk Light Data for Improved Pedestrian Routing
Joseph Breda (University of Washington, Seattle, Washington, United States)Daniel Campos Zamora (University of Washington, Seattle, Washington, United States)Shwetak Patel (University of Washington, Seattle, Washington, United States)Jon E. Froehlich (University of Washington, Seattle, Washington, United States)
Nighttime sidewalk illumination has a significant and unequal influence on where and whether pedestrians walk at night. Despite the importance of pedestrian lighting, there is currently no approach for measuring and communicating how humans experience nighttime sidewalk light levels at scale. We introduce NightLight, a new sensing approach that leverages the ubiquity of smartphones by re-appropriating the built-in light sensor -- traditionally used to adapt screen brightness -- to sense pedestrian nighttime lighting conditions. We validated our technique through in-lab and street-based evaluations characterizing performance across phone orientation, phone model, and varying light levels, demonstrating the ability to aggregate and map pedestrian-oriented light levels with unaltered smartphones. Additionally, to examine the impact of light level data on pedestrian route choice, we conducted a qualitative user study with 13 participants using a standard map vs. one with pedestrian lighting data from NightLight. Our findings demonstrate that people changed their routes in preference of well-lit routes during nighttime walking. Our work has implications for personalized navigation, pedestrian route choice, and passive urban sensing.
3
FingerGlass: Enhancing Smart Glasses Interaction via Fingerprint Sensing
Zhanwei Xu (Tsinghua University, Beijing, China)Haoxiang Pei (Tsinghua University, Beijing, China)Jianjiang Feng (Tsinghua University, Beijing, China)Jie Zhou (Department of Automation, BNRist, Tsinghua University, Beijing, China)
Smart glasses hold immense potential, but existing input methods often hinder their seamless integration into everyday life. Touchpads integrated into the smart glasses suffer from limited input space and precision; voice commands raise privacy concerns and are contextually constrained; vision-based or IMU-based gesture recognition faces challenges in computational cost or privacy concerns. We present FingerGlass, an interaction technique for smart glasses that leverages side-mounted fingerprint sensors to capture fingerprint images. With a combined CNN and LSTM network, FingerGlass identifies finger identity and recognizes four types of gestures (nine in total): sliding, rolling, rotating, and tapping. These gestures, coupled with finger identification, are mapped to common smart glasses commands, enabling comprehensive and fluid text entry and application control. A user study reveals that FingerGlass represents a promising step towards a fresh, discreet, ergonomic, and efficient input interaction with smart glasses, potentially contributing to their wider adoption and integration into daily life.
3
Peek into the `White-Box': A Field Study on Bystander Engagement with Urban Robot Uncertainty
Xinyan Yu (School of Architecture, Design and Planning, The University of Sydney, Sydney, NSW, Australia)Marius Hoggenmüller (School of Architecture, Design and Planning, The University of Sydney, Sydney, NSW, Australia)Tram Thi Minh Tran (School of Architecture, Design and Planning, The University of Sydney, Sydney, NSW, Australia)Yiyuan Wang (The University of Sydney, Sydney, Australia)Qiuming Zhang (The University of Sydney, Sydney, NSW, Australia)Martin Tomitsch (University of Technology Sydney, Sydney, NSW, Australia)
Uncertainty inherently exists in the autonomous decision-making process of robots. Involving humans in resolving this uncertainty not only helps robots mitigate it but is also crucial for improving human-robot interactions. However, in public urban spaces filled with unpredictability, robots often face heightened uncertainty without direct human collaborators. This study investigates how robots can engage bystanders for assistance in public spaces when encountering uncertainty and examines how these interactions impact bystanders' perceptions and attitudes towards robots. We designed and tested a speculative `peephole' concept that engages bystanders in resolving urban robot uncertainty. Our design is guided by considerations of non-intrusiveness and eliciting initiative in an implicit manner, considering bystanders' unique role as non-obligated participants in relation to urban robots. Drawing from field study findings, we highlight the potential of involving bystanders to mitigate urban robots' technological imperfections to both address operational challenges and foster public acceptance of urban robots. Furthermore, we offer design implications to encourage bystanders' involvement in mitigating the imperfections.
3
TH-Wood: Developing Thermo-Hygro-Coordinating Driven Wood Actuators to Enhance Human-Nature Interaction
Guanyun Wang (Zhejiang University, Hangzhou, China)Chuang Chen (Zhejiang University, HangZhou, China)Xiao Jin (Imperial College London, London, United Kingdom)Yulu Chen (University College London, London, United Kingdom)Yangweizhe Zheng (Northeast Forestry University, Harbin, China)Qianzi Zhen (Zhejiang University, HangZhou, China)Yang Zhang (Imperial College London, London, United Kingdom)Jiaji Li (MIT, Cambridge, Massachusetts, United States)Yue Yang (Zhejiang University, Hangzhou, China)Ye Tao (Hangzhou City University, Hangzhou, China)Shijian Luo (Zhejiang University, Hangzhou, Zhejiang, China)Lingyun Sun (Zhejiang University, Hangzhou, China)
Wood has become increasingly applied in shape-changing interfaces for its eco-friendly and smart responsive properties, while its applications face challenges as it remains primarily driven by humidity. We propose TH-Wood, a biodegradable actuator system composed of wood veneer and microbial polymers, driven by both temperature and humidity, and capable of functioning in complex outdoor environments. This dual-factor-driven approach enhances the sensing and response channels, allowing for more sophisticated coordinating control methods. To assist in designing and utilizing the system more effectively, we developed a structure library inspired by dynamic plant forms, conducted extensive technical evaluations, created an educational platform accessible to users, and provided a design tool for deformation adjustments and behavior previews. Finally, several ecological applications demonstrate the potential of TH-Wood to significantly enhance human interaction with natural environments and expand the boundaries of human-nature relationships.
3
TactStyle: Generating Tactile Textures with Generative AI for Digital Fabrication
Faraz Faruqi (MIT CSAIL, Cambridge, Massachusetts, United States)Maxine Perroni-Scharf (MIT, Cambridge, Massachusetts, United States)Jaskaran Singh Walia (Vellore Institute of Technology, Chennai, India)Yunyi Zhu (MIT CSAIL, Cambridge, Massachusetts, United States)Shuyue Feng (Zhejiang University, Hangzhou, China)Donald Degraen (University of Canterbury, Christchurch, New Zealand)Stefanie Mueller (MIT CSAIL, Cambridge, Massachusetts, United States)
Recent work in Generative AI enables the stylization of 3D models based on image prompts. However, these methods do not incorporate tactile information, leading to designs that lack the expected tactile properties. We present TactStyle, a system that allows creators to stylize 3D models with images while incorporating the expected tactile properties. TactStyle accomplishes this using a modified image-generation model fine-tuned to generate heightfields for given surface textures. By optimizing 3D model surfaces to embody a generated texture, TactStyle creates models that match the desired style and replicate the tactile experience. We utilize a large-scale dataset of textures to train our texture generation model. In a psychophysical experiment, we evaluate the tactile qualities of a set of 3D-printed original textures and TactStyle's generated textures. Our results show that TactStyle successfully generates a wide range of tactile features from a single image input, enabling a novel approach to haptic design.
3
Curves Ahead: Enhancing the Steering Law for Complex Curved Trajectories
Jennie J.Y. Chen (University of British Columbia, Vancouver, British Columbia, Canada)Sidney S. Fels (University of British Columbia, Vancouver, British Columbia, Canada)
The Steering Law has long been a fundamental model in predicting movement time for tasks involving navigating through constrained paths, such as in selecting sub-menu options, particularly for straight and circular arc trajectories. However, this does not reflect the complexities of real-world tasks where curvatures can vary arbitrarily, limiting its applications. This study aims to address this gap by introducing the total curvature parameter K into the equation to account for the overall curviness characteristic of a path. To validate this extension, we conducted a mouse-steering experiment on fixed-width paths with varying lengths and curviness levels. Our results demonstrate that the introduction of K significantly improves model fitness for movement time prediction over traditional models. These findings advance our understanding of movement in complex environments and support potential applications in fields like speech motor control and virtual navigation.
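For context, the classical Steering Law that this work extends predicts movement time from path length and tunnel width. The sketch below shows only the standard constant-width form with hypothetical fitted constants; the paper's curvature-extended model with the parameter K is not reproduced here:

```python
def steering_time(a: float, b: float, path_length: float, width: float) -> float:
    """Classical Steering Law for a constant-width tunnel:
    MT = a + b * (A / W), where A is path length, W is tunnel width,
    and a, b are empirically fitted constants (units: seconds)."""
    return a + b * (path_length / width)

# Example with hypothetical constants: a 300 px path in a 30 px tunnel,
# fitted a = 0.1 s, b = 0.2 s
print(steering_time(0.1, 0.2, 300, 30))  # → 2.1
```

The general law replaces A/W with an integral of 1/W(s) along the path; the study's contribution is showing that adding a total-curvature term improves fit on arbitrarily curved paths.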
3
Encounter with the Giants: Understanding Interaction with Large-scale Inflatable Soft Robots
Bijetri Biswas (University of Bristol, Bristol, United Kingdom)Emma Powell (Airgiants, Bristol, United Kingdom)Robert Nixdorf (Airgiants, Bristol, United Kingdom)Richard Sewell (Airgiants, Bristol, United Kingdom)Anne Roudaut (University of Bristol, Bristol, United Kingdom)
Soft robots, constructed from compliant materials, offer unique flexibility and adaptability. However, most research has focused on small-scale interactions, leaving the potential of large-scale soft robots largely unexplored. This research explores how humans engage with inflatable soft robots that are large in size and created for fun and artistic expression. We conducted 22 hours of video analysis (N=30) and thematic interviews (N=20) to understand user engagement and explore their motivations. Our findings revealed a range of interactions, from delicate touches to immersive full-body engagement, driven by trust, safety, and emotional connection. Participants frequently compared the robots to peaceful creatures like plants and sea animals, fostering playful and therapeutic interactions. These insights highlight the potential of giant soft robots in enhancing emotional well-being, therapeutic applications, and immersive experiences. This paper aims to inspire future designs that leverage the unique attributes of large-scale soft robots for trust-centered, interactive human-robot relationships.
3
Since U Been Gone: Augmenting Context-Aware Transcriptions for Re-Engaging in Immersive VR Meetings
Geonsun Lee (University of Maryland, College Park, Maryland, United States)Yue Yang (Stanford University, Stanford, California, United States)Jennifer Healey (Adobe Research, San Jose, California, United States)Dinesh Manocha (University of Maryland, College Park, Maryland, United States)
Maintaining engagement in immersive meetings is challenging, particularly when users must catch up on missed content after disruptions. While transcription interfaces can help, table-fixed panels have the potential to distract users from the group, diminishing social presence, while avatar-fixed captions fail to provide past context. We present EngageSync, a context-aware avatar-fixed transcription interface that adapts based on user engagement, offering live transcriptions and LLM-generated summaries to enhance catching up while preserving social presence. We implemented a live VR meeting setup for a 12-participant formative study and elicited design considerations. In two user studies with small (3 avatars) and mid-sized (7 avatars) groups, EngageSync significantly improved social presence (𝑝 < .05) and time spent gazing at others in the group instead of the interface over table-fixed panels. Also, it reduced re-engagement time and increased information recall (𝑝 < .05) over avatar-fixed interfaces, with stronger effects in mid-sized groups (𝑝 < .01).
3
FlexEar-Tips: Shape-Adjustable Ear Tips Using Pressure Control
Takashi Amesaka (Keio University, Yokohama, Japan)Takumi Yamamoto (Keio University, Yokohama, Japan)Hiroki Watanabe (Future University Hakodate, Hakodate, Japan)Buntarou Shizuki (University of Tsukuba, Tsukuba, Ibaraki, Japan)Yuta Sugiura (Keio University, Yokohama, Japan)
We introduce FlexEar-Tips, a dynamic ear tip system designed for the next-generation hearables. The ear tips are controlled by an air pump and solenoid valves, enabling size adjustments for comfort and functionality. FlexEar-Tips includes an air pressure sensor to monitor ear tip size, allowing it to adapt to environmental conditions and user needs. In the evaluation, we conducted a preliminary investigation of the size control accuracy and the minimum amount of variability of haptic perception in the user's ear. We then evaluated the user's ability to identify patterns in the haptic notification system, the impact on the music listening experience, the relationship between the size of the ear tips and the sound localization ability, and the impact on the reduction of humidity in the ear using a model. We proposed new interaction modalities for adaptive hearables and discussed health monitoring, immersive auditory experiences, haptics notifications, biofeedback, and sensing.
3
What Lies Beneath? Exploring the Impact of Underlying AI Model Updates in AI-Infused Systems
Vikram Mohanty (Bosch Research North America, Sunnyvale, California, United States)Jude Lim (Independent Researcher, Arlington, Virginia, United States)Kurt Luther (Virginia Tech, Arlington, Virginia, United States)
AI models are constantly evolving, with new versions released frequently. Human-AI interaction guidelines encourage notifying users about changes in model capabilities, ideally supported by thorough benchmarking. However, as AI systems integrate into domain-specific workflows, exhaustive benchmarking can become impractical, often resulting in silent or minimally communicated updates. This raises critical questions: Can users notice these updates? What cues do they rely on to distinguish between models? How do such changes affect their behavior and task performance? We address these questions through two studies in the context of facial recognition for historical photo identification: an online experiment examining users’ ability to detect model updates, followed by a diary study exploring perceptions in a real-world deployment. Our findings highlight challenges in noticing AI model updates, their impact on downstream user behavior and performance, and how they lead users to develop divergent folk theories. Drawing on these insights, we discuss strategies for effectively communicating model updates in AI-infused systems.
3
Being in Virtual Worlds: How Interaction, Environment, and Touch Shape Embodiment in Immersive Experiences
John Desnoyers-Stewart (Simon Fraser University, Vancouver, British Columbia, Canada)Alissa N. Antle (Simon Fraser University, Vancouver, British Columbia, Canada)Bernhard E. Riecke (Simon Fraser University, Vancouver, British Columbia, Canada)
Embodiment is an everyday experience that typically goes unnoticed. While we often take it for granted, with the adoption of virtual reality (VR) technology, embodiment in virtual bodies and worlds has become an important consideration for designers of immersive experiences. To date, the VR design community has primarily considered embodiment in terms of body ownership over a synchronized visual representation. In this paper, we construct an interactional framework of virtual embodiment, beginning by revisiting what it really means to be “embodied.” Our framework reconnects embodiment and presence in virtual environments founded in Dourish's concept of embodied interaction and Heidegger's Dasein or “being-in-the-world.” We discuss how embodiment, fundamentally rooted in past and present interactions, changes our understanding of body ownership and its extension into VR. Integrating theories from VR research, philosophy, HCI, and psychology, we uncover the complex interplay of interaction, environment, and touch in shaping embodied experiences. We present a novel framework for understanding embodiment in VR rooted in interaction, enabling designers to create more immersive and meaningful virtual worlds.
3
User-defined Co-speech Gesture Design with Swarm Robots
Minh Duc Dang (Simon Fraser University, Burnaby, British Columbia, Canada)Samira Pulatova (Simon Fraser University , Burnaby, British Columbia, Canada)Lawrence H. Kim (Simon Fraser University, Burnaby, British Columbia, Canada)
Non-verbal signals, including co-speech gestures, play a vital role in human communication by conveying nuanced meanings beyond verbal discourse. While researchers have explored co-speech gestures in human-like conversational agents, limited attention has been given to non-humanoid alternatives. In this paper, we propose using swarm robotic systems as conversational agents and introduce a foundational set of swarm-based co-speech gestures, elicited from non-technical users and validated through an online study. This work outlines the key software and hardware requirements to advance research in co-speech gesture generation with swarm robots, contributing to the future development of social robotics and conversational agents.
3
PCB Renewal: Iterative Reuse of PCB Substrates for Sustainable Electronic Making
Zeyu Yan (University Of Maryland, College Park, Maryland, United States)Advait Vartak (University of Maryland, College Park, Maryland, United States)Jiasheng Li (University of Maryland, College Park, Maryland, United States)Zining Zhang (University of Maryland, College Park, Maryland, United States)Huaishu Peng (University of Maryland, College Park, Maryland, United States)
PCB (printed circuit board) substrates are often single-use, leading to material waste in electronics making. We introduce PCB Renewal, a novel technique that "erases" and "reconfigures" PCB traces by selectively depositing conductive epoxy onto outdated areas, transforming isolated paths into conductive planes that support new traces. We present the PCB Renewal workflow, evaluate its electrical performance and mechanical durability, and model its sustainability impact, including material usage, cost, energy consumption, and time savings. We develop a software plug-in that guides epoxy deposition, generates updated PCB profiles, and calculates resource usage. To demonstrate PCB Renewal’s effectiveness and versatility, we repurpose a single PCB across four design iterations spanning three projects: a camera roller, a WiFi radio, and an ESPboy game console. We also show how an outsourced double-layer PCB can be reconfigured, transforming it from an LED watch to an interactive cat toy. The paper concludes with limitations and future directions.
3
Into the Unknown: Leveraging Conversational AI in Supporting Young Migrants' Journeys Towards Cultural Adaptation
Sunok Lee (Aalto University, Espoo, Finland)Dasom Choi (KAIST, Daejeon, Korea, Republic of)Lucy Truong (Aalto University, Espoo, Finland)Nitin Sawhney (Aalto University, Espoo, Finland)Henna Paakki (Aalto University, Espoo, Finland)
Accelerated globalization has made migration commonplace, creating significant cultural adaptation challenges, particularly for young migrants. While HCI research has explored the role of technology in migrants' cultural adaptation, there is a need to address the diverse cultural backgrounds and needs of young migrants specifically. Recognizing the potential of conversational AI to adapt to diverse cultural contexts, we investigate how young migrants could use this technology in their adaptation journey and explore its societal implementation. Through individual workshops with young migrants and stakeholder interviews—including AI practitioners, public sector workers, policy experts, and social scientists—we found that both groups of participants expect conversational AI to support young migrants in connecting with the host culture before migration, exploring the home culture, and aligning identities across home and host cultures. However, challenges such as expectation gaps and cultural bias may hinder cultural adaptation. We discuss design considerations for culturally sensitive AI that empowers young migrants and propose strategies to enhance societal readiness for AI-driven cultural adaptation.
3
Slip Casting as a Machine for Making Textured Ceramic Interfaces
Bo Han (National University of Singapore, Singapore, Singapore)Jared Lim (National University of Singapore, Singapore, Singapore)Kianne Lim (National University of Singapore, Singapore, Singapore)Adam Choo (National University of Singapore, Singapore, Singapore)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)Genevieve Ang (Independent Artist, Singapore, Singapore)Clement Zheng (National University of Singapore, Singapore, Singapore)
Ceramics provide a rich domain for exploring craft, fabrication, and diverse material textures that enhance tangible interaction. In this work, we explored slip-casting, a traditional ceramic technique where liquid clay is poured into a porous plaster mold that absorbs water from the slip to form a clay body. We adapted this process into an approach we called Resist Slip-Casting. By selectively masking the mold’s surface with stickers to vary its water absorption rate, our approach enables makers to create ceramic objects with intricate textured surfaces, while also allowing the customization of a single mold for different outcomes. In this paper, we detail the resist slip-casting process and demonstrate its application by crafting a range of tangible interfaces with customizable visual symbols, tactile features, and decorative elements. We further discuss our approach within the broader conversation in HCI on fabrication machines that promote creative collaboration between humans, materials, and tools.
3
Creating Furniture-Scale Deployable Objects with a Computer-Controlled Sewing Machine
Sapna Tayal (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Lea Albaugh (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)James McCann (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Scott E. Hudson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
We introduce a novel method for fabricating functional flat-to-shape objects using a large computer-controlled sewing machine (11 ft / 3.4 m wide), a process that is both rapid and scalable beyond the machine's sewable area. Flat-to-shape deployable objects allow for quick, need-based activation, but the selective flexibility they require can involve complex fabrication or tedious assembly. In our method, we sandwich rigid form-defining materials, such as plywood and acrylic, between layers of fabric. The sewing process secures these layers together, creating soft hinges between the rigid inserts that allow the object to transition smoothly into its three-dimensional functional form with little post-processing.
3
Sonic Delights: Exploring the Design of Food as An Auditory-Gustatory Interface
Jialin Deng (Department of Human-Centred Computing, Monash University, Melbourne, Victoria, Australia)Yinyi Li (Monash University, Melbourne, Victoria, Australia)Hongyue Wang (Monash University, Melbourne, Victoria, Australia)Ziqi Fang (Imperial College London, London, United Kingdom)Florian ‘Floyd’ Mueller (Monash University, Melbourne, Victoria, Australia)
While interest in blending sound with culinary experiences has grown in Human-Food Interaction (HFI), the significance of food’s material properties in shaping sound-related interactions has largely been overlooked. This paper explores the opportunity to enrich the HFI experience by treating food not merely as passive nourishment but as an integral material in a computational architecture with input/output capabilities. We introduce “Sonic Delights,” where food is a comestible auditory-gustatory interface that enables users to interact with and consume digital sound. This concept redefines food as a conduit for interactive auditory engagement, shedding light on the untapped multisensory possibilities of merging taste with digital sound. An associated study allowed us to articulate design insights for forthcoming HFI endeavors that seek to weave food into multisensory design, aiming to further the integration of digital interactivity with the culinary arts.
3
"Grab the Chat and Stick It to My Wall": Understanding How Social VR Streamers Bridge Immersive VR Experiences with Streaming Audiences Outside VR
Yang Hu (Clemson University, Clemson, South Carolina, United States)Guo Freeman (Clemson University, Clemson, South Carolina, United States)Ruchi Panchanadikar (Clemson University, Clemson, South Carolina, United States)
Social VR platforms are increasingly transforming online social spaces by enhancing embodied and immersive social interactions within VR. However, how social VR users also share their activities outside the social VR platform, such as on 2D live streaming platforms, is an increasingly popular yet understudied phenomenon that bridges social VR and live streaming research. Through 17 interviews with experienced social VR streamers, we unpack the innovative strategies social VR streamers use to further blur the boundary between VR and non-VR spaces to engage their audiences, as well as the potential limitations of these strategies. We add new insights into how social VR streamers transcend traditional 2D streamer-audience engagement, which also extend our current understanding of cross-reality interactions. Grounded in these insights, we propose design implications to better support more complicated cross-reality dynamics in social VR streaming while mitigating potential tensions, in hopes of achieving more inclusive, engaging, and secure cross-reality environments in the future.
3
Cross, Dwell, or Pinch: Designing and Evaluating Around-Device Selection Methods for Unmodified Smartwatches
Jiwan Kim (KAIST, Daejeon, Korea, Republic of)Jiwan Son (KAIST, Daejeon, Korea, Republic of)Ian Oakley (KAIST, Daejeon, Korea, Republic of)
Smartwatches offer powerful features, but their small touchscreens limit the expressiveness of the input that can be achieved. To address this issue, we present, and open-source, the first sonar-based around-device input system for an unmodified consumer smartwatch. We achieve this using a fine-grained, one-dimensional sonar-based finger-tracking system. In addition, we use this system to investigate the fundamental issue of how to trigger selections during around-device smartwatch input through two studies. The first examines the methods of double-crossing, dwell, and finger tap in a binary task, while the second considers a subset of these designs in a multi-target task, in the presence and absence of haptic feedback. Results showed double-crossing was optimal for binary tasks, while dwell excelled in multi-target scenarios; haptic feedback enhanced comfort but not performance. These findings offer design insights for future around-device smartwatch interfaces that can be deployed directly on today’s consumer hardware.