Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

9
Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
Pat Pataranutaporn (Massachusetts Institute of Technology, Boston, Massachusetts, United States), Chayapatr Archiwaranguprok (University of the Thai Chamber of Commerce, Bangkok, Thailand), Samantha W. T. Chan (MIT Media Lab, Cambridge, Massachusetts, United States), Elizabeth Loftus (UC Irvine, Irvine, California, United States), Pattie Maes (MIT Media Lab, Cambridge, Massachusetts, United States)
AI is increasingly used to enhance images and videos, both intentionally and unintentionally. As AI editing tools become more integrated into smartphones, users can modify or animate photos into realistic videos. This study examines the impact of AI-altered visuals on false memories—recollections of events that didn’t occur or deviate from reality. In a pre-registered study, 200 participants were divided into four conditions of 50 each. Participants viewed original images, completed a filler task, then saw stimuli corresponding to their assigned condition: unedited images, AI-edited images, AI-generated videos, or AI-generated videos of AI-edited images. AI-edited visuals significantly increased false recollections, with AI-generated videos of AI-edited images having the strongest effect (2.05x compared to control). Confidence in false memories was also highest for this condition (1.19x compared to control). We discuss potential applications in HCI, such as therapeutic memory reframing, and challenges in ethical, legal, political, and societal domains.
6
Sprayable Sound: Exploring the Experiential and Design Potential of Physically Spraying Sound Interaction
Jongik Jeon (KAIST, Daejeon, Korea, Republic of), Chang Hee Lee (KAIST (Korea Advanced Institute of Science and Technology), Daejeon, Korea, Republic of)
Perfume and fragrance have captivated people for centuries across different cultures. Inspired by the ephemeral nature of sprayable olfactory interactions and experiences, we explore the potential of applying a similar interaction principle to the auditory modality. In this paper, we present SoundMist, a sonic interaction method that enables users to generate ephemeral auditory presences by physically dispersing a liquid into the air, much like the fading phenomenon of fragrance. We conducted a study to understand the experiential factors inherent in sprayable sound interaction and held an ideation workshop to identify potential design spaces or opportunities that this interaction could shape. Our findings, derived from thematic analysis, suggest that physically sprayable sound interaction can induce experiences related to four key factors—materiality of sound produced by dispersed liquid particles, different sounds entangled with each liquid, illusive perception of temporally floating sound, and enjoyment derived from blending different sounds—and can be applied to artistic practices, safety indications, multisensory approaches, and emotional interfaces.
6
"It Brought the Model to Life": Exploring the Embodiment of Multimodal I3Ms for People who are Blind or have Low Vision
Samuel Reinders (Monash University, Melbourne, Australia), Matthew Butler (Monash University, Melbourne, Australia), Kim Marriott (Monash University, Melbourne, Australia)
3D-printed models are increasingly used to provide people who are blind or have low vision (BLV) with access to maps, educational materials, and museum exhibits. Recent research has explored interactive 3D-printed models (I3Ms) that integrate touch gestures, conversational dialogue, and haptic vibratory feedback to create more engaging interfaces. Prior research with sighted people has found that imbuing machines with human-like behaviours, i.e., embodying them, can make them appear more lifelike, increasing social perception and presence. Such embodiment can increase engagement and trust. This work presents the first exploration into the design of embodied I3Ms and their impact on BLV engagement and trust. In a controlled study with 12 BLV participants, we found that I3Ms using specific embodiment design factors, such as haptic vibratory feedback and embodied personified voices, led to an increased sense of liveliness and embodiment, as well as engagement, but had mixed impact on trust.
5
Beyond Vacuuming: How Can We Exploit Domestic Robots’ Idle Time?
Yoshiaki Shiokawa (University of Bath, Bath, United Kingdom), Winnie Chen (University of Bath, Bath, United Kingdom), Aditya Shekhar Nittala (University of Calgary, Calgary, Alberta, Canada), Jason Alexander (University of Bath, Bath, United Kingdom), Adwait Sharma (University of Bath, Bath, United Kingdom)
We are increasingly adopting domestic robots (e.g., Roomba) that provide relief from mundane household tasks. However, these robots usually spend only a small fraction of their time executing their specific task and remain idle for long periods. They typically possess advanced mobility and sensing capabilities, and therefore have significant potential for applications beyond their designed use. Our work explores this untapped potential of domestic robots in ubiquitous computing, focusing on how they can improve and support modern lifestyles. We conducted two studies: an online survey (n=50) to understand current usage patterns of these robots within homes and an exploratory study (n=12) with HCI and HRI experts. Our thematic analysis revealed 12 key dimensions for developing interactions with domestic robots and outlined over 100 use cases, illustrating how these robots can offer proactive assistance and provide privacy. Finally, we implemented a proof-of-concept prototype to demonstrate the feasibility of reappropriating domestic robots for diverse ubiquitous computing applications.
5
EchoBreath: Continuous Respiratory Behavior Recognition in the Wild via Acoustic Sensing on Smart Glasses
Kaiyi Guo (Shanghai Jiao Tong University, Shanghai, China), Qian Zhang (Shanghai Jiao Tong University, Shanghai, China), Dong Wang (Shanghai Jiao Tong University, Shanghai, China)
Monitoring the occurrence count of abnormal respiratory symptoms provides critical support for respiratory health. Despite this need, there is still no unobtrusive and reliable method that can be used effectively in real-world settings. In this paper, we present EchoBreath, a sensing system that combines passive and active acoustics for monitoring abnormal respiratory symptoms. EchoBreath uses a speaker and microphone mounted under the frame of the glasses to emit ultrasonic waves and capture both passive sounds and echo profiles, which can effectively distinguish subject-aware behaviors from background noise. Furthermore, a lightweight neural network with a 'Null' class and open-set filtering mechanisms substantially improves real-world applicability by filtering out unrelated activities. Our experiments, involving 25 participants, demonstrate that EchoBreath can recognize 6 typical respiratory symptoms in a laboratory setting with an accuracy of 93.1%. Additionally, a semi-in-the-wild study with 10 participants further validates that EchoBreath can continuously monitor respiratory abnormalities under real-world conditions. We believe that EchoBreath can serve as an unobtrusive and reliable way to monitor abnormal respiratory symptoms.
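The abstract does not spell out the open-set mechanism, but the combination of a 'Null' class with confidence-based rejection is easy to sketch. Below is a minimal, hypothetical illustration; the class labels and the 0.8 threshold are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical label set: the paper's six symptom classes are not enumerated
# in the abstract, so these names are illustrative only.
CLASSES = ["cough", "sniffle", "throat-clear", "sneeze", "wheeze", "deep-breath", "null"]

def classify_with_open_set_filter(probs: np.ndarray, threshold: float = 0.8) -> str:
    """Accept a window only if one known class is confidently predicted.

    probs: softmax output over CLASSES for one audio window. Returns the
    predicted label, or "null" when the model either predicts the background
    class or is not confident enough (open-set filtering).
    """
    best = int(np.argmax(probs))
    if CLASSES[best] == "null" or probs[best] < threshold:
        return "null"  # treat as unrelated activity; do not count it
    return CLASSES[best]

print(classify_with_open_set_filter(np.array([0.90, 0.02, 0.02, 0.02, 0.02, 0.01, 0.01])))  # cough
print(classify_with_open_set_filter(np.array([0.40, 0.30, 0.10, 0.10, 0.05, 0.03, 0.02])))  # null
```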
5
What Comes After Noticing?: Reflections on Noticing Solar Energy and What Came Next
Angella Mackey (Amsterdam University of Applied Sciences, Amsterdam, Netherlands), David NG. McCallum (Rotterdam University of Applied Science, Rotterdam, Netherlands), Oscar Tomico (Eindhoven University of Technology, Eindhoven, Netherlands), Martijn de Waal (Amsterdam University of Applied Sciences, Amsterdam, Netherlands)
Many design researchers have been exploring what it means to take a more-than-human design approach in their practice. In particular, the technique of “noticing” has been explored as a way of intentionally opening a designer’s awareness to more-than-human worlds. In this paper we present autoethnographic accounts of our own efforts to notice solar energy. Through two studies we reflect on the transformative potential of noticing the more-than-human, and the difficulties in trying to sustain this change in oneself and one’s practice. We propose that noticing can lead to activating exiled capacities within the noticer, relational abilities that lie dormant in each of us. We also propose that emphasising sense-fullness in and through design can be helpful in the face of broader psychological or societal boundaries that block paths towards more relational ways of living with non-humans.
5
Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation
Jessica He (IBM Research, Yorktown Heights, New York, United States), Stephanie Houde (IBM Research, Cambridge, Massachusetts, United States), Justin D. Weisz (IBM Research AI, Yorktown Heights, New York, United States)
AI systems powered by large language models can act as capable assistants for writing and editing. In these tasks, the AI system acts as a co-creative partner, making novel contributions to an artifact-under-creation alongside its human partner(s). One question that arises in these scenarios is the extent to which AI should be credited for its contributions. We examined knowledge workers' views of attribution through a survey study (N=155) and found that they assigned different levels of credit across different contribution types, amounts, and initiative. Compared to a human partner, we observed a consistent pattern in which AI was assigned less credit for equivalent contributions. Participants felt that disclosing AI involvement was important and used a variety of criteria to make attribution judgments, including the quality of contributions, personal values, and technology considerations. Our results motivate and inform new approaches for crediting AI contributions to co-created work.
5
"It Brought Me Joy": Opportunities for Spatial Browsing in Desktop Screen Readers
Arnavi Chheda-Kothary (University of Washington, Seattle, Washington, United States), Ather Sharif (University of Washington, Seattle, Washington, United States), David Angel Rios (Columbia University, New York, New York, United States), Brian A. Smith (Columbia University, New York, New York, United States)
Blind or low-vision (BLV) screen-reader users have a significantly limited experience interacting with desktop websites compared to non-BLV, i.e., sighted, users. This digital divide is exacerbated by the inability to browse the web spatially—an affordance that leverages spatial reasoning, which sighted users often rely on. In this work, we investigate the value of and opportunities for BLV screen-reader users to browse websites spatially (e.g., understanding page layouts). We additionally explore at-scale website layout understanding as a feature of desktop screen readers. We created a technology probe, WebNExt, to facilitate our investigation. Specifically, we conducted a lab study with eight participants and a five-day field study with four participants to evaluate spatial browsing using WebNExt. Our findings show that participants found spatial browsing intuitive and fulfilling, strengthening their connection to the design of web pages. Furthermore, participants envisioned spatial browsing as a step toward reducing the digital divide.
5
Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives
Meredith Ringel Morris (Google DeepMind, Seattle, Washington, United States), Jed R. Brubaker (University of Colorado Boulder, Boulder, Colorado, United States)
As AI systems quickly improve in both breadth and depth of performance, they lend themselves to creating increasingly powerful and realistic agents, including the possibility of agents modeled on specific people. We anticipate that within our lifetimes it may become common practice for people to create custom AI agents to interact with loved ones and/or the broader world after death; indeed, the past year has seen a boom in startups purporting to offer such services. We call these generative ghosts since such agents will be capable of generating novel content rather than merely parroting content produced by their creator while living. In this paper, we reflect on the history of technologies for AI afterlives, including current early attempts by individual enthusiasts and startup companies to create generative ghosts. We then introduce a novel design space detailing potential implementations of generative ghosts. We use this analytic framework to ground a discussion of the practical and ethical implications of various approaches to designing generative ghosts, including potential positive and negative impacts on individuals and society. Based on these considerations, we lay out a research agenda for the AI and HCI research communities to better understand the risk/benefit landscape of this novel technology to ultimately empower people who wish to create and interact with AI afterlives to do so in a beneficial manner.
5
Toward Affective Empathy via Personalized Analogy Generation: A Case Study on Microaggression
Hyojin Ju (POSTECH, Pohang, Korea, Republic of), Jungeun Lee (POSTECH, Pohang, Korea, Republic of), Seungwon Yang (POSTECH, Pohang-si, Korea, Republic of), Jungseul Ok (POSTECH, Pohang, Korea, Republic of), Inseok Hwang (POSTECH, Pohang, Korea, Republic of)
The importance of empathy cannot be overstated in modern societies, where people of diverse backgrounds increasingly interact. The HCI community has strived to foster affective empathy through immersive technologies. Many previous techniques are built on the premise that presenting the same experience as-is can evoke the same emotion; this premise, however, breaks down where emotional responses differ widely across individuals. In this paper, we present a novel concept of generating a personalized experience based on a large language model (LLM) to facilitate affective empathy between individuals despite their differences. As a case study to showcase its effectiveness, we developed EmoSync, an LLM-based agent that generates personalized analogical microaggression situations, enabling users to personally resonate with a specific microaggression situation experienced by another person. EmoSync was designed and evaluated in a three-phase user study with 100+ participants. We comprehensively discuss implications, limitations, and possible applications.
4
Ego vs. Exo and Active vs. Passive: Investigating the Individual and Combined Effects of Viewpoint and Navigation on Spatial Immersion and Understanding in Immersive Storytelling
Tao Lu (Georgia Institute of Technology, Atlanta, Georgia, United States), Qian Zhu (The Hong Kong University of Science and Technology, Hong Kong, China), Tiffany S. Ma (Georgia Institute of Technology, Atlanta, Georgia, United States), Wong Kam-Kwai (The Hong Kong University of Science and Technology, Hong Kong, China), Anlan Xie (Georgia Institute of Technology, Atlanta, Georgia, United States), Alex Endert (Georgia Institute of Technology, Atlanta, Georgia, United States), Yalong Yang (Georgia Institute of Technology, Atlanta, Georgia, United States)
Visual storytelling combines visuals and narratives to communicate important insights. While web-based visual storytelling is well-established, leveraging the next generation of digital technologies for visual storytelling, specifically immersive technologies, remains underexplored. We investigated the impact of the story viewpoint (from the audience's perspective) and navigation (when progressing through the story) on spatial immersion and understanding. First, we collected web-based 3D stories and elicited design considerations from three VR developers. We then adapted four selected web-based stories to an immersive format. Finally, we conducted a user study (N=24) to examine egocentric and exocentric viewpoints, active and passive navigation, and the combinations they form. Our results indicated significantly higher preferences for egocentric+active (higher agency and engagement) and exocentric+passive (higher focus on content). We also found a marginal significance of viewpoints on story understanding and a strong significance of navigation on spatial immersion.
4
Xstrings: 3D Printing Cable-Driven Mechanism for Actuation, Deformation, and Manipulation
Jiaji Li (MIT, Cambridge, Massachusetts, United States), Shuyue Feng (Zhejiang University, Hangzhou, China), Maxine Perroni-Scharf (MIT, Cambridge, Massachusetts, United States), Yujia Liu (Tsinghua University, Beijing, China), Emily Guan (Pratt Institute, Brooklyn, New York, United States), Guanyun Wang (Zhejiang University, Hangzhou, China), Stefanie Mueller (MIT CSAIL, Cambridge, Massachusetts, United States)
In this paper, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we present a design tool that allows users to embed cable-driven mechanisms into object geometries based on their desired interactions by automatically placing joints and cables inside the object. To assess our system, we investigate the effect of printing parameters on the strength of Xstrings objects and the extent to which the interactions are repeatable without cable breakage. We demonstrate the application potential of Xstrings through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
4
InsightBridge: Enhancing Empathizing with Users through Real-Time Information Synthesis and Visual Communication
Junze Li (The Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Yue Zhang (Shenzhen University, Shenzhen, China), Chengbo Zheng (Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Dingdong Liu (The Hong Kong University of Science and Technology, Hong Kong, China), Zeyu Huang (The Hong Kong University of Science and Technology, New Territories, Hong Kong), Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong)
User-centered design requires researchers to deeply understand target users throughout the design process. However, during early-stage user interviews, researchers may misinterpret users due to time constraints, incorrect assumptions, and communication barriers. To address this challenge, we introduce InsightBridge, a tool that supports real-time, AI-assisted information synthesis and visual verification. InsightBridge automatically organizes relevant information from ongoing interview conversations into an empathy map. It further allows researchers to select elements to generate visual abstracts depicting the selected information, and then review these visuals with users and refine them as needed. We evaluated the effectiveness of InsightBridge through a within-subjects study (N=32) from both the researchers’ and users’ perspectives. Our findings indicate that InsightBridge can assist researchers in note-taking and organization, as well as timely visual checking, thereby enhancing mutual understanding with users. Additionally, discussing the visuals prompts users to recall overlooked details and scenarios, leading to more insightful ideas.
4
FlexiVol: a Volumetric Display with an Elastic Diffuser to Enable Reach-Through Interaction
Elodie Bouzbib (Universidad Pública de Navarra, Pamplona, Spain), Iosune Sarasate Azcona (Universidad Pública de Navarra, Pamplona, Spain), Unai Javier Fernández (Universidad Pública de Navarra, Pamplona, Spain), Ivan Fernández (Universidad Pública de Navarra, Pamplona, Navarra, Spain), Manuel Lopez-Amo (Universidad Pública de Navarra, Pamplona, Spain), Iñigo Ezcurdia (Public University of Navarra, Pamplona, Spain), Asier Marzo (Universidad Pública de Navarra, Pamplona, Navarre, Spain)
Volumetric displays render true 3D graphics without forcing users to wear headsets or glasses. However, the optical diffusers that volumetric displays employ are rigid and thus do not allow for direct interaction. FlexiVol employs elastic diffusers that allow users to reach inside the display volume and interact directly with true 3D content. We explored various diffuser materials in terms of visual and mechanical properties. We correct the distortions of the volumetric graphics projected on elastic oscillating diffusers and propose a design space for FlexiVol, enabling various gestures and actions through direct interaction techniques. A user study suggests that selection, docking and tracing tasks can be performed faster and more precisely using direct interaction when compared to indirect interaction with a 3D mouse. Finally, applications such as a virtual pet or landscape editing highlight the advantages of a volumetric display that supports direct interaction.
4
How Can Interactive Technology Help Us to Experience Joy With(in) the Forest? Towards a Taxonomy of Tech for Joyful Human-Forest Interactions
Ferran Altarriba Bertran (Tampere University, Tampere, Finland), Oğuz 'Oz' Buruk (Tampere University, Tampere, Finland), Jordi Márquez Puig (Universitat de Girona, Salt, Girona, Spain), Juho Hamari (Tampere University, Tampere, Finland)
This paper presents intermediate-level knowledge in the form of a taxonomy that highlights 12 different ways in which interactive tech might support forest-related experiences that are joyful for humans. It can inspire and provide direction for designs that aim to enrich the experiential texture of forests. The taxonomy stemmed from a reflexive analysis of 104 speculative ideas produced during a year-long co-design process, where we co-experienced and creatively engaged a diverse range of forests and forest-related activities with 250+ forest-goers with varied backgrounds and sensitivities. Given the breadth of forests and populations involved, our work foregrounds a rich set of design directions that set an actionable early frame for creating tech that supports joyful human-forest interplays – one that we hope will be extended and consolidated in future research, ours and others'.
4
Exploring Mobile Touch Interaction with Large Language Models
Tim Zindulka (University of Bayreuth, Bayreuth, Germany), Jannek Maximilian Sekowski (University of Bayreuth, Bayreuth, Germany), Florian Lehmann (University of Bayreuth, Bayreuth, Germany), Daniel Buschek (University of Bayreuth, Bayreuth, Germany)
Interacting with Large Language Models (LLMs) for text editing on mobile devices currently requires users to break out of their writing environment and switch to a conversational AI interface. In this paper, we propose to control the LLM via touch gestures performed directly on the text. We first chart a design space that covers fundamental touch input and text transformations. In this space, we then concretely explore two control mappings: spread-to-generate and pinch-to-shorten, with visual feedback loops. We evaluate this concept in a user study (N=14) that compares three feedback designs: no visualisation, text length indicator, and length + word indicator. The results demonstrate that touch-based control of LLMs is both feasible and user-friendly, with the length + word indicator proving most effective for managing text generation. This work lays the foundation for further research into gesture-based interaction with LLMs on touch devices.
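As a rough illustration of the two control mappings described above, the sketch below routes touch gestures to LLM text operations. The prompt wording and the call_llm stub are assumptions for illustration; the paper does not publish its prompts.

```python
# Minimal sketch of gesture-to-LLM mapping; call_llm is a placeholder stub.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with your LLM client of choice.")

def apply_gesture(gesture: str, selected_text: str) -> str:
    """Map a touch gesture on selected text to an LLM text transformation."""
    if gesture == "pinch":       # pinch-to-shorten
        prompt = f"Shorten the following text, preserving its meaning:\n{selected_text}"
    elif gesture == "spread":    # spread-to-generate
        prompt = f"Continue the following text with one more sentence:\n{selected_text}"
    else:
        return selected_text     # unrecognized gesture: leave the text unchanged
    return call_llm(prompt)
```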
4
Attracting Fingers with Waves: Potential Fields Using Active Lateral Forces Enhance Touch Interactions
Zhaochong Cai (Delft University of Technology, Delft, Netherlands), David Abbink (Delft University of Technology, Delft, Netherlands), Michael Wiertlewski (Delft University of Technology, Delft, Netherlands)
Touchscreens and touchpads offer intuitive interfaces but provide limited tactile feedback, usually just mechanical vibrations. These devices lack continuous feedback to guide users’ fingers toward specific directions. Recent innovations in surface haptic devices, however, leverage ultrasonic traveling waves to create active lateral forces on a bare fingertip. This paper investigates the effects and design possibilities of active force feedback in touch interactions by rendering artificial potential fields on a touchpad. Three user studies revealed that: (1) users perceived attractive and repulsive fields as bumps and holes with similar detection thresholds; (2) step-wise force fields improved targeting by 22.9% compared to friction-only methods; and (3) active force fields effectively communicated directional cues to users. Several applications were tested, with user feedback favoring this approach for its enhanced tactile experience, added enjoyment, realism, and ease of use.
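The abstract frames the rendering as artificial potential fields. In the standard formulation the lateral force is the negative gradient of a potential, F = -∇U; the sketch below uses a quadratic attractive well purely as an illustration, since the paper's actual field shapes, units, and gains are not given in the abstract.

```python
import numpy as np

def lateral_force(finger_xy, target_xy, strength=1.0, radius=30.0):
    """Force rendered at the fingertip by a radial attractive potential.

    Uses U = 0.5 * strength * d^2 inside `radius`, so F = -grad U points
    toward the target. All parameter values here are illustrative.
    """
    d = np.asarray(target_xy, float) - np.asarray(finger_xy, float)
    dist = np.linalg.norm(d)
    if dist > radius or dist == 0.0:
        return np.zeros(2)   # outside the field (or at its center): no force
    return strength * d      # attractive: directed toward the target

print(lateral_force((10.0, 0.0), (0.0, 0.0)))  # [-10.  0.]: pulled toward origin
```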
4
Understanding and Supporting Peer Review Using AI-reframed Positive Summary
Chi-Lan Yang (The University of Tokyo, Tokyo, Japan), Alarith Uhde (The University of Tokyo, Tokyo, Japan), Naomi Yamashita (NTT, Keihanna, Japan), Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan)
While peer review enhances writing and research quality, harsh feedback can frustrate and demotivate authors. Hence, it is essential to explore how critiques should be delivered to motivate authors and enable them to keep iterating their work. In this study, we explored the impact of appending an automatically generated positive summary to the peer reviews of a writing task, alongside varying levels of overall evaluations (high vs. low), on authors’ feedback reception, revision outcomes, and motivation to revise. Through a 2x2 online experiment with 137 participants, we found that adding an AI-reframed positive summary to otherwise harsh feedback increased authors’ critique acceptance, whereas low overall evaluations of their work led to increased revision efforts. We discuss the implications of using AI in peer feedback, focusing on how AI-driven critiques can influence critique acceptance and support research communities in fostering productive and friendly peer feedback practices.
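The intervention itself is simple to express: generate a positive summary from the reviews and prepend it, leaving the critiques intact. A hedged sketch with an assumed prompt and a stubbed-out LLM call:

```python
# Hypothetical reframing step; the paper's actual prompt is not published here.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with your LLM client of choice.")

def reframe_reviews(reviews: list[str]) -> str:
    """Prepend an AI-generated positive summary to unchanged critical reviews."""
    joined = "\n\n".join(reviews)
    summary = call_llm(
        "Summarize the genuine strengths of the work described in these peer "
        f"reviews in two or three encouraging sentences:\n\n{joined}"
    )
    return f"{summary}\n\n{joined}"  # positive summary first, critiques intact
```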
4
BIT: Battery-free, IC-less and Wireless Smart Textile Interface and Sensing System
Weiye Xu (Tsinghua University, Beijing, China), Tony Li (Stony Brook University, Stony Brook, New York, United States), Yuntao Wang (Tsinghua University, Beijing, China), Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada), Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
The development of smart textile interfaces is hindered by the inclusion of rigid hardware components and batteries within the fabric, which pose challenges in terms of manufacturability, usability, and environmental concerns related to electronic waste. To mitigate these issues, we propose a smart textile interface and its wireless sensing system, which eliminate the need for ICs, batteries, and connectors embedded in textiles. Our technique builds on the integration of multi-resonant circuits into smart textile interfaces, utilizing near-field electromagnetic coupling between two coils to facilitate wireless power transfer and data acquisition from the smart textile interface. A key aspect of our system is a mathematical model that accurately represents the equivalent circuit of the sensing system. Using this model, we developed a novel algorithm to accurately estimate sensor signals based on changes in system impedance. Through simulation-based experiments and a user study, we demonstrate that our technique effectively supports multiple textile sensors of various types.
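The abstract describes reading multiple battery-free textile sensors through the impedance seen at a reader coil. The paper fits an equivalent-circuit model; as a far simpler stand-in, the sketch below just picks resonance peaks out of a synthetic impedance sweep to show where each sensor's signal would appear.

```python
import numpy as np

def find_resonances(freqs: np.ndarray, z_mag: np.ndarray, n_sensors: int = 3):
    """Locate the n_sensors most prominent local maxima of |Z(f)|.

    Simplified readout: each textile sensor is an LC resonator whose resonance
    shows up as a peak in the reader coil's impedance magnitude. The actual
    system inverts a full equivalent-circuit model instead of peak picking.
    """
    peaks = [i for i in range(1, len(z_mag) - 1)
             if z_mag[i] > z_mag[i - 1] and z_mag[i] > z_mag[i + 1]]
    peaks.sort(key=lambda i: z_mag[i], reverse=True)
    return sorted(freqs[i] for i in peaks[:n_sensors])

# Synthetic sweep with resonances near 5, 8, and 13 MHz.
f = np.linspace(1e6, 20e6, 2000)
z = sum(1.0 / (1.0 + ((f - f0) / 2e5) ** 2) for f0 in (5e6, 8e6, 13e6))
print([round(x / 1e6, 2) for x in find_resonances(f, z)])  # ~[5.0, 8.0, 13.0]
```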
4
An Approach to Elicit Human-Understandable Robot Expressions to Support Human-Robot Interaction
Jan Leusmann (LMU Munich, Munich, Germany), Steeven Villa (LMU Munich, Munich, Germany), Thomas Liang (University of Illinois Urbana-Champaign, Champaign, Illinois, United States), Chao Wang (Honda Research Institute Europe, Offenbach/Main, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany), Sven Mayer (LMU Munich, Munich, Germany)
Understanding the intentions of robots is essential for natural and seamless human-robot collaboration. Ensuring that robots have means for non-verbal communication is a basis for intuitive and implicit interaction. For this, we describe an approach to elicit and design human-understandable robot expressions. We outline the approach in the context of non-humanoid robots. We paired human mimicking and enactment with research from gesture elicitation in two phases: first, to elicit expressions, and second, to ensure they are understandable. We present an example application of our approach through two studies (N=16 & N=260), eliciting expressions for a simple 6-DoF robotic arm. We show that the approach enabled us to design robot expressions that signal curiosity and interest in getting attention. Our main contribution is an approach to generate and validate understandable expressions for robots, enabling more natural human-robot interaction.
4
Trusting Tracking: Perceptions of Non-Verbal Communication Tracking in Videoconferencing
Carlota Vazquez Gonzalez (King's College London, London, United Kingdom), Timothy Neate (King's College London, London, United Kingdom), Rita Borgo (King's College London, London, England, United Kingdom)
Videoconferencing is integral to modern work and living. Recently, technologists have sought to leverage captured data (e.g., from cameras and microphones) to augment communication. This might mean capturing communication information about verbal exchanges (e.g., speech, chat messages) or non-verbal ones (e.g., body language, gestures, tone of voice) and using this to mediate, and potentially improve, communication. However, such tracking has implications for user experience and raises wider concerns (e.g., privacy). To design tools that account for user needs and preferences, this study investigates perspectives on communication tracking through a global survey and interviews, exploring how daily behaviours and the impact of specific features influence user perspectives. We examine user preferences on non-verbal communication tracking, preferred methods of how this information is conveyed, and to whom it should be communicated. Our findings aim to guide the development of non-verbal communication tools that augment videoconferencing while prioritising user needs.
4
SqueezeMe: Creating Soft Inductive Pressure Sensors with Ferromagnetic Elastomers
Thomas Preindl (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy), Andreas Pointner (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy), Nimal Jagadeesh Kumar (University of Sussex, Brighton, United Kingdom), Nitzan Cohen (Free University of Bozen-Bolzano, Bolzano, Italy), Niko Münzenrieder (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy), Michael Haller (Free University of Bozen-Bolzano, Bolzano, Italy)
We introduce SqueezeMe, a soft and flexible inductive pressure sensor with high sensitivity made from ferromagnetic elastomers for wearable and embedded applications. Constructed with silicone polymers and ferromagnetic particles, this biocompatible sensor responds to pressure and deformation by varying inductance through ferromagnetic particle density changes, enabling precise measurements. We detail the fabrication process and demonstrate how silicones with varying Shore hardness and different ferromagnetic fillers affect the sensor's sensitivity. Applications like weight, air pressure, and pulse measurements showcase the sensor’s versatility for integration into soft robotics and flexible electronics.
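Because the sensor encodes pressure as a change in coil inductance, one common readout (an assumption here; the abstract does not specify the measurement electronics) is to place the coil in an LC tank and infer L from the resonant frequency:

```python
import math

def inductance_from_frequency(f_hz: float, c_farads: float) -> float:
    """Infer tank inductance from LC resonance: L = 1 / ((2*pi*f)^2 * C)."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * c_farads)

# Example: a tank resonating at 1 MHz with a 1 nF capacitor implies ~25.3 uH;
# squeezing the sensor shifts f, and the shift maps back to pressure.
print(round(inductance_from_frequency(1e6, 1e-9) * 1e6, 2), "uH")
```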
4
Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception
Martin Feick (DFKI and Saarland University, Saarland Informatics Campus, Saarbrücken, Germany), Xuxin Tang (Computer Science Department, Blacksburg, Virginia, United States), Raul Garcia-Martin (Universidad Carlos III de Madrid, Leganes, Madrid, Spain), Alexandru Luchianov (MIT CSAIL, Cambridge, Massachusetts, United States), Roderick Wei Xiao Huang (MIT CSAIL, Cambridge, Massachusetts, United States), Chang Xiao (Adobe Research, San Jose, California, United States), Alexa Siu (Adobe Research, San Jose, California, United States), Mustafa Doga Dogan (Adobe Research, Basel, Switzerland)
Hybrid paper interfaces leverage augmented reality to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, virtual content can be embedded through direct links (e.g., QR codes); however, this impacts the aesthetics of the paper print and limits the available visual content space. To address this problem, we present Imprinto, an infrared inkjet watermarking technique that allows for invisible content embedding using only off-the-shelf IR inks and a camera. We grounded Imprinto in a psychophysical experiment studying how much IR ink can be used while remaining invisible to users, regardless of background color. We demonstrate that we can detect invisible IR content through our machine learning pipeline, and we developed an authoring tool that optimizes the amount of IR ink on the color regions of an input document for machine and human detectability. Finally, we demonstrate several applications, including augmenting paper documents and objects.
4
Customizing Emotional Support: How Do Individuals Construct and Interact With LLM-Powered Chatbots
Xi Zheng (City University of Hong Kong, Hong Kong, China), Zhuoyang Li (City University of Hong Kong, Hong Kong, China), Xinning Gui (The Pennsylvania State University, University Park, Pennsylvania, United States), Yuhan Luo (City University of Hong Kong, Hong Kong, China)
Personalized support is essential to fulfill individuals’ emotional needs and sustain their mental well-being. Large language models (LLMs), with great customization flexibility, hold promises to enable individuals to create their own emotional support agents. In this work, we developed ChatLab, where users could construct LLM-powered chatbots with additional interaction features including voices and avatars. Using a Research through Design approach, we conducted a week-long field study followed by interviews and design activities (N = 22), which uncovered how participants created diverse chatbot personas for emotional reliance, confronting stressors, connecting to intellectual discourse, reflecting mirrored selves, etc. We found that participants actively enriched the personas they constructed, shaping the dynamics between themselves and the chatbot to foster open and honest conversations. They also suggested other customizable features, such as integrating online activities and adjustable memory settings. Based on these findings, we discuss opportunities for enhancing personalized emotional support through emerging AI technologies.
4
SpeechCompass: Enhancing Mobile Captioning with Diarization and Directional Guidance via Multi-Microphone Localization
Artem Dementyev (Google Inc., Mountain View, California, United States), Dimitri Kanevsky (Google, Mountain View, California, United States), Samuel Yang (Google, Mountain View, California, United States), Mathieu Parvaix (Google Research, Mountain View, California, United States), Chiong Lai (Google, Mountain View, California, United States), Alex Olwal (Google Inc., Mountain View, California, United States)
Speech-to-text capabilities on mobile devices have proven helpful for hearing and speech accessibility, language translation, note-taking, and meeting transcripts. However, our foundational large-scale survey (n=263) shows that the inability to distinguish and indicate speaker direction makes them challenging in group conversations. SpeechCompass addresses this limitation through real-time, multi-microphone speech localization, where the direction of speech allows visual separation and guidance (e.g., arrows) in the user interface. We introduce efficient real-time audio localization algorithms and custom sound perception hardware, running on a low-power microcontroller with four integrated microphones, which we characterize in technical evaluations. Informed by a large-scale survey (n=494), we conducted an in-person study of group conversations with eight frequent users of mobile speech-to-text, who provided feedback on five visualization styles. The value of diarization and visualizing localization was consistent across participants, with everyone agreeing on the value and potential of directional guidance for group conversations.
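The localization algorithms are described only at a high level in the abstract. A standard building block for multi-microphone direction estimation is time-difference-of-arrival via GCC-PHAT, sketched below as an assumed, illustrative approach rather than the paper's actual on-microcontroller implementation.

```python
import numpy as np

def gcc_phat_tdoa(sig: np.ndarray, ref: np.ndarray, fs: int, max_tau: float) -> float:
    """Time difference of arrival (seconds) between two microphone channels."""
    n = len(sig) + len(ref)
    spec = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(spec / (np.abs(spec) + 1e-12), n=n)  # PHAT weighting
    shift = int(fs * max_tau)
    cc = np.concatenate((cc[-shift:], cc[:shift + 1]))     # lags -shift..+shift
    return (int(np.argmax(np.abs(cc))) - shift) / fs

def doa_degrees(tau: float, mic_distance_m: float, c: float = 343.0) -> float:
    """Convert a TDOA into a direction-of-arrival angle for one mic pair."""
    return float(np.degrees(np.arcsin(np.clip(tau * c / mic_distance_m, -1.0, 1.0))))
```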
4
Towards Understanding Interactive Sonic Gastronomy with Chefs and Diners
Hongyue Wang (Monash University, Melbourne, Australia), Jialin Deng (Department of Human-Centred Computing, Monash University, Melbourne, Victoria, Australia), Linjia He (Monash University, Melbourne, Australia), Nathalie Overdevest (Monash University, Clayton, VIC, Australia), Ryan Wee (Monash University, Melbourne, Victoria, Australia), Yan Wang (Monash University, Melbourne, Australia), Phoebe O. Toups Dugas (Monash University, Melbourne, Australia), Don Samitha Elvitigala (Monash University, Melbourne, Australia), Florian ‘Floyd’ Mueller (Monash University, Melbourne, VIC, Australia)
With advancements in interactive technologies, research in human-food interaction (HFI) has begun to employ interactive sound to enrich the dining experience. However, chefs' creative use of this sonic interactivity as a new "ingredient" in their culinary practices remains underexplored. In response, we conducted an empirical study with six pairs of chefs and diners utilizing SoniCream, an ice cream cone that plays digital sounds while being consumed. Through exploration, creation, collaboration, and reflection, we identified four themes concerning culinary creativity, dining experience, interactive sonic gastronomy deployment, and chef-diner interplay. Building on the discussions at the intersection of these themes, we derived four design implications for creating interactive systems that could support chefs' culinary creativity, thereby enriching dining experiences. Ultimately, our work aims to help interaction designers fully incorporate chefs' perspectives into HFI research.
4
Everything to Gain: Combining Area Cursors with Increased Control-Display Gain for Fast and Accurate Touchless Input
Kieran Waugh (University of Glasgow, Glasgow, Scotland, United Kingdom), Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom), Euan Freeman (University of Glasgow, Glasgow, United Kingdom)
Touchless displays often use mid-air gestures to control on-screen cursors for pointer interactions. Area cursors can simplify touchless cursor input by implicitly targeting nearby widgets without the cursor entering the target. However, for displays with dense target layouts, the cursor still has to arrive close to the widget, meaning the benefits of area cursors for time-to-target and effort are diminished. Through two experiments, we demonstrate for the first time that fine-tuning the mapping between hand and cursor movements (control-display gain, CDG) can address the deficiencies of area cursors and improve the performance of touchless interaction. Across several display sizes and target densities (representative of myriad public displays used in retail, transport, museums, etc.), our findings show that the forgiving nature of an area cursor compensates for the imprecision of a high CDG, helping users interact more effectively with smaller and more controlled hand/arm movements.
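The two ingredients combined here are easy to state concretely: a control-display gain scales hand motion into cursor motion, and an area cursor implicitly selects the nearest target within a radius. A minimal sketch (the gain and radius values are illustrative, not those tested in the experiments):

```python
import math

def update_cursor(cursor, hand_delta, gain=2.5):
    """Apply control-display gain: small hand movements move the cursor more."""
    return (cursor[0] + gain * hand_delta[0], cursor[1] + gain * hand_delta[1])

def area_cursor_target(cursor, targets, activation_radius=80.0):
    """Return the nearest target within the activation radius, else None.

    The forgiving radius is what compensates for the imprecision that a high
    gain introduces: the cursor never has to enter the target itself.
    """
    best, best_d = None, activation_radius
    for t in targets:
        d = math.dist(cursor, t)
        if d < best_d:
            best, best_d = t, d
    return best
```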
3
Co-design & Evaluation of Visual Interventions for Head Posture Correction in Virtual Reality Games
Minh Duc Dang (Simon Fraser University, Burnaby, British Columbia, Canada), Duy Phuoc Luong (Simon Fraser University, Burnaby, British Columbia, Canada), Christopher Napier (Simon Fraser University, Burnaby, British Columbia, Canada), Lawrence H. Kim (Simon Fraser University, Burnaby, British Columbia, Canada)
While virtual reality (VR) games offer immersive experiences, prolonged improper head posture during VR gaming sessions can cause neck discomfort and injuries. To address this issue, we prototyped a framework to detect instances of improper head posture and apply various visual interventions to correct them. After assessing the prototype's usability in a co-design workshop with participants experienced in VR design and kinesiology, we refined the interventions in two main directions: using explicit visual indicators or employing implicit background changes. The refined interventions were subsequently tested in a controlled experiment involving a target selection task. The study results demonstrate that the interventions effectively helped participants maintain better head posture during VR gameplay compared to the control condition.
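The abstract does not give the detection criteria; a common proxy for improper head posture is the angle of the headset's forward vector below the horizon. A minimal sketch, assuming OpenGL-style axes (forward = -Z, up = +Y) and a hypothetical 20-degree threshold:

```python
import math

def neck_flexion_deg(q) -> float:
    """Downward head tilt in degrees (positive = looking down).

    q = (w, x, y, z) unit quaternion of the HMD pose; axis conventions vary
    by engine, so adapt the forward-vector math to your runtime.
    """
    w, x, y, z = q
    fy = 2.0 * (w * x - y * z)  # y component of the rotated forward vector
    return -math.degrees(math.asin(max(-1.0, min(1.0, fy))))

def needs_intervention(q, threshold_deg: float = 20.0) -> bool:
    """Trigger a visual intervention past an assumed flexion threshold."""
    return neck_flexion_deg(q) > threshold_deg

# Example: head pitched 30 degrees downward (rotation of -30 deg about +X).
half = math.radians(-30.0) / 2.0
print(needs_intervention((math.cos(half), math.sin(half), 0.0, 0.0)))  # True
```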
3
"A Tool for Freedom": Co-Designing Mobility Aid Improvements Using Personal Fabrication and Physical Interface Modules with Primarily Young Adults
Jerry Cao (University of Washington, Seattle, Washington, United States), Krish Jain (University of Washington, Seattle, Washington, United States), Julie Zhang (University of Washington, Seattle, Washington, United States), Yuecheng Peng (University of Washington, Seattle, Washington, United States), Shwetak Patel (University of Washington, Seattle, Washington, United States), Jennifer Mankoff (University of Washington, Seattle, Washington, United States)
Mobility aids (e.g., canes, crutches, and wheelchairs) are crucial for people with mobility disabilities; however, pervasive dissatisfaction with these aids keeps usage rates low. Through semi-structured interviews with 17 mobility aid users, mostly under the age of 30, we identified specific sources of dissatisfaction among younger users of mobility aids, uncovered community-based solutions for these dissatisfactions, and explored ways these younger users wanted to improve mobility aids. We found that users sought customizable, reconfigurable, multifunctional, and more aesthetically pleasing mobility aids. Participants' feedback guided our prototyping of tools/accessories, such as laser-cut decorative skins, hot-swappable physical interface modules, and modular canes with custom 3D-printed handles. These prototypes were then the focus of additional co-design sessions where six returning participants offered suggestions for improvements and provided feedback on their usefulness and usability. Our findings highlight that many mobility aid users have the desire, ability, and need to customize and improve their aids in different ways compared to older adults. We propose various solutions and design guidelines to facilitate the modifications of mobility aids.
3
Virtual Worlds Beyond Sight: Designing and Evaluating an Audio-Haptic System for Non-Visual VR Exploration
Aayush Shrestha (Dalhousie University, Halifax, Nova Scotia, Canada), Joseph Malloch (Dalhousie University, Halifax, Nova Scotia, Canada)
Contemporary research in Virtual Reality (VR) for users who are visually impaired often employs navigation and interaction modalities that are either non-conventional, constrained by physical spaces, or both. We designed and examined a hapto-acoustic VR system that mitigates this by enabling non-visual exploration of large virtual environments using white cane simulation and walk-in-place locomotion. The system features a complex urban cityscape incorporating a physical cane prototype coupled with a virtual cane for rendering surface textures and an omnidirectional slide mill for navigation. In addition, spatialized audio is rendered based on the progression of sound through the geometry around the user. A study involving twenty sighted participants, blindfolded to simulate total blindness, evaluated the system through three formative tasks. 19 of 20 participants successfully completed all the tasks while effectively navigating through the environment. This work highlights the potential for accessible non-visual VR experiences requiring minimal training and limited prior VR exposure.
3
LLM Powered Text Entry Decoding and Flexible Typing on Smartphones
Yan Ma (Stony Brook University, Stony Brook, New York, United States), Dan Zhang (Stony Brook University, New York City, New York, United States), IV Ramakrishnan (Stony Brook University, Stony Brook, New York, United States), Xiaojun Bi (Stony Brook University, Stony Brook, New York, United States)
Large language models (LLMs) have shown exceptional performance in various language-related tasks. However, their application to keyboard decoding, which converts input signals (e.g., taps and gestures) into text, remains underexplored. This paper presents a fine-tuned FLAN-T5 model for decoding. It achieves 93.1% top-1 accuracy on user-drawn gestures, outperforming the widely adopted SHARK2 decoder, and 95.4% on real-word tap-typing data. In particular, our decoder supports Flexible Typing, allowing users to enter a word with taps, gestures, multi-stroke gestures, or tap-gesture combinations. User study results show that Flexible Typing is beneficial and well received by participants: 35.9% of words were entered using word gestures, 29.0% with taps, 6.1% with multi-stroke gestures, and the remaining 29.0% using tap-gesture combinations. Our investigation suggests that the LLM-based decoder improves decoding accuracy over existing word-gesture decoders while enabling the Flexible Typing method, which enhances the overall typing experience and accommodates diverse user preferences.
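The exact input encoding used to fine-tune FLAN-T5 is not given in the abstract. One plausible serialization, shown purely as an assumption, maps each touch point to its nearest key and feeds the resulting string to the seq2seq decoder, with a tag distinguishing taps from gesture traces:

```python
# Hypothetical input serialization for a seq2seq keyboard decoder.
QWERTY = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def nearest_key(x: float, y: float) -> str:
    """Map a normalized touch point (x, y in [0, 1]) to its closest key."""
    row = min(int(y * 3), 2)
    keys = QWERTY[row]
    return keys[min(int(x * len(keys)), len(keys) - 1)]

def encode_input(points, kind: str) -> str:
    """Serialize taps or a downsampled gesture trace as decoder input text."""
    keys = "".join(nearest_key(x, y) for x, y in points)
    return f"decode {kind}: {keys}"

print(encode_input([(0.05, 0.1), (0.2, 0.4), (0.80, 0.1)], "tap"))  # decode tap: qso
```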
3
Voxel Invention Kit: Reconfigurable Building Blocks for Prototyping Interactive Electronic Structures
Miana Smith (MIT, Cambridge, Massachusetts, United States), Jack Forman (MIT, Cambridge, Massachusetts, United States), Amira Abdel Rahman (MIT, Cambridge, Massachusetts, United States), Sophia Wang (MIT, Cambridge, Massachusetts, United States), Neil Gershenfeld (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States)
Prototyping large, electronically integrated structures is challenging and often results in unwieldy wiring, weak mechanical properties, expensive iterations, or limited reusability. While many electronics prototyping kits exist for small-scale objects, relatively few methods exist to freely iterate large and sturdy structures with integrated electronics. To address this gap, we present the Voxel Invention Kit (VIK), which uses reconfigurable blocks that assemble into high-stiffness, lightweight structures with integrated electronics. We do this by creating cubic blocks composed of PCBs that carry electrical routing and components and can be (re)configured with simple tools into a variety of structures. To ensure structural stability without requiring engineering expertise, we created a tool to configure structures and simulate applied loads, which we validated with mechanical testing data. Using VIK, we produced devices reconfigured from a shared set of voxels: multiple iterations of a customizable AV lounge seat, a dance floor game, and a force-sensing bridge.
3
Cross, Dwell, or Pinch: Designing and Evaluating Around-Device Selection Methods for Unmodified Smartwatches
Jiwan Kim (KAIST, Daejeon, Korea, Republic of), Jiwan Son (KAIST, Daejeon, Korea, Republic of), Ian Oakley (KAIST, Daejeon, Korea, Republic of)
Smartwatches offer powerful features, but their small touchscreens limit the expressiveness of the input that can be achieved. To address this issue, we present, and open-source, the first sonar-based around-device input on an unmodified consumer smartwatch. We achieve this using a fine-grained, one-dimensional sonar-based finger-tracking system. In addition, we use this system to investigate the fundamental issue of how to trigger selections during around-device smartwatch input through two studies. The first examines the methods of double-crossing, dwell, and finger tap in a binary task, while the second considers a subset of these designs in a multi-target task and in the presence and absence of haptic feedback. Results showed double-crossing was optimal for binary tasks, while dwell excelled in multi-target scenarios, and haptic feedback enhanced comfort but not performance. These findings offer design insights for future around-device smartwatch interfaces that can be directly deployed on today’s consumer hardware.
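For intuition about sonar-based ranging on commodity audio hardware, here is a textbook active-sonar sketch: emit a near-ultrasonic chirp, cross-correlate the recording with it, and convert the echo delay to distance. The paper's fine-grained one-dimensional finger tracking is considerably more sophisticated (e.g., it must track a weak moving reflector continuously); this is only an illustrative baseline.

```python
import numpy as np

def chirp(fs=48000, dur=0.01, f0=18000, f1=22000):
    """Near-ultrasonic linear chirp playable from an unmodified speaker."""
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * dur)))

def echo_distance(recorded, probe, fs=48000, c=343.0):
    """One-way path length implied by the strongest echo of the probe chirp."""
    corr = np.correlate(recorded, probe, mode="valid")
    delay = np.argmax(np.abs(corr)) / fs  # seconds until the echo begins
    return c * delay / 2.0                # round trip -> one-way metres

# Synthetic test: echo arriving after ~1.4 ms, i.e. a reflector ~24 cm away.
p = chirp()
rec = np.concatenate([np.zeros(67), p, np.zeros(200)])
print(round(echo_distance(rec, p), 3), "m")  # ~0.239
```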
3
Peek into the 'White-Box': A Field Study on Bystander Engagement with Urban Robot Uncertainty
Xinyan Yu (School of Architecture, Design and Planning, The University of Sydney, Sydney, NSW, Australia), Marius Hoggenmüller (School of Architecture, Design and Planning, The University of Sydney, Sydney, NSW, Australia), Tram Thi Minh Tran (School of Architecture, Design and Planning, The University of Sydney, Sydney, NSW, Australia), Yiyuan Wang (The University of Sydney, Sydney, Australia), Qiuming Zhang (The University of Sydney, Sydney, NSW, Australia), Martin Tomitsch (University of Technology Sydney, Sydney, NSW, Australia)
Uncertainty inherently exists in the autonomous decision-making process of robots. Involving humans in resolving this uncertainty not only helps robots mitigate it but is also crucial for improving human-robot interactions. However, in public urban spaces filled with unpredictability, robots often face heightened uncertainty without direct human collaborators. This study investigates how robots can engage bystanders for assistance in public spaces when encountering uncertainty and examines how these interactions impact bystanders' perceptions and attitudes towards robots. We designed and tested a speculative 'peephole' concept that engages bystanders in resolving urban robot uncertainty. Our design is guided by considerations of non-intrusiveness and eliciting initiative in an implicit manner, considering bystanders' unique role as non-obligated participants in relation to urban robots. Drawing from field study findings, we highlight the potential of involving bystanders to mitigate urban robots' technological imperfections to both address operational challenges and foster public acceptance of urban robots. Furthermore, we offer design implications to encourage bystanders' involvement in mitigating the imperfections.
3
Layered Interactions: Exploring Non-Intrusive Digital Craftsmanship Design Through Lacquer Art Interfaces
Yan Dong (Academy of Fine Arts, Beijing, China), Hanjie Yu (Tsinghua University, Beijing, China), Yanran Chen (Tsinghua University, Haidian, Beijing, China), Zipeng Zhang (Tsinghua University, Beijing, China), Wu Qiong (Tsinghua University, Beijing, China)
Integrating technology with the distinctive characteristics of craftsmanship has become a key issue in the field of digital craftsmanship. This paper introduces Layered Interactions, a design approach that seamlessly merges Human-Computer Interaction (HCI) technologies with traditional lacquerware craftsmanship. By leveraging the multi-layer structure and material properties of lacquerware, we embed interactive circuits and integrate programmable hardware within the layers, creating tangible interfaces that support diverse interactions. This method enhances the adaptability and practicality of traditional crafts in modern digital contexts. Through the development of a lacquerware toolkit, along with user experiments and semi-structured interviews, we demonstrate that this approach not only makes technology more accessible to traditional artisans but also enhances the materiality and emotional qualities of interactive interfaces. Additionally, it fosters mutual learning and collaboration between artisans and technologists. Our research introduces a cross-disciplinary perspective to the HCI community, broadening the material and design possibilities for interactive interfaces.
3
TH-Wood: Developing Thermo-Hygro-Coordinating Driven Wood Actuators to Enhance Human-Nature Interaction
Guanyun Wang (Zhejiang University, Hangzhou, China), Chuang Chen (Zhejiang University, Hangzhou, China), Xiao Jin (Imperial College London, London, United Kingdom), Yulu Chen (University College London, London, United Kingdom), Yangweizhe Zheng (Northeast Forestry University, Harbin, China), Qianzi Zhen (Zhejiang University, Hangzhou, China), Yang Zhang (Imperial College London, London, United Kingdom), Jiaji Li (MIT, Cambridge, Massachusetts, United States), Yue Yang (Zhejiang University, Hangzhou, China), Ye Tao (Hangzhou City University, Hangzhou, China), Shijian Luo (Zhejiang University, Hangzhou, Zhejiang, China), Lingyun Sun (Zhejiang University, Hangzhou, China)
Wood has become increasingly applied in shape-changing interfaces for its eco-friendly and smart responsive properties, yet its applications face challenges because actuation remains primarily driven by humidity. We propose TH-Wood, a biodegradable actuator system composed of wood veneer and microbial polymers, driven by both temperature and humidity, and capable of functioning in complex outdoor environments. This dual-factor-driven approach expands the sensing and response channels, allowing for more sophisticated coordinated control methods. To assist in designing and utilizing the system more effectively, we developed a structure library inspired by dynamic plant forms, conducted extensive technical evaluations, created an educational platform accessible to users, and provided a design tool for deformation adjustments and behavior previews. Finally, several ecological applications demonstrate the potential of TH-Wood to significantly enhance human interaction with natural environments and expand the boundaries of human-nature relationships.
3
Since U Been Gone: Augmenting Context-Aware Transcriptions for Re-Engaging in Immersive VR Meetings
Geonsun Lee (University of Maryland, College Park, Maryland, United States), Yue Yang (Stanford University, Stanford, California, United States), Jennifer Healey (Adobe Research, San Jose, California, United States), Dinesh Manocha (University of Maryland, College Park, Maryland, United States)
Maintaining engagement in immersive meetings is challenging, particularly when users must catch up on missed content after disruptions. While transcription interfaces can help, table-fixed panels have the potential to distract users from the group, diminishing social presence, while avatar-fixed captions fail to provide past context. We present EngageSync, a context-aware avatar-fixed transcription interface that adapts based on user engagement, offering live transcriptions and LLM-generated summaries to enhance catching up while preserving social presence. We implemented a live VR meeting setup for a 12-participant formative study and elicited design considerations. In two user studies with small (3 avatars) and mid-sized (7 avatars) groups, EngageSync significantly improved social presence (p < .05) and time spent gazing at others in the group instead of the interface over table-fixed panels. Also, it reduced re-engagement time and increased information recall (p < .05) over avatar-fixed interfaces, with stronger effects in mid-sized groups (p < .01).
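The adaptation implied by the abstract, choosing between the live transcript and an LLM-generated summary depending on how long a user has been disengaged, can be sketched in a few lines. The threshold and the disengagement signal below are assumptions, not the paper's engagement model.

```python
# Hypothetical catch-up policy; EngageSync's engagement model is richer.
def catch_up_content(away_seconds: float, live_tail: str, summary: str) -> str:
    """Pick what to show a re-engaging user, given time spent disengaged."""
    SHORT_GAP_S = 10.0  # assumed threshold, in seconds
    if away_seconds <= SHORT_GAP_S:
        return live_tail                   # brief lapse: recent verbatim lines
    return summary + "\n" + live_tail      # longer lapse: summary, then live text
```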
3
Creating Furniture-Scale Deployable Objects with a Computer-Controlled Sewing Machine
Sapna Tayal (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Lea Albaugh (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), James McCann (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Scott E. Hudson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
We introduce a novel method for fabricating functional flat-to-shape objects using a large computer-controlled sewing machine (11 ft / 3.4m wide), a process that is both rapid and scalable beyond the machine's sewable area. Flat-to-shape deployable objects can allow for quick and easy need-based activation, but the selective flexibility required can involve complex fabrication or tedious assembly. In our method, we sandwich rigid form-defining materials, such as plywood and acrylic, between layers of fabric. The sewing process secures these layers together, creating soft hinges between the rigid inserts which allow the object to transition smoothly into its three-dimensional functional form with little post-processing.
3
Haptic Empathy: Investigating Individual Differences in Affective Haptic Communications
Yulan Ju (Keio University Graduate School of Media Design, Yokohama, Japan), Xiaru Meng (Keio University Graduate School of Media Design, Yokohama, Japan), Harunobu Taguchi (Keio University Graduate School of Media Design, Yokohama, Japan), Tamil Selvan Gunasekaran (The University of Auckland, Auckland, New Zealand), Matthias Hoppe (Keio University Graduate School of Media Design, Yokohama, Japan), Hironori Ishikawa (NTT DOCOMO, Tokyo, Japan), Yoshihiro Tanaka (Nagoya Institute of Technology, Nagoya, Japan), Yun Suen Pai (University of Auckland, Auckland, New Zealand), Kouta Minamizawa (Keio University Graduate School of Media Design, Yokohama, Japan)
Touch remains essential for conveying emotion and sustaining interpersonal communication, even as more interactions are mediated remotely. While many studies have discussed the effectiveness of using haptics to communicate emotions, incorporating affect into haptic design still faces challenges due to individual differences in tactile acuity and preference. We assessed how emotions are conveyed through a two-channel haptic display, emphasizing individual differences. First, 24 participants generated 187 haptic messages reflecting their immediate sentiments after watching 8 emotionally charged film clips. Afterwards, 19 participants were asked to identify emotions from haptic messages designed by themselves and others, yielding 593 samples. Our findings indicate that the ability to decode haptic messages is linked to specific emotional traits, particularly Emotional Competence (EC) and Affect Intensity Measure (AIM). Additionally, qualitative analysis revealed three strategies participants used to create touch messages: perceptive, empathetic, and metaphorical expression.
3
Shape-Kit: A Design Toolkit for Crafting On-Body Expressive Haptics
Ran Zhou (University of Chicago, Chicago, Illinois, United States)Jianru Ding (University of Chicago, Chicago, Illinois, United States)Chenfeng Gao (University of Chicago, Chicago, Illinois, United States)Wanli Qian (University of Chicago, Chicago, Illinois, United States)Benjamin Erickson (University of Colorado Boulder, Boulder, Colorado, United States)Madeline Balaam (KTH Royal Institute of Technology, Stockholm, Sweden)Daniel Leithinger (Cornell University, Ithaca, New York, United States)Ken Nakagaki (University of Chicago, Chicago, Illinois, United States)
Driven by the vision of everyday haptics, the HCI community is advocating for “design touch first” and investigating “how to touch well.” However, a gap remains between the exploratory nature of haptic design and technical reproducibility. We present Shape-Kit, a hybrid design toolkit embodying our “crafting haptics” metaphor, where hand touch is transduced into dynamic pin-based sensations that can be freely explored across the body. An ad-hoc tracking module captures and digitizes these patterns. Our study with 14 designers and artists demonstrates how Shape-Kit facilitates sensorial exploration for expressive haptic design. We analyze how designers collaboratively ideate, prototype, iterate, and compose touch experiences, and show the subtlety and richness of touch that can be achieved through diverse crafting methods with Shape-Kit. Reflecting on the findings, our work contributes key insights into haptic toolkit design and touch design practices centered on the “crafting haptics” metaphor. We discuss in depth how Shape-Kit's simplicity, though constraining, enables focused crafting for deeper exploration, while its collaborative nature fosters shared sense-making of touch experiences.
3
TutorCraftEase: Enhancing Pedagogical Question Creation with Large Language Models
Wenhui Kang (University of Chinese Academy of Sciences, Beijing, China)Lin Zhang (University of Stuttgart, Stuttgart, Germany)Xiaolan Peng (Institute of Software, Chinese Academy of Sciences, Beijing, China)Hao Zhang (Chinese Academy of Sciences, Beijing, China)Anchi Li (Beijing University of Technology, Beijing, China)Mengyao Wang (State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China)Jin Huang (Chinese Academy of Sciences, Beijing, China)Feng Tian (Institute of Software, Chinese Academy of Sciences, Beijing, China)Guozhong Dai (Chinese Academy of Sciences, Beijing, China)
Pedagogical questions are crucial for fostering student engagement and learning. In daily teaching, teachers pose hundreds of questions to assess understanding, enhance learning outcomes, and facilitate the transfer of theory-rich content. However, even experienced teachers often struggle to generate a large volume of effective pedagogical questions. To address this, we introduce TutorCraftEase, an interactive generation system that leverages large language models (LLMs) to assist teachers in creating pedagogical questions. TutorCraftEase enables the rapid generation of questions at varying difficulty levels with a single click, while also allowing for manual review and refinement. In a comparative user study with 39 participants, we evaluated TutorCraftEase against a traditional manual authoring tool and a basic LLM tool. The results show that TutorCraftEase can generate pedagogical questions comparable in quality to those created by experienced teachers, while significantly reducing teachers' workload and authoring time.
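As an illustration of the general pattern (not the paper's actual prompts or pipeline), a difficulty-conditioned generation request could be assembled as follows; the rubric wording and function name are assumptions.

```python
# Illustrative prompt builder for difficulty-graded pedagogical questions.
# The rubric is our assumption, not TutorCraftEase's actual prompt.
def build_question_prompt(passage: str, difficulty: str, n: int = 5) -> str:
    """Return a prompt asking an LLM for n questions at a difficulty level."""
    rubric = {
        "easy": "recall questions testing key facts",
        "medium": "comprehension questions requiring explanation in one's own words",
        "hard": "application questions requiring transfer to new situations",
    }
    return (
        f"You are an experienced teacher. Based on the passage below, write "
        f"{n} {rubric[difficulty]}. Number each question.\n\nPassage:\n{passage}"
    )

prompt = build_question_prompt("Photosynthesis converts light energy...", "medium")
# The returned string would be sent to any chat-completion API; the teacher
# then reviews and refines the generated questions, as the system supports.
```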
3
Into the Unknown: Leveraging Conversational AI in Supporting Young Migrants' Journeys Towards Cultural Adaptation
Sunok Lee (Aalto University, Espoo, Finland)Dasom Choi (KAIST, Daejeon, Korea, Republic of)Lucy Truong (Aalto University, Espoo, Finland)Nitin Sawhney (Aalto University, Espoo, Finland)Henna Paakki (Aalto University, Espoo, Finland)
Accelerated globalization has made migration commonplace, creating significant cultural adaptation challenges, particularly for young migrants. While HCI research has explored the role of technology in migrants' cultural adaptation, the diverse cultural backgrounds and needs of young migrants specifically remain underaddressed. Recognizing the potential of conversational AI to adapt to diverse cultural contexts, we investigate how young migrants could use this technology in their adaptation journeys and explore its societal implementation. Through individual workshops with young migrants and stakeholder interviews—including AI practitioners, public sector workers, policy experts, and social scientists—we found that both groups of participants expect conversational AI to support young migrants in connecting with the host culture before migration, exploring the home culture, and aligning identities across home and host cultures. However, challenges such as expectation gaps and cultural bias may hinder cultural adaptation. We discuss design considerations for culturally sensitive AI that empowers young migrants and propose strategies to enhance societal readiness for AI-driven cultural adaptation.
3
Curves Ahead: Enhancing the Steering Law for Complex Curved Trajectories
Jennie J.Y. Chen (University of British Columbia, Vancouver, British Columbia, Canada)Sidney S. Fels (University of British Columbia, Vancouver, British Columbia, Canada)
The Steering Law has long been a fundamental model for predicting movement time in tasks that involve navigating constrained paths, such as selecting sub-menu options, particularly for straight and circular-arc trajectories. However, these simple trajectories do not reflect the complexity of real-world tasks, where curvature can vary arbitrarily, limiting the law's applicability. This study addresses this gap by introducing a total curvature parameter K into the equation to account for the overall curviness of a path. To validate this extension, we conducted a mouse-steering experiment on fixed-width paths with varying lengths and curviness levels. Our results demonstrate that introducing K significantly improves model fit for movement-time prediction over traditional models. These findings advance our understanding of movement in complex environments and support potential applications in fields like speech motor control and virtual navigation.
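For reference, the classic Steering Law predicts movement time MT from path length A and width W. The abstract does not give the extended model's exact form, so the curvature-aware variant below is only one plausible additive form, with K and the constant c as our assumptions.

```latex
% Classic Steering Law for a path of length A and uniform width W,
% with empirically fitted constants a and b:
MT = a + b\,\frac{A}{W}

% One plausible curvature-aware extension (an assumption, not the
% paper's published model), where K is the path's total curvature
% and c is an additional fitted constant:
MT = a + b\,\frac{A}{W} + c\,K
```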
3
BudsID: Mobile-Ready and Expressive Finger Identification Input for Earbuds
Jiwan Kim (KAIST, Daejeon, Korea, Republic of)Mingyu Han (UNIST, Ulsan, Korea, Republic of)Ian Oakley (KAIST, Daejeon, Korea, Republic of)
Wireless earbuds are an appealing platform for on-the-go wearable computing. However, their small size and out-of-view location mean they support only a limited set of distinct inputs. We propose finger identification input on earbuds as a novel technique to resolve these problems. This technique associates touches by different fingers with different responses. To enable it on earbuds, we adapted prior work on smartwatches to develop a wireless earbud featuring a magnetometer that detects the field of a worn magnetic ring. A first study reveals participants achieve rapid, precise earbud touches with different fingers, even while mobile (time: 0.98 s, errors: 5.6%). Furthermore, touching fingers can be accurately classified (96.9%). A second study shows strong performance with a more expressive technique involving multi-finger double-taps (inter-touch time: 0.39 s, errors: 2.8%) while maintaining high accuracy (94.7%). We close by exploring and evaluating the design of earbud finger identification applications and demonstrating the feasibility of our system on low-resource devices.
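For intuition, finger classification from ring-induced magnetic fields can be framed as a standard supervised learning problem over per-touch features, as in the sketch below; the feature set and random-forest choice are our assumptions, not the paper's reported pipeline.

```python
# Hedged sketch: classify which finger touched the earbud from 3-axis
# magnetometer samples. Feature design and model are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def touch_features(samples: np.ndarray) -> np.ndarray:
    """samples: (n, 3) magnetometer readings captured during one touch."""
    mag = np.linalg.norm(samples, axis=1)            # field magnitude per sample
    return np.concatenate([
        samples.mean(axis=0), samples.std(axis=0),   # per-axis statistics
        [mag.mean(), mag.max() - mag.min()],         # overall strength and range
    ])

def train_classifier(touches: list, labels: list) -> RandomForestClassifier:
    """touches: list of (n_i, 3) arrays; labels: finger IDs (0=thumb, ...)."""
    X = np.stack([touch_features(t) for t in touches])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```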
3
TactStyle: Generating Tactile Textures with Generative AI for Digital Fabrication
Faraz Faruqi (MIT CSAIL, Cambridge, Massachusetts, United States)Maxine Perroni-Scharf (MIT, Cambridge, Massachusetts, United States)Jaskaran Singh Walia (Vellore Institute of Technology, Chennai, India)Yunyi Zhu (MIT CSAIL, Cambridge, Massachusetts, United States)Shuyue Feng (Zhejiang University, Hangzhou, China)Donald Degraen (University of Canterbury, Christchurch, New Zealand)Stefanie Mueller (MIT CSAIL, Cambridge, Massachusetts, United States)
Recent work in Generative AI enables the stylization of 3D models based on image prompts. However, these methods do not incorporate tactile information, leading to designs that lack the expected tactile properties. We present TactStyle, a system that allows creators to stylize 3D models with images while incorporating the expected tactile properties. TactStyle accomplishes this using a modified image-generation model fine-tuned to generate heightfields for given surface textures. By optimizing 3D model surfaces to embody a generated texture, TactStyle creates models that match the desired style and replicate the tactile experience. We utilize a large-scale dataset of textures to train our texture generation model. In a psychophysical experiment, we evaluate the tactile qualities of a set of 3D-printed original textures and TactStyle's generated textures. Our results show that TactStyle successfully generates a wide range of tactile features from a single image input, enabling a novel approach to haptic design.
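The core geometric step, turning a generated heightfield into surface relief, can be sketched as displacement along vertex normals; the function below is a simplified stand-in for the paper's surface optimization, with all names and the depth parameter assumed.

```python
# Simplified displacement sketch (not TactStyle's actual optimization):
# offset each vertex along its normal by the sampled heightfield value.
import numpy as np

def displace_surface(vertices, normals, uvs, heightfield, depth_mm=0.8):
    """vertices/normals: (n, 3); uvs: (n, 2) in [0, 1]; heightfield: (H, W) in [0, 1]."""
    h, w = heightfield.shape
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    sampled = heightfield[py, px]                          # height per vertex
    return vertices + normals * (sampled[:, None] * depth_mm)
```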
3
Being in Virtual Worlds: How Interaction, Environment, and Touch Shape Embodiment in Immersive Experiences
John Desnoyers-Stewart (Simon Fraser University, Vancouver, British Columbia, Canada)Alissa N. Antle (Simon Fraser University, Vancouver, British Columbia, Canada)Bernhard E. Riecke (Simon Fraser University, Vancouver, British Columbia, Canada)
Embodiment is an everyday experience that typically goes unnoticed. While we often take it for granted, with the adoption of virtual reality (VR) technology, embodiment in virtual bodies and worlds has become an important consideration for designers of immersive experiences. To date, the VR design community has primarily considered embodiment in terms of body ownership over a synchronized visual representation. In this paper, we construct an interactional framework of virtual embodiment, beginning by revisiting what it really means to be “embodied.” Our framework reconnects embodiment and presence in virtual environments, grounded in Dourish's concept of embodied interaction and Heidegger's Dasein or “being-in-the-world.” We discuss how embodiment, fundamentally rooted in past and present interactions, changes our understanding of body ownership and its extension into VR. Integrating theories from VR research, philosophy, HCI, and psychology, we uncover the complex interplay of interaction, environment, and touch in shaping embodied experiences. This framework, rooted in interaction, enables designers to create more immersive and meaningful virtual worlds.
3
Can you pass that tool?: Implications of Indirect Speech in Physical Human-Robot Collaboration
Yan Zhang (University of Melbourne, Melbourne, VIC, Australia)Tharaka Sachintha Ratnayake (University of Melbourne, Melbourne, Australia)Cherie Sew (University of Melbourne, Melbourne, Australia)Jarrod Knibbe (The University of Queensland, St Lucia, QLD, Australia)Jorge Goncalves (University of Melbourne, Melbourne, Australia)Wafa Johal (University of Melbourne, Melbourne, VIC, Australia)
Indirect speech acts (ISAs) are a natural pragmatic feature of human communication, allowing requests to be conveyed implicitly while maintaining subtlety and flexibility. Although advancements in speech recognition have enabled natural language interactions with robots through direct, explicit commands—providing clarity in communication—the rise of large language models presents the potential for robots to interpret ISAs. However, empirical evidence on the effects of ISAs on human-robot collaboration (HRC) remains limited. To address this, we conducted a Wizard-of-Oz study (N=36), engaging a participant and a robot in collaborative physical tasks. Our findings indicate that robots capable of understanding ISAs significantly improve perceived robot anthropomorphism, team performance, and trust. However, the effectiveness of ISAs is task- and context-dependent, thus requiring careful use. These results highlight the importance of appropriately integrating direct and indirect requests in HRC to enhance collaborative experiences and task performance.
3
ProtoPCB: Reclaiming Printed Circuit Board E-waste as Prototyping Material
Jasmine Lu (University of Chicago, Chicago, Illinois, United States)Sai Rishitha Boddu (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose an interactive tool that enables reusing printed circuit boards (PCBs) as prototyping materials to implement new circuits — this extends the utility of PCBs rather than discarding them as e-waste. To enable this, our tool takes a user’s desired circuit schematic and analyzes its components and connections to find ways of realizing the user’s circuit on discarded PCBs (e.g., e-waste, old prototypes). In our technical evaluation, we ran our tool across a diverse set of PCBs and input circuits to characterize how often circuits could be implemented on a different board, implemented with minor interventions (trace-cutting or bodge-wiring), or implemented on a combination of multiple boards — demonstrating how our tool assists with exhaustive matching tasks that a user would be unlikely to perform manually. We believe our tool offers: (1) a new approach to prototyping with electronics beyond the limitations of breadboards and (2) a new approach to reducing e-waste during electronics prototyping.
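At its simplest, the matching problem pairs a schematic's parts and connections against what a donor board offers; the toy netlist comparison below conveys the idea, though the paper's matcher is far more capable (it searches placements and multi-board combinations). All names here are illustrative.

```python
# Toy netlist comparison (our construction, not ProtoPCB's algorithm):
# report missing parts, reusable traces, and connections needing bodge wires.
from collections import Counter

def match_circuit(required_parts, donor_parts, required_nets, donor_nets):
    """Parts are Counters of component types, e.g. Counter({"R_10k": 2});
    nets are sets of frozenset pin-pairs that must be electrically connected."""
    return {
        "missing_parts": required_parts - donor_parts,  # must be sourced elsewhere
        "reused_traces": required_nets & donor_nets,    # existing copper works as-is
        "bodge_wires": required_nets - donor_nets,      # add jumper wires
        "trace_cuts": donor_nets - required_nets,       # unwanted connections to cut
    }
```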
3
“You Go Through So Many Emotions Scrolling Through Instagram”: How Teens Use Instagram To Regulate Their Emotions
Katie Davis (University of Washington, Seattle, Washington, United States)Rotem Landesman (University of Washington, Seattle, Washington, United States)Jina Yoon (University of Washington, Seattle, Washington, United States)JaeWon Kim (University of Washington, Seattle, Washington, United States)Daniela E. Munoz Lopez (University of Washington, Seattle, Washington, United States)Lucia Magis-Weinberg (University of Washington, SEATTLE, Washington, United States)Alexis Hiniker (University of Washington, Seattle, Washington, United States)
Prior work has documented various ways that teens use social media to regulate their emotions. However, little is known about what these processes look like on a moment-by-moment basis. We conducted a diary study to investigate how teens (N=57, mean age = 16.3 years) used Instagram to regulate their emotions. We identified three kinds of emotionally salient drivers that brought teens to Instagram and two types of behaviors that impacted their emotional experiences on the platform. Teens described going to Instagram to escape, to engage, and to manage the demands of the platform. Once on Instagram, their primary behaviors consisted of mindless diversions and deliberate acts. Although teens reported many positive emotional responses, the variety, unpredictability, and habitual nature of their experiences revealed Instagram to be an unreliable tool for emotion regulation (ER). We present a model of teens’ ER processes on Instagram and offer design considerations for supporting adolescent emotion regulation.
3
Slip Casting as a Machine for Making Textured Ceramic Interfaces
Bo Han (National University of Singapore, Singapore, Singapore)Jared Lim (National University of Singapore, Singapore, Singapore)Kianne Lim (National University of Singapore, Singapore, Singapore)Adam Choo (National University of Singapore, Singapore, Singapore)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)Genevieve Ang (Independent Artist, Singapore, Singapore)Clement Zheng (National University of Singapore, Singapore, Singapore)
Ceramics provide a rich domain for exploring craft, fabrication, and diverse material textures that enhance tangible interaction. In this work, we explored slip-casting, a traditional ceramic technique where liquid clay is poured into a porous plaster mold that absorbs water from the slip to form a clay body. We adapted this process into an approach we called Resist Slip-Casting. By selectively masking the mold’s surface with stickers to vary its water absorption rate, our approach enables makers to create ceramic objects with intricate textured surfaces, while also allowing the customization of a single mold for different outcomes. In this paper, we detail the resist slip-casting process and demonstrate its application by crafting a range of tangible interfaces with customizable visual symbols, tactile features, and decorative elements. We further discuss our approach within the broader conversation in HCI on fabrication machines that promote creative collaboration between humans, materials, and tools.