List of Notable Papers

Up to the top 30 papers in each category are shown.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

9
Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
Pat Pataranutaporn (Massachusetts Institute of Technology, Boston, Massachusetts, United States), Chayapatr Archiwaranguprok (University of the Thai Chamber of Commerce, Bangkok, Thailand), Samantha W. T. Chan (MIT Media Lab, Cambridge, Massachusetts, United States), Elizabeth Loftus (UC Irvine, Irvine, California, United States), Pattie Maes (MIT Media Lab, Cambridge, Massachusetts, United States)
AI is increasingly used to enhance images and videos, both intentionally and unintentionally. As AI editing tools become more integrated into smartphones, users can modify or animate photos into realistic videos. This study examines the impact of AI-altered visuals on false memories—recollections of events that didn’t occur or deviate from reality. In a pre-registered study, 200 participants were divided into four conditions of 50 each. Participants viewed original images, completed a filler task, then saw stimuli corresponding to their assigned condition: unedited images, AI-edited images, AI-generated videos, or AI-generated videos of AI-edited images. AI-edited visuals significantly increased false recollections, with AI-generated videos of AI-edited images having the strongest effect (2.05x compared to control). Confidence in false memories was also highest for this condition (1.19x compared to control). We discuss potential applications in HCI, such as therapeutic memory reframing, and challenges in ethical, legal, political, and societal domains.
6
"It Brought the Model to Life": Exploring the Embodiment of Multimodal I3Ms for People who are Blind or have Low Vision
Samuel Reinders (Monash University, Melbourne, Australia), Matthew Butler (Monash University, Melbourne, Australia), Kim Marriott (Monash University, Melbourne, Australia)
3D-printed models are increasingly used to provide people who are blind or have low vision (BLV) with access to maps, educational materials, and museum exhibits. Recent research has explored interactive 3D-printed models (I3Ms) that integrate touch gestures, conversational dialogue, and haptic vibratory feedback to create more engaging interfaces. Prior research with sighted people has found that imbuing machines with human-like behaviours, i.e., embodying them, can make them appear more lifelike, increasing social perception and presence. Such embodiment can increase engagement and trust. This work presents the first exploration into the design of embodied I3Ms and their impact on BLV engagement and trust. In a controlled study with 12 BLV participants, we found that I3Ms using specific embodiment design factors, such as haptic vibratory and embodied personified voices, led to an increased sense of liveliness and embodiment, as well as engagement, but had mixed impact on trust.
6
Sprayable Sound: Exploring the Experiential and Design Potential of Physically Spraying Sound Interaction
Jongik Jeon (KAIST, Daejeon, Korea, Republic of), Chang Hee Lee (KAIST (Korea Advanced Institute of Science and Technology), Daejeon, Korea, Republic of)
Perfume and fragrance have captivated people for centuries across different cultures. Inspired by the ephemeral nature of sprayable olfactory interactions and experiences, we explore the potential of applying a similar interaction principle to the auditory modality. In this paper, we present SoundMist, a sonic interaction method that enables users to generate ephemeral auditory presences by physically dispersing a liquid into the air, much like the fading phenomenon of fragrance. We conducted a study to understand the experiential factors inherent in sprayable sound interaction and held an ideation workshop to identify potential design spaces or opportunities that this interaction could shape. Our findings, derived from thematic analysis, suggest that physically sprayable sound interaction can induce experiences related to four key factors—materiality of sound produced by dispersed liquid particles, different sounds entangled with each liquid, illusive perception of temporally floating sound, and enjoyment derived from blending different sounds—and can be applied to artistic practices, safety indications, multisensory approaches, and emotional interfaces.
5
Beyond Vacuuming: How Can We Exploit Domestic Robots’ Idle Time?
Yoshiaki Shiokawa (University of Bath, Bath, United Kingdom), Winnie Chen (University of Bath, Bath, United Kingdom), Aditya Shekhar Nittala (University of Calgary, Calgary, Alberta, Canada), Jason Alexander (University of Bath, Bath, United Kingdom), Adwait Sharma (University of Bath, Bath, United Kingdom)
We are increasingly adopting domestic robots (e.g., Roomba) that provide relief from mundane household tasks. However, these robots usually spend only a small portion of their time executing their designated task and remain idle for long periods. They typically possess advanced mobility and sensing capabilities, and therefore have significant potential applications beyond their designed use. Our work explores this untapped potential of domestic robots in ubiquitous computing, focusing on how they can improve and support modern lifestyles. We conducted two studies: an online survey (n=50) to understand current usage patterns of these robots within homes and an exploratory study (n=12) with HCI and HRI experts. Our thematic analysis revealed 12 key dimensions for developing interactions with domestic robots and outlined over 100 use cases, illustrating how these robots can offer proactive assistance and provide privacy. Finally, we implemented a proof-of-concept prototype to demonstrate the feasibility of reappropriating domestic robots for diverse ubiquitous computing applications.
5
Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation
Jessica He (IBM Research, Yorktown Heights, New York, United States), Stephanie Houde (IBM Research, Cambridge, Massachusetts, United States), Justin D. Weisz (IBM Research AI, Yorktown Heights, New York, United States)
AI systems powered by large language models can act as capable assistants for writing and editing. In these tasks, the AI system acts as a co-creative partner, making novel contributions to an artifact-under-creation alongside its human partner(s). One question that arises in these scenarios is the extent to which AI should be credited for its contributions. We examined knowledge workers' views of attribution through a survey study (N=155) and found that they assigned different levels of credit across different contribution types, amounts, and initiative. Compared to a human partner, we observed a consistent pattern in which AI was assigned less credit for equivalent contributions. Participants felt that disclosing AI involvement was important and used a variety of criteria to make attribution judgments, including the quality of contributions, personal values, and technology considerations. Our results motivate and inform new approaches for crediting AI contributions to co-created work.
5
Toward Affective Empathy via Personalized Analogy Generation: A Case Study on Microaggression
Hyojin Ju (POSTECH, Pohang, Korea, Republic of), Jungeun Lee (POSTECH, Pohang, Korea, Republic of), Seungwon Yang (POSTECH, Pohang-si, Korea, Republic of), Jungseul Ok (POSTECH, Pohang, Korea, Republic of), Inseok Hwang (POSTECH, Pohang, Korea, Republic of)
The importance of empathy cannot be overstated in modern societies where people of diverse backgrounds increasingly interact together. The HCI community has strived to foster affective empathy through immersive technologies. Many previous techniques are built upon the premise that presenting the same experience as-is may help evoke the same emotion; this premise, however, faces limitations when emotional responses differ substantially across individuals. In this paper, we present a novel concept of generating a personalized experience based on a large language model (LLM) to facilitate affective empathy between individuals despite their differences. As a case study to showcase its effectiveness, we developed EmoSync, an LLM-based agent that generates personalized analogical microaggression situations, facilitating users to personally resonate with a specific microaggression situation of another person. EmoSync is designed and evaluated along a 3-phased user study with 100+ participants. We comprehensively discuss implications, limitations, and possible applications.
5
"It Brought Me Joy": Opportunities for Spatial Browsing in Desktop Screen Readers
Arnavi Chheda-Kothary (University of Washington, Seattle, Washington, United States), Ather Sharif (University of Washington, Seattle, Washington, United States), David Angel Rios (Columbia University, New York, New York, United States), Brian A. Smith (Columbia University, New York, New York, United States)
Blind or low-vision (BLV) screen-reader users have a significantly limited experience interacting with desktop websites compared to non-BLV, i.e., sighted users. This digital divide is exacerbated by the inability to browse the web spatially—an affordance that leverages spatial reasoning, which sighted users often rely on. In this work, we investigate the value of and opportunities for BLV screen-reader users to browse websites spatially (e.g., understanding page layouts). We additionally explore at-scale website layout understanding as a feature of desktop screen readers. We created a technology probe, WebNExt, to facilitate our investigation. Specifically, we conducted a lab study with eight participants and a five-day field study with four participants to evaluate spatial browsing using WebNExt. Our findings show that participants found spatial browsing intuitive and fulfilling, strengthening their connection to the design of web pages. Furthermore, participants envisioned spatial browsing as a step toward reducing the digital divide.
5
EchoBreath: Continuous Respiratory Behavior Recognition in the Wild via Acoustic Sensing on Smart Glasses
Kaiyi Guo (Shanghai Jiao Tong University, Shanghai, China), Qian Zhang (Shanghai Jiao Tong University, Shanghai, China), Dong Wang (Shanghai Jiao Tong University, Shanghai, China)
Monitoring the occurrence count of abnormal respiratory symptoms helps provide critical support for respiratory health. While this is necessary, there is still a lack of an unobtrusive and reliable way that can be effectively used in real-world settings. In this paper, we present EchoBreath, a passive and active acoustic combined sensing system for abnormal respiratory symptom monitoring. EchoBreath uses the speaker and microphone under the frame of the glasses to emit ultrasonic waves and capture both passive sounds and echo profiles, which can effectively distinguish between subject-aware behaviors and background noise. Furthermore, a lightweight neural network with a 'Null' class and open-set filtering mechanism substantially improves real-world applicability by filtering out unrelated activities. Our experiments, involving 25 participants, demonstrate that EchoBreath can recognize 6 typical respiratory symptoms in a laboratory setting with an accuracy of 93.1%. Additionally, an in-the-semi-wild study with 10 participants further validates that EchoBreath can continuously monitor respiratory abnormalities under real-world conditions. We believe that EchoBreath can serve as an unobtrusive and reliable way to monitor abnormal respiratory symptoms.
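The abstract does not spell out how the 'Null' class and the open-set filter interact, so here is a minimal sketch of one plausible rejection step, assuming a softmax output and a confidence threshold; the class list and threshold value below are illustrative assumptions, not values reported by the paper.

import numpy as np

# Illustrative label set: a 'Null' class absorbs background noise, and a
# confidence threshold additionally rejects unfamiliar (open-set) sounds.
CLASSES = ["cough", "sniffle", "throat-clear", "sigh", "sneeze", "breath", "Null"]
REJECT_THRESHOLD = 0.7  # assumed value; not reported in the abstract

def classify_window(logits: np.ndarray) -> str:
    """Map one network output vector to a symptom label, or discard the window."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()  # softmax
    best = int(np.argmax(probs))
    if CLASSES[best] == "Null" or probs[best] < REJECT_THRESHOLD:
        return "rejected"  # background noise or unknown activity
    return CLASSES[best]

print(classify_window(np.array([0.2, 4.1, 0.3, 0.1, 0.0, 0.2, 0.4])))  # -> sniffle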
5
Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives
Meredith Ringel Morris (Google DeepMind, Seattle, Washington, United States), Jed R. Brubaker (University of Colorado Boulder, Boulder, Colorado, United States)
As AI systems quickly improve in both breadth and depth of performance, they lend themselves to creating increasingly powerful and realistic agents, including the possibility of agents modeled on specific people. We anticipate that within our lifetimes it may become common practice for people to create custom AI agents to interact with loved ones and/or the broader world after death; indeed, the past year has seen a boom in startups purporting to offer such services. We call these generative ghosts since such agents will be capable of generating novel content rather than merely parroting content produced by their creator while living. In this paper, we reflect on the history of technologies for AI afterlives, including current early attempts by individual enthusiasts and startup companies to create generative ghosts. We then introduce a novel design space detailing potential implementations of generative ghosts. We use this analytic framework to ground a discussion of the practical and ethical implications of various approaches to designing generative ghosts, including potential positive and negative impacts on individuals and society. Based on these considerations, we lay out a research agenda for the AI and HCI research communities to better understand the risk/benefit landscape of this novel technology to ultimately empower people who wish to create and interact with AI afterlives to do so in a beneficial manner.
5
What Comes After Noticing?: Reflections on Noticing Solar Energy and What Came Next
Angella Mackey (Amsterdam University of Applied Sciences, Amsterdam, Netherlands), David NG McCallum (Rotterdam University of Applied Science, Rotterdam, Netherlands), Oscar Tomico (Eindhoven University of Technology, Eindhoven, Netherlands), Martijn de Waal (Amsterdam University of Applied Sciences, Amsterdam, Netherlands)
Many design researchers have been exploring what it means to take a more-than-human design approach in their practice. In particular, the technique of “noticing” has been explored as a way of intentionally opening a designer’s awareness to more-than-human worlds. In this paper we present autoethnographic accounts of our own efforts to notice solar energy. Through two studies we reflect on the transformative potential of noticing the more-than-human, and the difficulties in trying to sustain this change in oneself and one’s practice. We propose that noticing can lead to activating exiled capacities within the noticer, relational abilities that lie dormant in each of us. We also propose that emphasising sense-fullness in and through design can be helpful in the face of broader psychological or societal boundaries that block paths towards more relational ways of living with non-humans.
4
SqueezeMe: Creating Soft Inductive Pressure Sensors with Ferromagnetic Elastomers
Thomas Preindl (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy), Andreas Pointner (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy), Nimal Jagadeesh Kumar (University of Sussex, Brighton, United Kingdom), Nitzan Cohen (Free University of Bozen-Bolzano, Bolzano, Italy), Niko Münzenrieder (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy), Michael Haller (Free University of Bozen-Bolzano, Bolzano, Italy)
We introduce SqueezeMe, a soft and flexible inductive pressure sensor with high sensitivity made from ferromagnetic elastomers for wearable and embedded applications. Constructed with silicone polymers and ferromagnetic particles, this biocompatible sensor responds to pressure and deformation by varying inductance through ferromagnetic particle density changes, enabling precise measurements. We detail the fabrication process and demonstrate how silicones with varying Shore hardness and different ferromagnetic fillers affect the sensor's sensitivity. Applications like weight, air pressure, and pulse measurements showcase the sensor’s versatility for integration into soft robotics and flexible electronics.
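Because the sensing principle is an inductance shift under pressure, readout reduces to a calibration lookup. A minimal sketch, assuming made-up calibration points; the paper's actual curves depend on the silicone's Shore hardness and the ferromagnetic filler.

import numpy as np

# Hypothetical calibration points for one sensor sample:
# coil inductance in microhenries vs. applied pressure in kilopascals.
CAL_L_UH = np.array([12.0, 12.6, 13.5, 14.8, 16.4])   # assumed values
CAL_P_KPA = np.array([0.0, 10.0, 25.0, 50.0, 100.0])  # assumed values

def pressure_from_inductance(l_uh: float) -> float:
    """Interpolate applied pressure from a measured inductance reading."""
    return float(np.interp(l_uh, CAL_L_UH, CAL_P_KPA))

print(pressure_from_inductance(13.0))  # roughly 16-17 kPa with these points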
4
Exploring Mobile Touch Interaction with Large Language Models
Tim Zindulka (University of Bayreuth, Bayreuth, Germany), Jannek Maximilian Sekowski (University of Bayreuth, Bayreuth, Germany), Florian Lehmann (University of Bayreuth, Bayreuth, Germany), Daniel Buschek (University of Bayreuth, Bayreuth, Germany)
Interacting with Large Language Models (LLMs) for text editing on mobile devices currently requires users to break out of their writing environment and switch to a conversational AI interface. In this paper, we propose to control the LLM via touch gestures performed directly on the text. We first chart a design space that covers fundamental touch input and text transformations. In this space, we then concretely explore two control mappings: spread-to-generate and pinch-to-shorten, with visual feedback loops. We evaluate this concept in a user study (N=14) that compares three feedback designs: no visualisation, text length indicator, and length + word indicator. The results demonstrate that touch-based control of LLMs is both feasible and user-friendly, with the length + word indicator proving most effective for managing text generation. This work lays the foundation for further research into gesture-based interaction with LLMs on touch devices.
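A minimal sketch of the gesture-to-LLM mapping the paper describes (spread-to-generate, pinch-to-shorten), assuming a placeholder call_llm function and illustrative prompts; the authors' actual prompts and scaling rules are not given in the abstract.

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM API the system uses (assumption)."""
    return f"[LLM output for: {prompt[:40]}...]"

def apply_gesture(gesture: str, selected_text: str, scale: float) -> str:
    """Map a touch gesture on selected text to an LLM text transformation.

    gesture: "spread" (generate/expand) or "pinch" (shorten).
    scale:   gesture magnitude, e.g. 1.5 = grow by ~50%, 0.5 = halve.
    """
    target_words = max(1, round(len(selected_text.split()) * scale))
    if gesture == "spread":
        prompt = f"Expand the following text to about {target_words} words, keeping its meaning:\n{selected_text}"
    elif gesture == "pinch":
        prompt = f"Shorten the following text to about {target_words} words, keeping its meaning:\n{selected_text}"
    else:
        return selected_text  # unknown gesture: leave the text unchanged
    return call_llm(prompt)

print(apply_gesture("pinch", "This sentence is a little longer than it needs to be.", 0.5))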
4
Customizing Emotional Support: How Do Individuals Construct and Interact With LLM-Powered Chatbots
Xi Zheng (City University of Hong Kong, Hong Kong, China), Zhuoyang LI (City University of Hong Kong, Hong Kong, China), Xinning Gui (The Pennsylvania State University, University Park, Pennsylvania, United States), Yuhan Luo (City University of Hong Kong, Hong Kong, China)
Personalized support is essential to fulfill individuals’ emotional needs and sustain their mental well-being. Large language models (LLMs), with great customization flexibility, hold promises to enable individuals to create their own emotional support agents. In this work, we developed ChatLab, where users could construct LLM-powered chatbots with additional interaction features including voices and avatars. Using a Research through Design approach, we conducted a week-long field study followed by interviews and design activities (N = 22), which uncovered how participants created diverse chatbot personas for emotional reliance, confronting stressors, connecting to intellectual discourse, reflecting mirrored selves, etc. We found that participants actively enriched the personas they constructed, shaping the dynamics between themselves and the chatbot to foster open and honest conversations. They also suggested other customizable features, such as integrating online activities and adjustable memory settings. Based on these findings, we discuss opportunities for enhancing personalized emotional support through emerging AI technologies.
4
BIT: Battery-free, IC-less and Wireless Smart Textile Interface and Sensing System
Weiye Xu (Tsinghua University, Beijing, China), Tony Li (Stony Brook University, Stony Brook, New York, United States), Yuntao Wang (Tsinghua University, Beijing, China), Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada), Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
The development of smart textile interfaces is hindered by the inclusion of rigid hardware components and batteries within the fabric, which pose challenges in terms of manufacturability, usability, and environmental concerns related to electronic waste. To mitigate these issues, we propose a smart textile interface and its wireless sensing system to eliminate the need for ICs, batteries, and connectors embedded into textiles. Our technique is built on the integration of multi-resonant circuits in smart textile interfaces, utilizing near-field electromagnetic coupling between two coils to facilitate wireless power transfer and data acquisition from the smart textile interface. A key aspect of our system is the development of a mathematical model that accurately represents the equivalent circuit of the sensing system. Using this model, we developed a novel algorithm to accurately estimate sensor signals based on changes in system impedance. Through simulation-based experiments and a user study, we demonstrate that our technique effectively supports multiple textile sensors of various types.
4
Understanding and Supporting Peer Review Using AI-reframed Positive Summary
Chi-Lan Yang (The University of Tokyo, Tokyo, Japan), Alarith Uhde (The University of Tokyo, Tokyo, Japan), Naomi Yamashita (NTT, Keihanna, Japan), Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan)
While peer review enhances writing and research quality, harsh feedback can frustrate and demotivate authors. Hence, it is essential to explore how critiques should be delivered to motivate authors and enable them to keep iterating their work. In this study, we explored the impact of appending an automatically generated positive summary to the peer reviews of a writing task, alongside varying levels of overall evaluations (high vs. low), on authors’ feedback reception, revision outcomes, and motivation to revise. Through a 2x2 online experiment with 137 participants, we found that adding an AI-reframed positive summary to otherwise harsh feedback increased authors’ critique acceptance, whereas low overall evaluations of their work led to increased revision efforts. We discuss the implications of using AI in peer feedback, focusing on how AI-driven critiques can influence critique acceptance and support research communities in fostering productive and friendly peer feedback practices.
4
An Approach to Elicit Human-Understandable Robot Expressions to Support Human-Robot Interaction
Jan Leusmann (LMU Munich, Munich, Germany), Steeven Villa (LMU Munich, Munich, Germany), Thomas Liang (University of Illinois Urbana-Champaign, Champaign, Illinois, United States), Chao Wang (Honda Research Institute Europe, Offenbach/Main, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany), Sven Mayer (LMU Munich, Munich, Germany)
Understanding the intentions of robots is essential for natural and seamless human-robot collaboration. Ensuring that robots have means for non-verbal communication is a basis for intuitive and implicit interaction. For this, we describe an approach to elicit and design human-understandable robot expressions. We outline the approach in the context of non-humanoid robots. We paired human mimicking and enactment with research from gesture elicitation in two phases: first, to elicit expressions, and second, to ensure they are understandable. We present an example application through two studies (N=16 & N=260) of our approach to elicit expressions for a simple 6-DoF robotic arm. We show that the approach enabled us to design robot expressions that signal curiosity and interest in getting attention. Our main contribution is an approach to generate and validate understandable expressions for robots, enabling more natural human-robot interaction.
4
SpeechCompass: Enhancing Mobile Captioning with Diarization and Directional Guidance via Multi-Microphone Localization
Artem Dementyev (Google Inc., Mountain View, California, United States), Dimitri Kanevsky (Google, Mountain View, California, United States), Samuel Yang (Google, Mountain View, California, United States), Mathieu Parvaix (Google Research, Mountain View, California, United States), Chiong Lai (Google, Mountain View, California, United States), Alex Olwal (Google Inc., Mountain View, California, United States)
Speech-to-text capabilities on mobile devices have proven helpful for hearing and speech accessibility, language translation, note-taking, and meeting transcripts. However, our foundational large-scale survey (n=263) shows that the inability to distinguish and indicate speaker direction makes them challenging in group conversations. SpeechCompass addresses this limitation through real-time, multi-microphone speech localization, where the direction of speech allows visual separation and guidance (e.g., arrows) in the user interface. We introduce efficient real-time audio localization algorithms and custom sound perception hardware, running on a low-power microcontroller with four integrated microphones, which we characterize in technical evaluations. Informed by a large-scale survey (n=494), we conducted an in-person study of group conversations with eight frequent users of mobile speech-to-text, who provided feedback on five visualization styles. The value of diarization and visualizing localization was consistent across participants, with everyone agreeing on the value and potential of directional guidance for group conversations.
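The abstract does not detail the localization algorithm, but a standard way to estimate speech direction from a microphone pair is cross-correlation-based time difference of arrival. A minimal far-field sketch, where the microphone spacing and sample rate handling are assumptions rather than SpeechCompass's actual pipeline:

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def tdoa(sig_a: np.ndarray, sig_b: np.ndarray, fs: int) -> float:
    """Time difference of arrival between two microphones via cross-correlation.

    A positive result means sig_a lags sig_b, i.e. the source is closer to mic B.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / fs

def doa_degrees(sig_a, sig_b, fs: int, mic_spacing_m: float = 0.08) -> float:
    """Rough direction of arrival for one mic pair under a far-field assumption."""
    delay = tdoa(sig_a, sig_b, fs)
    s = np.clip(delay * SPEED_OF_SOUND / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))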
4
How Can Interactive Technology Help Us to Experience Joy With(in) the Forest? Towards a Taxonomy of Tech for Joyful Human-Forest Interactions
Ferran Altarriba Bertran (Tampere University, Tampere, Finland), Oğuz 'Oz' Buruk (Tampere University, Tampere, Finland), Jordi Márquez Puig (Universitat de Girona, Salt, Girona, Spain), Juho Hamari (Tampere University, Tampere, Finland)
This paper presents intermediate-level knowledge in the form of a taxonomy that highlights 12 different ways in which interactive tech might support forest-related experiences that are joyful for humans. It can inspire and provide direction for designs that aim to enrich the experiential texture of forests. The taxonomy stemmed from a reflexive analysis of 104 speculative ideas produced during a year-long co-design process, where we co-experienced and creatively engaged a diverse range of forests and forest-related activities with 250+ forest-goers with varied backgrounds and sensitivities. Given the breadth of forests and populations involved, our work foregrounds a rich set of design directions that set an actionable early frame for creating tech that supports joyful human-forest interplays – one that we hope will be extended and consolidated in future research, ours and others'.
4
FlexiVol: a Volumetric Display with an Elastic Diffuser to Enable Reach-Through Interaction
Elodie Bouzbib (Universidad Publica de Navarra, Pamplona, Spain), Iosune Sarasate Azcona (Universidad Pública de Navarra, Pamplona, Spain), Unai Javier Fernández (Universidad Pública de Navarra, Pamplona, Spain), Ivan Fernández (Universidad Pública de Navarra, Pamplona, Navarra, Spain), Manuel Lopez-Amo (Universidad Pública de Navarra, Pamplona, Spain), Iñigo Ezcurdia (Public University of Navarra, Pamplona, Spain), Asier Marzo (Universidad Publica de Navarra, Pamplona, Navarre, Spain)
Volumetric displays render true 3D graphics without forcing users to wear headsets or glasses. However, the optical diffusers that volumetric displays employ are rigid and thus do not allow for direct interaction. FlexiVol employs elastic diffusers to allow users to reach inside the display volume to have direct interaction with true 3D content. We explored various diffuser materials in terms of visual and mechanical properties. We correct the distortions of the volumetric graphics projected on elastic oscillating diffusers and propose a design space for FlexiVol, enabling various gestures and actions through direct interaction techniques. A user study suggests that selection, docking and tracing tasks can be performed faster and more precisely using direct interaction when compared to indirect interaction with a 3D mouse. Finally, applications such as a virtual pet or landscape editing highlight the advantages of a volumetric display that supports direct interaction.
4
Xstrings: 3D Printing Cable-Driven Mechanism for Actuation, Deformation, and Manipulation
Jiaji Li (MIT, Cambridge, Massachusetts, United States), Shuyue Feng (Zhejiang University, Hangzhou, China), Maxine Perroni-Scharf (MIT, Cambridge, Massachusetts, United States), Yujia Liu (Tsinghua University, Beijing, China), Emily Guan (Pratt Institute, Brooklyn, New York, United States), Guanyun Wang (Zhejiang University, Hangzhou, China), Stefanie Mueller (MIT CSAIL, Cambridge, Massachusetts, United States)
In this paper, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we present a design tool that allows users to embed cable-driven mechanisms into object geometries based on their desired interactions by automatically placing joints and cables inside the object. To assess our system, we investigate the effect of printing parameters on the strength of Xstrings objects and the extent to which the interactions are repeatable without cable breakage. We demonstrate the application potential of Xstrings through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
4
Trusting Tracking: Perceptions of Non-Verbal Communication Tracking in Videoconferencing
Carlota Vazquez Gonzalez (King's College London, London, United Kingdom), Timothy Neate (King's College London, London, United Kingdom), Rita Borgo (King's College London, London, England, United Kingdom)
Videoconferencing is integral to modern work and living. Recently, technologists have sought to leverage data captured -- e.g. from cameras and microphones -- to augment communication. This might mean capturing communication information about verbal (e.g. speech, chat messages), or non-verbal exchanges (e.g. body language, gestures, tone of voice) and using this to mediate -- and potentially improve -- communication. However, such tracking has implications for user experience and raises wider concerns (e.g. privacy). To design tools which account for user needs and preferences, this study investigates perspectives on communication tracking through a global survey and interviews, exploring how daily behaviours and the impact of specific features influence user perspectives. We examine user preferences on non-verbal communication tracking, preferred methods of how this information is conveyed and to whom this should be communicated. Our findings aim to guide the development of non-verbal communication tools which augment videoconferencing that prioritise user needs.
4
Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception
Martin Feick (DFKI and Saarland University, Saarland Informatics Campus, Saarbrücken, Germany), Xuxin Tang (Computer Science Department, Blacksburg, Virginia, United States), Raul Garcia-Martin (Universidad Carlos III de Madrid, Leganes, Madrid, Spain), Alexandru Luchianov (MIT CSAIL, Cambridge, Massachusetts, United States), Roderick Wei Xiao Huang (MIT CSAIL, Cambridge, Massachusetts, United States), Chang Xiao (Adobe Research, San Jose, California, United States), Alexa Siu (Adobe Research, San Jose, California, United States), Mustafa Doga Dogan (Adobe Research, Basel, Switzerland)
Hybrid paper interfaces leverage augmented reality to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, virtual content can be embedded through direct links (e.g., QR codes); however, this impacts the aesthetics of the paper print and limits the available visual content space. To address this problem, we present Imprinto, an infrared inkjet watermarking technique that allows for invisible content embeddings only by using off-the-shelf IR inks and a camera. Imprinto was established through a psychophysical experiment, studying how much IR ink can be used while remaining invisible to users regardless of background color. We demonstrate that we can detect invisible IR content through our machine learning pipeline, and we developed an authoring tool that optimizes the amount of IR ink on the color regions of an input document for machine and human detectability. Finally, we demonstrate several applications, including augmenting paper documents and objects.
4
InsightBridge: Enhancing Empathizing with Users through Real-Time Information Synthesis and Visual Communication
Junze Li (The Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Yue Zhang (Shenzhen University, Shenzhen, China), Chengbo Zheng (Hong Kong University of Science and Technology, Hong Kong, Hong Kong), Dingdong Liu (The Hong Kong University of Science and Technology, Hong Kong, China), Zeyu Huang (The Hong Kong University of Science and Technology, New Territories, Hong Kong), Xiaojuan Ma (Hong Kong University of Science and Technology, Hong Kong, Hong Kong)
User-centered design necessitates researchers deeply understanding target users throughout the design process. However, during early-stage user interviews, researchers may misinterpret users due to time constraints, incorrect assumptions, and communication barriers. To address this challenge, we introduce InsightBridge, a tool that supports real-time, AI-assisted information synthesis and visual-based verification. InsightBridge automatically organizes relevant information from ongoing interview conversations into an empathy map. It further allows researchers to specify elements to generate visual abstracts depicting the selected information, and then review these visuals with users to refine the visuals as needed. We evaluated the effectiveness of InsightBridge through a within-subject study (N=32) from both the researchers’ and users’ perspectives. Our findings indicate that InsightBridge can assist researchers in note-taking and organization, as well as in-time visual checking, thereby enhancing mutual understanding with users. Additionally, users’ discussions of visuals prompt them to recall overlooked details and scenarios, leading to more insightful ideas.
4
Towards Understanding Interactive Sonic Gastronomy with Chefs and Diners
Hongyue Wang (Monash University, Melbourne, Australia), Jialin Deng (Department of Human-Centred Computing, Monash University, Melbourne, Victoria, Australia), Linjia He (Monash University, Melbourne, Australia), Nathalie Overdevest (Monash University, Clayton, VIC, Australia), Ryan Wee (Monash University, Melbourne, Victoria, Australia), Yan Wang (Monash University, Melbourne, Australia), Phoebe O. Toups Dugas (Monash University, Melbourne, Australia), Don Samitha Elvitigala (Monash University, Melbourne, Australia), Florian ‘Floyd’ Mueller (Monash University, Melbourne, VIC, Australia)
With advancements in interactive technologies, research in human-food interaction (HFI) has begun to employ interactive sound to enrich the dining experience. However, chefs' creative use of this sonic interactivity as a new "ingredient" in their culinary practices remains underexplored. In response, we conducted an empirical study with six pairs of chefs and diners utilizing SoniCream, an ice cream cone that plays digital sounds while being consumed. Through exploration, creation, collaboration, and reflection, we identified four themes concerning culinary creativity, dining experience, interactive sonic gastronomy deployment, and chef-diner interplay. Building on the discussions at the intersection of these themes, we derived four design implications for creating interactive systems that could support chefs' culinary creativity, thereby enriching dining experiences. Ultimately, our work aims to help interaction designers fully incorporate chefs' perspectives into HFI research.
4
Ego vs. Exo and Active vs. Passive: Investigating the Individual and Combined Effects of Viewpoint and Navigation on Spatial Immersion and Understanding in Immersive Storytelling
Tao Lu (Georgia Institute of Technology, Atlanta, Georgia, United States), Qian Zhu (The Hong Kong University of Science and Technology, Hong Kong, China), Tiffany S. Ma (Georgia Institute of Technology, Atlanta, Georgia, United States), Wong Kam-Kwai (The Hong Kong University of Science and Technology, Hong Kong, China), Anlan Xie (Georgia Institute of Technology, Atlanta, Georgia, United States), Alex Endert (Georgia Institute of Technology, Atlanta, Georgia, United States), Yalong Yang (Georgia Institute of Technology, Atlanta, Georgia, United States)
Visual storytelling combines visuals and narratives to communicate important insights. While web-based visual storytelling is well-established, leveraging the next generation of digital technologies for visual storytelling, specifically immersive technologies, remains underexplored. We investigated the impact of the story viewpoint (from the audience's perspective) and navigation (when progressing through the story) on spatial immersion and understanding. First, we collected web-based 3D stories and elicited design considerations from three VR developers. We then adapted four selected web-based stories to an immersive format. Finally, we conducted a user study (N=24) to examine egocentric and exocentric viewpoints, active and passive navigation, and the combinations they form. Our results indicated significantly higher preferences for egocentric+active (higher agency and engagement) and exocentric+passive (higher focus on content). We also found a marginal significance of viewpoints on story understanding and a strong significance of navigation on spatial immersion.
4
Attracting Fingers with Waves: Potential Fields Using Active Lateral Forces Enhance Touch Interactions
Zhaochong Cai (Delft University of Technology, Delft, Netherlands), David Abbink (Delft University of Technology, Delft, Netherlands), Michael Wiertlewski (Delft University of Technology, Delft, Netherlands)
Touchscreens and touchpads offer intuitive interfaces but provide limited tactile feedback, usually just mechanical vibrations. These devices lack continuous feedback to guide users’ fingers toward specific directions. Recent innovations in surface haptic devices, however, leverage ultrasonic traveling waves to create active lateral forces on a bare fingertip. This paper investigates the effects and design possibilities of active force feedback in touch interactions by rendering artificial potential fields on a touchpad. Three user studies revealed that: (1) users perceived attractive and repulsive fields as bumps and holes with similar detection thresholds; (2) step-wise force fields improved targeting by 22.9% compared to friction-only methods; and (3) active force fields effectively communicated directional cues to the users. Several applications were tested, with user feedback favoring this approach for its enhanced tactile experience, added enjoyment, realism, and ease of use.
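As an illustration of what "rendering an artificial potential field" can mean in code, here is a minimal sketch that derives a lateral force as the negative gradient of a quadratic potential centered on a target; the quadratic form, stiffness, and force cap are assumed values, not the paper's parameters.

import numpy as np

def attractive_force(finger_xy, target_xy, stiffness=0.6, max_force=0.3):
    """Lateral force (N) pulling the fingertip toward a target on the touchpad.

    Models the target as the minimum of the potential 0.5 * k * |x - target|^2;
    the rendered force is its negative gradient, clipped to the device maximum.
    A repulsive field is the same computation with the sign flipped.
    """
    finger = np.asarray(finger_xy, dtype=float)
    target = np.asarray(target_xy, dtype=float)
    force = stiffness * (target - finger)
    norm = np.linalg.norm(force)
    if norm > max_force:
        force *= max_force / norm
    return force

print(attractive_force((0.02, 0.01), (0.05, 0.03)))  # points toward the target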
4
Everything to Gain: Combining Area Cursors with increased Control-Display Gain for Fast and Accurate Touchless Input
Kieran Waugh (University of Glasgow, Glasgow, Scotland, United Kingdom), Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom), Euan Freeman (University of Glasgow, Glasgow, United Kingdom)
Touchless displays often use mid-air gestures to control on-screen cursors for pointer interactions. Area cursors can simplify touchless cursor input by implicitly targeting nearby widgets without the cursor entering the target. However, for displays with dense target layouts, the cursor still has to arrive close to the widget, meaning the benefits of area cursors for time-to-target and effort are diminished. Through two experiments, we demonstrate for the first time that fine-tuning the mapping between hand and cursor movements (control-display gain -- CDG) can address the deficiencies of area cursors and improve the performance of touchless interaction. Across several display sizes and target densities (representative of myriad public displays used in retail, transport, museums, etc.), our findings show that the forgiving nature of an area cursor compensates for the imprecision of a high CDG, helping users interact more effectively with smaller and more controlled hand/arm movements.
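The two ingredients the study combines, an area cursor and an increased control-display gain, compose in a straightforward way. A minimal sketch with assumed gain and radius values (the paper evaluates several configurations, not these specific numbers):

import math

def update_cursor(cursor, hand_delta, gain=2.5):
    """Apply control-display gain: small hand movements become larger cursor movements."""
    return (cursor[0] + hand_delta[0] * gain,
            cursor[1] + hand_delta[1] * gain)

def area_cursor_target(cursor, targets, radius=120.0):
    """Return the nearest target centre within the area cursor's radius, if any."""
    best, best_d = None, radius
    for t in targets:
        d = math.dist(cursor, t)
        if d <= best_d:
            best, best_d = t, d
    return best

cursor = update_cursor((400.0, 300.0), hand_delta=(10.0, -4.0))
print(area_cursor_target(cursor, targets=[(450.0, 280.0), (800.0, 600.0)]))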
3
"Grab the Chat and Stick It to My Wall": Understanding How Social VR Streamers Bridge Immersive VR Experiences with Streaming Audiences Outside VR
Yang Hu (Clemson University, Clemson, South Carolina, United States), Guo Freeman (Clemson University, Clemson, South Carolina, United States), Ruchi Panchanadikar (Clemson University, Clemson, South Carolina, United States)
Social VR platforms are increasingly transforming online social spaces by enhancing embodied and immersive social interactions within VR. However, how social VR users also share their activities outside the social VR platform, such as on 2D live streaming platforms, is an increasingly popular yet understudied phenomenon that blends social VR and live streaming research. Through 17 interviews with experienced social VR streamers, we unpack social VR streamers' innovative strategies to further blur the boundary between VR and non-VR spaces to engage their audiences and potential limitations of their strategies. We add new insights into how social VR streamers transcend traditional 2D streamer-audience engagement, which also extends our current understanding of cross-reality interactions. Grounded in these insights, we propose design implications to better support more complicated cross-reality dynamics in social VR streaming while mitigating potential tensions, in hopes of achieving more inclusive, engaging, and secure cross-reality environments in the future.
3
User-defined Co-speech Gesture Design with Swarm Robots
Minh Duc Dang (Simon Fraser University, Burnaby, British Columbia, Canada), Samira Pulatova (Simon Fraser University, Burnaby, British Columbia, Canada), Lawrence H. Kim (Simon Fraser University, Burnaby, British Columbia, Canada)
Non-verbal signals, including co-speech gestures, play a vital role in human communication by conveying nuanced meanings beyond verbal discourse. While researchers have explored co-speech gestures in human-like conversational agents, limited attention has been given to non-humanoid alternatives. In this paper, we propose using swarm robotic systems as conversational agents and introduce a foundational set of swarm-based co-speech gestures, elicited from non-technical users and validated through an online study. This work outlines the key software and hardware requirements to advance research in co-speech gesture generation with swarm robots, contributing to the future development of social robotics and conversational agents.
3
Layered Interactions: Exploring Non-Intrusive Digital Craftsmanship Design Through Lacquer Art Interfaces
Yan Dong (Academy of Fine Arts, Beijing, China), Hanjie Yu (Tsinghua University, Beijing, China), Yanran Chen (Tsinghua University, Beijing, Haidian, China), Zipeng Zhang (Tsinghua University, Beijing, China), Wu Qiong (Tsinghua University, Beijing, China)
Integrating technology with the distinctive characteristics of craftsmanship has become a key issue in the field of digital craftsmanship. This paper introduces Layered Interactions, a design approach that seamlessly merges Human-Computer Interaction (HCI) technologies with traditional lacquerware craftsmanship. By leveraging the multi-layer structure and material properties of lacquerware, we embed interactive circuits and integrate programmable hardware within the layers, creating tangible interfaces that support diverse interactions. This method enhances the adaptability and practicality of traditional crafts in modern digital contexts. Through the development of a lacquerware toolkit, along with user experiments and semi-structured interviews, we demonstrate that this approach not only makes technology more accessible to traditional artisans but also enhances the materiality and emotional qualities of interactive interfaces. Additionally, it fosters mutual learning and collaboration between artisans and technologists. Our research introduces a cross-disciplinary perspective to the HCI community, broadening the material and design possibilities for interactive interfaces.
3
ProtoPCB: Reclaiming Printed Circuit Board E-waste as Prototyping Material
Jasmine Lu (University of Chicago, Chicago, Illinois, United States), Sai Rishitha Boddu (University of Chicago, Chicago, Illinois, United States), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose an interactive tool that enables reusing printed circuit boards (PCB) as prototyping materials to implement new circuits — this extends the utility of PCBs rather than discards them as e-waste. To enable this, our tool takes a user’s desired circuit schematic and analyzes its components and connections to find methods of creating the user’s circuit on discarded PCBs (e.g., e-waste, old prototypes). In our technical evaluation, we utilized our tool across a diverse set of PCBs and input circuits to characterize how often circuits could be implemented on a different board, implemented with minor interventions (trace-cutting or bodge-wiring), or implemented on a combination of multiple boards — demonstrating how our tool assists with exhaustive matching tasks that a user would not likely perform manually. We believe our tool offers: (1) a new approach to prototyping with electronics beyond the limitations of breadboards and (2) a new approach to reducing e-waste during electronics prototyping.
3
"You Go Through So Many Emotions Scrolling Through Instagram": How Teens Use Instagram To Regulate Their Emotions
Katie Davis (University of Washington, Seattle, Washington, United States), Rotem Landesman (University of Washington, Seattle, Washington, United States), Jina Yoon (University of Washington, Seattle, Washington, United States), JaeWon Kim (University of Washington, Seattle, Washington, United States), Daniela E. Munoz Lopez (University of Washington, Seattle, Washington, United States), Lucia Magis-Weinberg (University of Washington, Seattle, Washington, United States), Alexis Hiniker (University of Washington, Seattle, Washington, United States)
Prior work has documented various ways that teens use social media to regulate their emotions. However, little is known about what these processes look like on a moment-by-moment basis. We conducted a diary study to investigate how teens (N=57, mean age = 16.3 years) used Instagram to regulate their emotions. We identified three kinds of emotionally-salient drivers that brought teens to Instagram and two types of behaviors that impacted their emotional experiences on the platform. Teens described going to Instagram to escape, to engage, and to manage the demands of the platform. Once on Instagram, their primary behaviors consisted of mindless diversions and deliberate acts. Although teens reported many positive emotional responses, the variety, unpredictability, and habitual nature of their experiences revealed Instagram to be an unreliable tool for emotion regulation (ER). We present a model of teens’ ER processes on Instagram and offer design considerations for supporting adolescent emotion regulation.
3
IntelliLining: Activity Sensing through Textile Interlining Sensors Using TENGs
Mahdie Ghane Ezabadi (Simon Fraser University, Burnaby, British Columbia, Canada), Aditya Shekhar Nittala (University of Calgary, Calgary, Alberta, Canada), Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada), Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
We introduce a novel component for smart garments: smart interlining, and validate its technical feasibility through a series of experiments. Our work involved the implementation of a prototype that employs a textile vibration sensor based on Triboelectric Nanogenerators (TENGs), commonly used for activity detection. We explore several unique features of smart interlining, including how sensor signals and patterns are influenced by factors such as the size and shape of the interlining sensor, the location of the vibration source within the sensor area, and various propagation media, such as airborne and surface vibrations. We present our study results and discuss how these findings support the feasibility of smart interlining. Additionally, we demonstrate that smart interlinings on a shirt can detect a variety of user activities involving the hand, mouth, and upper body, achieving an accuracy rate of 93.9% in the tested activities.
3
ViFeed: Promoting Slow Eating and Food Awareness through Strategic Video Manipulation during Screen-Based Dining
Yang Chen (National University of Singapore, Singapore, Singapore), Felicia Fang-Yi Tan (National University of Singapore, Singapore, Singapore), Zhuoyu Wang (National University of Singapore, Singapore, Singapore), Xing Liu (Hangzhou Holographic Intelligence Institute, Hangzhou, China), Jiayi Zhang (National University of Singapore, Singapore, Singapore), Yun Huang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States), Shengdong Zhao (City University of Hong Kong, Hong Kong, China), Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)
Given the widespread presence of screens during meals, we question the notion that digital engagement is inherently incompatible with mindfulness. We demonstrate how the strategic design of digital content can enhance two core aspects of mindful eating: slow eating and food awareness. Our research unfolded in three sequential studies: (1) Zoom Eating Study: Contrary to the assumption that video-watching leads to distraction and overeating, this study revealed that subtle video speed manipulations can promote slower eating (by 15.31%) and controlled food intake (by 9.65%) while maintaining meal satiation and satisfaction. (2) Co-design workshop: Informed the development of ViFeed, a video playback system strategically incorporating subtle speed adjustments and glanceable visual cues. (3) Field Study: A week-long deployment of ViFeed in daily eating demonstrated its efficacy in fostering food awareness, food appreciation, and sustained engagement. By bridging the gap between ideal mindfulness practices and screen-based behaviors, this work offers insights for designing digital-wellbeing interventions that align with, rather than against, existing habits.
3
TutorCraftEase: Enhancing Pedagogical Question Creation with Large Language Models
Wenhui Kang (University of Chinese Academy of Sciences, Beijing, China), Lin Zhang (University of Stuttgart, Stuttgart, Germany), Xiaolan Peng (Institute of Software, Chinese Academy of Sciences, Beijing, China), Hao Zhang (Chinese Academy of Sciences, Beijing, China), Anchi Li (Beijing University of Technology, Beijing, China), Mengyao Wang (State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China), Jin Huang (Chinese Academy of Sciences, Beijing, China), Feng Tian (Institute of Software, Chinese Academy of Sciences, Beijing, China), Guozhong Dai (Chinese Academy of Sciences, Beijing, China)
Pedagogical questions are crucial for fostering student engagement and learning. In daily teaching, teachers pose hundreds of questions to assess understanding, enhance learning outcomes, and facilitate the transfer of theory-rich content. However, even experienced teachers often struggle to generate a large volume of effective pedagogical questions. To address this, we introduce TutorCraftEase, an interactive generation system that leverages large language models (LLMs) to assist teachers in creating pedagogical questions. TutorCraftEase enables the rapid generation of questions at varying difficulty levels with a single click, while also allowing for manual review and refinement. In a comparative user study with 39 participants, we evaluated TutorCraftEase against a traditional manual authoring tool and a basic LLM tool. The results show that TutorCraftEase can generate pedagogical questions comparable in quality to those created by experienced teachers, while significantly reducing their workload and time.
3
Sonic Delights: Exploring the Design of Food as An Auditory-Gustatory Interface
Jialin Deng (Department of Human-Centred Computing, Monash University, Melbourne, Victoria, Australia), Yinyi Li (Monash University, Melbourne, Victoria, Australia), Hongyue Wang (Monash University, Melbourne, Victoria, Australia), Ziqi Fang (Imperial College London, London, United Kingdom), Florian ‘Floyd’ Mueller (Monash University, Melbourne, VIC, Australia)
While interest in blending sound with culinary experiences has grown in Human-Food Interaction (HFI), the significance of food’s material properties in shaping sound-related interactions has largely been overlooked. This paper explores the opportunity to enrich the HFI experience by treating food not merely as passive nourishment but as an integral material in computational architecture with input/output capabilities. We introduce “Sonic Delights,” where food is a comestible auditory-gustatory interface to enable users to interact with and consume digital sound. This concept redefines food as a conduit for interactive auditory engagement, shedding light on the untapped multisensory possibilities of merging taste with digital sound. An associated study allowed us to articulate design insights for forthcoming HFI endeavors that seek to weave food into multisensory design, aiming to further the integration of digital interactivity with the culinary arts.
3
Estimating the Effects of Encumbrance and Walking on Mixed Reality Interaction
Tinghui Li (University of Sydney, Sydney, Australia), Eduardo Velloso (University of Sydney, Sydney, New South Wales, Australia), Anusha Withana (The University of Sydney, Sydney, NSW, Australia), Zhanna Sarsenbayeva (University of Sydney, Sydney, Australia)
This paper investigates the effects of two situational impairments---encumbrance (i.e., carrying a heavy object) and walking---on interaction performance in canonical mixed reality tasks. We built Bayesian regression models of movement time, pointing offset, error rate, and throughput for the target acquisition task, and of throughput, uncorrected error rate (UER), and corrected error rate (CER) for the text entry task to estimate these effects. Our results indicate that 1.0 kg encumbrance increases selection movement time by 28%, decreases text entry throughput by 17%, and increases UER by 50%, but does not affect pointing offset. Walking led to a 63% increase in ray-cast movement time and a 51% reduction in text entry throughput. It also increased selection pointing offset by 16%, ray-cast pointing offset by 17%, and error rate by 8.4%. The interaction effect of 1.0 kg encumbrance and walking resulted in a 112% increase in ray-cast movement time. Our findings enhance the understanding of the effects of encumbrance and walking on mixed reality interaction and contribute to the accumulating knowledge of situational impairments research in mixed reality.
3
Slip Casting as a Machine for Making Textured Ceramic Interfaces
Bo Han (National University of Singapore, Singapore, Singapore), Jared Lim (National University of Singapore, Singapore, Singapore), Kianne Lim (National University of Singapore, Singapore, Singapore), Adam Choo (National University of Singapore, Singapore, Singapore), Ching Chiuan Yen (National University of Singapore, Singapore, Singapore), Genevieve Ang (Independent Artist, Singapore, Singapore), Clement Zheng (National University of Singapore, Singapore, Singapore)
Ceramics provide a rich domain for exploring craft, fabrication, and diverse material textures that enhance tangible interaction. In this work, we explored slip-casting, a traditional ceramic technique where liquid clay is poured into a porous plaster mold that absorbs water from the slip to form a clay body. We adapted this process into an approach we called Resist Slip-Casting. By selectively masking the mold’s surface with stickers to vary its water absorption rate, our approach enables makers to create ceramic objects with intricate textured surfaces, while also allowing the customization of a single mold for different outcomes. In this paper, we detail the resist slip-casting process and demonstrate its application by crafting a range of tangible interfaces with customizable visual symbols, tactile features, and decorative elements. We further discuss our approach within the broader conversation in HCI on fabrication machines that promote creative collaboration between humans, materials, and tools.
3
BudsID: Mobile-Ready and Expressive Finger Identification Input for Earbuds
Jiwan Kim (KAIST, Daejeon, Korea, Republic of), Mingyu Han (UNIST, Ulsan, Korea, Republic of), Ian Oakley (KAIST, Daejeon, Korea, Republic of)
Wireless earbuds are an appealing platform for wearable computing on-the-go. However, their small size and out-of-view location mean they support limited different inputs. We propose finger identification input on earbuds as a novel technique to resolve these problems. This technique involves associating touches by different fingers with different responses. To enable it on earbuds, we adapted prior work on smartwatches to develop a wireless earbud featuring a magnetometer that detects fields from a magnetic ring. A first study reveals participants achieve rapid, precise earbud touches with different fingers, even while mobile (time: 0.98s, errors: 5.6%). Furthermore, touching fingers can be accurately classified (96.9%). A second study shows strong performance with a more expressive technique involving multi-finger double-taps (inter-touch time: 0.39s, errors: 2.8%) while maintaining high accuracy (94.7%). We close by exploring and evaluating the design of earbud finger identification applications and demonstrating the feasibility of our system on low-resource devices.
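The abstract does not specify the classifier behind the 96.9% finger identification accuracy, so the sketch below uses a simple nearest-centroid stand-in over magnetometer features captured at touch time, purely to illustrate the structure of finger identification; the feature layout and labels are assumptions.

import numpy as np

class FingerClassifier:
    """Nearest-centroid classifier over magnetometer features at touch time.

    A simplified stand-in for whatever model BudsID actually uses; features
    might be, e.g., the 3-axis magnetic field vector plus its magnitude.
    """
    def fit(self, features: np.ndarray, labels: np.ndarray):
        self.classes_ = np.unique(labels)
        self.centroids_ = np.stack([features[labels == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, feature: np.ndarray) -> str:
        dists = np.linalg.norm(self.centroids_ - feature, axis=1)
        return str(self.classes_[int(np.argmin(dists))])

# Hypothetical usage with pre-collected training data:
# clf = FingerClassifier().fit(train_features, train_labels)
# clf.predict(np.array([12.0, -3.5, 40.2, 42.1]))  # -> e.g. "index"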
3
corobos: A Design for Mobile Robots Enabling Cooperative Transitions between Table and Wall Surfaces
Changyo Han (The University of Tokyo, Tokyo, Japan)Yosuke Nakagawa (The University of Tokyo, Tokyo, Japan)Takeshi Naemura (The University of Tokyo, Bunkyo-ku, Tokyo, Japan)
Swarm User Interfaces allow dynamic arrangement of user environments through the use of multiple mobile robots, but their operational range is typically confined to a single plane due to constraints imposed by their two-wheel propulsion systems. We present corobos, a proof-of-concept design that enables these robots to cooperatively transition between table (horizontal) and wall (vertical) surfaces seamlessly, without human intervention. Each robot is equipped with a uniquely designed slope structure that facilitates smooth rotation when another robot pushes it toward a target surface. Notably, this design relies solely on passive mechanical elements, eliminating the need for additional active electrical components. We investigated the design parameters of this structure and evaluated its transition success rate through experiments. Furthermore, we demonstrate various application examples to showcase the potential of corobos in enhancing user environments.
3
PCB Renewal: Iterative Reuse of PCB Substrates for Sustainable Electronic Making
Zeyu Yan (University of Maryland, College Park, Maryland, United States)Advait Vartak (University of Maryland, College Park, Maryland, United States)Jiasheng Li (University of Maryland, College Park, Maryland, United States)Zining Zhang (University of Maryland, College Park, Maryland, United States)Huaishu Peng (University of Maryland, College Park, Maryland, United States)
PCB (printed circuit board) substrates are often single-use, leading to material waste in electronics making. We introduce PCB Renewal, a novel technique that "erases" and "reconfigures" PCB traces by selectively depositing conductive epoxy onto outdated areas, transforming isolated paths into conductive planes that support new traces. We present the PCB Renewal workflow, evaluate its electrical performance and mechanical durability, and model its sustainability impact, including material usage, cost, energy consumption, and time savings. We develop a software plug-in that guides epoxy deposition, generates updated PCB profiles, and calculates resource usage. To demonstrate PCB Renewal’s effectiveness and versatility, we repurpose a single PCB across four design iterations spanning three projects: a camera roller, a WiFi radio, and an ESPboy game console. We also show how an outsourced double-layer PCB can be reconfigured, transforming it from an LED watch to an interactive cat toy. The paper concludes with limitations and future directions.
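To convey the flavour of the sustainability model, a toy comparison between fabricating a fresh board and renewing an existing one can be written down directly; all figures below are hypothetical placeholders, not values from the paper:

```python
# Minimal illustrative sketch, not the PCB Renewal plug-in: a toy resource model
# comparing a new board against "renewing" an existing one with conductive epoxy.
# Every quantity here is a made-up placeholder for illustration.
new_board = {"cost_usd": 4.0, "energy_wh": 120.0, "time_min": 90.0}
renewal = {"cost_usd": 0.8, "energy_wh": 15.0, "time_min": 35.0}

def savings_pct(new_value, renew_value):
    return 100.0 * (new_value - renew_value) / new_value

for key in ("cost_usd", "energy_wh", "time_min"):
    print(f"{key}: {savings_pct(new_board[key], renewal[key]):.0f}% saved per iteration")
```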
3
NightLight: Passively Mapping Nighttime Sidewalk Light Data for Improved Pedestrian Routing
Joseph Breda (University of Washington, Seattle, Washington, United States)Daniel Campos Zamora (University of Washington, Seattle, Washington, United States)Shwetak Patel (University of Washington, Seattle, Washington, United States)Jon E. Froehlich (University of Washington, Seattle, Washington, United States)
Nighttime sidewalk illumination has a significant and unequal influence on where and whether pedestrians walk at night. Despite the importance of pedestrian lighting, there is currently no approach for measuring and communicating how humans experience nighttime sidewalk light levels at scale. We introduce NightLight, a new sensing approach that leverages the ubiquity of smartphones by re-appropriating the built-in light sensor---traditionally used to adapt screen brightness---to sense pedestrian nighttime lighting conditions. We validated our technique through in-lab and street-based evaluations characterizing performance across phone orientation, phone model, and varying light levels, demonstrating the ability to aggregate and map pedestrian-oriented light levels with unaltered smartphones. Additionally, to examine the impact of light level data on pedestrian route choice, we conducted a qualitative user study with 13 participants using a standard map vs. one with pedestrian lighting data from NightLight. Our findings demonstrate that people changed their routes in favor of well-lit routes during nighttime walking. Our work has implications for personalized navigation, pedestrian route choice, and passive urban sensing.
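The mapping side of such an approach reduces to aggregating many crowd-sensed light readings into per-location summaries that a routing layer can consume. A minimal sketch with an assumed grid size and made-up sample records (not NightLight's actual pipeline):

```python
# Minimal illustrative sketch (not NightLight's pipeline): aggregating crowd-sensed
# (latitude, longitude, lux) readings into per-cell nighttime light levels suitable
# for a map overlay. Grid size and sample records are assumptions for illustration.
from collections import defaultdict
from statistics import median

readings = [  # hypothetical crowd-sourced samples: (latitude, longitude, lux)
    (47.6610, -122.3130, 3.2),
    (47.6611, -122.3132, 2.8),
    (47.6652, -122.3101, 11.5),
]

def cell(lat, lon, size=0.0005):      # roughly 50 m grid cells at this latitude
    return (round(lat / size), round(lon / size))

by_cell = defaultdict(list)
for lat, lon, lux in readings:
    by_cell[cell(lat, lon)].append(lux)

light_map = {c: median(v) for c, v in by_cell.items()}  # robust per-cell light level
print(light_map)
```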
3
Shape-Kit: A Design Toolkit for Crafting On-Body Expressive Haptics
Ran Zhou (University of Chicago, Chicago, Illinois, United States)Jianru Ding (University of Chicago, Chicago, Illinois, United States)Chenfeng Gao (University of Chicago, Chicago, Illinois, United States)Wanli Qian (University of Chicago, Chicago, Illinois, United States)Benjamin Erickson (University of Colorado Boulder, Boulder, Colorado, United States)Madeline Balaam (KTH Royal Institute of Technology, Stockholm, Sweden)Daniel Leithinger (Cornell University, Ithaca, New York, United States)Ken Nakagaki (University of Chicago, Chicago, Illinois, United States)
Driven by the vision of everyday haptics, the HCI community is advocating for “design touch first” and investigating “how to touch well.” However, a gap remains between the exploratory nature of haptic design and technical reproducibility. We present Shape-Kit, a hybrid design toolkit embodying our “crafting haptics” metaphor, where hand touch is transduced into dynamic pin-based sensations that can be freely explored across the body. An ad-hoc tracking module captures and digitizes these patterns. Our study with 14 designers and artists demonstrates how Shape-Kit facilitates sensorial exploration for expressive haptic design. We analyze how designers collaboratively ideate, prototype, iterate, and compose touch experiences and show the subtlety and richness of touch that can be achieved through diverse crafting methods with Shape-Kit. Reflecting on the findings, our work contributes key insights into haptic toolkit design and touch design practices centered on the “crafting haptics” metaphor. We discuss in depth how Shape-Kit’s simplicity, though constrained, enables focused crafting for deeper exploration, while its collaborative nature fosters shared sense-making of touch experiences.
3
Virtual Worlds Beyond Sight: Designing and Evaluating an Audio-Haptic System for Non-Visual VR Exploration
Aayush Shrestha (Dalhousie University, Halifax, Nova Scotia, Canada)Joseph Malloch (Dalhousie University, Halifax, Nova Scotia, Canada)
Contemporary research in Virtual Reality (VR) for users who are visually impaired often employs navigation and interaction modalities that are either non-conventional, constrained by physical spaces, or both. We designed and examined a hapto-acoustic VR system that mitigates this by enabling non-visual exploration of large virtual environments using white cane simulation and walk-in-place locomotion. The system features a complex urban cityscape incorporating a physical cane prototype coupled with a virtual cane for rendering surface textures, and an omnidirectional slide mill for navigation. In addition, spatialized audio is rendered based on the propagation of sound through the geometry around the user. Twenty sighted participants, blindfolded to simulate total blindness, evaluated the system through three formative tasks. 19 of 20 participants successfully completed all the tasks while effectively navigating through the environment. This work highlights the potential for accessible non-visual VR experiences requiring minimal training and limited prior VR exposure.
3
LLM Powered Text Entry Decoding and Flexible Typing on Smartphones
Yan Ma (Stony Brook University, Stony Brook, New York, United States)Dan Zhang (Stony Brook University, New York city, New York, United States)IV Ramakrishnan (Stony Brook University, Stony Brook, New York, United States)Xiaojun Bi (Stony Brook University, Stony Brook, New York, United States)
Large language models (LLMs) have shown exceptional performance in various language-related tasks. However, their application in keyboard decoding, which involves converting input signals (e.g., taps and gestures) into text, remains underexplored. This paper presents a fine-tuned FLAN-T5 model for decoding. It achieves 93.1% top-1 accuracy on user-drawn gestures, outperforming the widely adopted SHARK2 decoder, and 95.4% on real-world tap typing data. In particular, our decoder supports Flexible Typing, allowing users to enter a word with taps, gestures, multi-stroke gestures, and tap-gesture combinations. User study results show that Flexible Typing is beneficial and well-received by participants: 35.9% of words were entered using word gestures, 29.0% with taps, 6.1% with multi-stroke gestures, and the remaining 29.0% using tap-gesture combinations. Our investigation suggests that the LLM-based decoder improves decoding accuracy over existing word gesture decoders while enabling the Flexible Typing method, which enhances the overall typing experience and accommodates diverse user preferences.
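Conceptually, the decoder treats noisy keyboard input as a sequence-to-sequence problem: taps or gesture sample points are serialized (for example, as nearest keys) and the model generates the intended word. A minimal sketch with an off-the-shelf FLAN-T5 checkpoint and an assumed prompt format; unlike the paper's fine-tuned decoder, this untuned model is not expected to decode accurately:

```python
# Minimal illustrative sketch, not the paper's fine-tuned decoder: querying a
# FLAN-T5 checkpoint with keyboard input serialized as a nearest-key sequence.
# The prompt format is an assumption; the paper fine-tunes on real tap/gesture data.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Hypothetical serialization: the keys nearest to each tap / gesture sample point.
prompt = "Decode the intended word from noisy keyboard keys: h e k l o"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```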
3
FingerGlass: Enhancing Smart Glasses Interaction via Fingerprint Sensing
Zhanwei Xu (Tsinghua University, Beijing, China)Haoxiang Pei (Tsinghua University, Beijing, China)Jianjiang Feng (Tsinghua University, Beijing, China)Jie Zhou (Department of Automation, BNRist, Tsinghua University, Beijing, China)
Smart glasses hold immense potential, but existing input methods often hinder their seamless integration into everyday life. Touchpads integrated into smart glasses suffer from limited input space and precision; voice commands raise privacy concerns and are contextually constrained; and vision-based or IMU-based gesture recognition faces challenges in computational cost or privacy. We present FingerGlass, an interaction technique for smart glasses that leverages side-mounted fingerprint sensors to capture fingerprint images. With a combined CNN and LSTM network, FingerGlass identifies finger identity and recognizes four types of gestures (nine in total): sliding, rolling, rotating, and tapping. These gestures, coupled with finger identification, are mapped to common smart glasses commands, enabling comprehensive and fluid text entry and application control. A user study reveals that FingerGlass represents a promising step towards fresh, discreet, ergonomic, and efficient input for smart glasses, potentially contributing to their wider adoption and integration into daily life.
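A combined CNN and LSTM of the kind described above typically encodes each fingerprint frame with a small convolutional network and lets a recurrent layer aggregate the frame sequence into a gesture (and finger) prediction. A minimal sketch with assumed input sizes and class counts (not the FingerGlass architecture):

```python
# Minimal illustrative sketch (not FingerGlass's network): a small CNN encodes each
# fingerprint frame; an LSTM aggregates the frame sequence into a gesture class.
# Input size, channel counts, and the class count are assumptions for illustration.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):            # x: (batch, 1, H, W)
        return self.net(x)

class GestureClassifier(nn.Module):
    def __init__(self, num_classes=9, feat_dim=64, hidden=128):
        super().__init__()
        self.encoder = FrameEncoder(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames):       # frames: (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])    # logits over gesture classes

logits = GestureClassifier()(torch.randn(2, 20, 1, 64, 64))
print(logits.shape)                  # torch.Size([2, 9])
```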
3
"It's about Research. It's Not about Language": Understanding and Designing for Mitigating Non-Native English-Speaking Presenters' Challenges in Live Q&A Sessions at Academic Conferences
Lingyuan Li (The University of Texas at Austin, Austin, Texas, United States)Ge Wang (Stanford University, Stanford, California, United States)Guo Freeman (Clemson University, Clemson, South Carolina, United States)
Live Q&A sessions at English-based, international academic conferences usually pose significant challenges for non-native English-speaking presenters, as they demand real-time comprehension and response in one's non-native language under stress. While language-supportive tools (e.g., real-time translation, transcription) can help alleviate such challenges, their adoption remains limited, even at HCI academic conferences that focus on how technology can better serve human needs. Through in-depth interviews with 15 non-native English-speaking academics, we identify their concerns and expectations regarding technological language support for HCI live Q&As. Our research provides critical design implications for future language support tools by highlighting the importance of culturally-aware solutions that offer accurate and seamless language experiences while fostering personal growth and building confidence. We also call for community-wide efforts in HCI to embrace more inclusive practices that actively support non-native English speakers, which can empower all scholars to equally engage in the HCI academic discourse regardless of their native languages.
3
What Lies Beneath? Exploring the Impact of Underlying AI Model Updates in AI-Infused Systems
Vikram Mohanty (Bosch Research North America, Sunnyvale, California, United States)Jude Lim (Independent Researcher, Arlington, Virginia, United States)Kurt Luther (Virginia Tech, Arlington, Virginia, United States)
AI models are constantly evolving, with new versions released frequently. Human-AI interaction guidelines encourage notifying users about changes in model capabilities, ideally supported by thorough benchmarking. However, as AI systems integrate into domain-specific workflows, exhaustive benchmarking can become impractical, often resulting in silent or minimally communicated updates. This raises critical questions: Can users notice these updates? What cues do they rely on to distinguish between models? How do such changes affect their behavior and task performance? We address these questions through two studies in the context of facial recognition for historical photo identification: an online experiment examining users’ ability to detect model updates, followed by a diary study exploring perceptions in a real-world deployment. Our findings highlight challenges in noticing AI model updates, their impact on downstream user behavior and performance, and how they lead users to develop divergent folk theories. Drawing on these insights, we discuss strategies for effectively communicating model updates in AI-infused systems.
3
Co-design & Evaluation of Visual Interventions for Head Posture Correction in Virtual Reality Games
Minh Duc Dang (Simon Fraser University, Burnaby, British Columbia, Canada)Duy Phuoc Luong (Simon Fraser University, Burnaby, British Columbia, Canada)Christopher Napier (Simon Fraser University, Burnaby, British Columbia, Canada)Lawrence H. Kim (Simon Fraser University, Burnaby, British Columbia, Canada)
While virtual reality (VR) games offer immersive experiences, prolonged improper head posture during VR gaming sessions can cause neck discomfort and injuries. To address this issue, we prototyped a framework to detect instances of improper head posture and apply various visual interventions to correct them. After assessing the prototype's usability in a co-design workshop with participants experienced in VR design and kinesiology, we refined the interventions in two main directions --- using explicit visual indicators or employing implicit background changes. The refined interventions were subsequently tested in a controlled experiment involving a target selection task. The study results demonstrate that the interventions effectively helped participants maintain better head posture during VR gameplay compared to the control condition.
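The detection half of such a framework can be as simple as thresholding the head pitch reported by the HMD and intervening only when poor posture persists. A minimal sketch with assumed threshold and dwell-time values (not the paper's implementation):

```python
# Minimal illustrative sketch, not the paper's framework: flag improper head posture
# from HMD pitch, triggering an intervention only after a sustained duration.
# The 20-degree threshold and 3-second dwell time are assumed values.
import time

PITCH_LIMIT_DEG = 20.0   # forward head tilt considered "improper" (assumption)
DWELL_SECONDS = 3.0      # how long poor posture must persist before intervening

class PostureMonitor:
    def __init__(self):
        self._bad_since = None

    def update(self, pitch_deg, now=None):
        """pitch_deg: downward head pitch from the HMD; returns True to trigger a
        visual intervention (explicit indicator or implicit background change)."""
        now = time.monotonic() if now is None else now
        if pitch_deg > PITCH_LIMIT_DEG:
            if self._bad_since is None:
                self._bad_since = now
            return now - self._bad_since >= DWELL_SECONDS
        self._bad_since = None
        return False

monitor = PostureMonitor()
print(monitor.update(25.0, now=0.0), monitor.update(25.0, now=4.0))  # False True
```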
3
Voxel Invention Kit: Reconfigurable Building Blocks for Prototyping Interactive Electronic Structures
Miana Smith (MIT, Cambridge, Massachusetts, United States)Jack Forman (MIT, Cambridge, Massachusetts, United States)Amira Abdel Rahman (MIT, Cambridge, Massachusetts, United States)Sophia Wang (MIT, Cambridge, Massachusetts, United States)Neil Gershenfeld (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States)
Prototyping large, electronically integrated structures is challenging and often results in unwieldy wiring, weak mechanical properties, expensive iterations, or limited reusability. While many electronics prototyping kits exist for small-scale objects, relatively few methods exist to freely iterate large and sturdy structures with integrated electronics. To address this gap, we present the Voxel Invention Kit (VIK), which uses reconfigurable blocks that assemble into high-stiffness, lightweight structures with integrated electronics. We do this by creating cubic blocks composed of PCBs that carry electrical routing and components and can be (re)configured with simple tools into a variety of structures. To ensure structural stability without expertise, we created a tool to configure structures and simulate applied loads, which we validated with mechanical testing data. Using VIK, we produced devices reconfigured from a shared set of voxels: multiple iterations of a customizable AV lounge seat, a dance floor game, and a force-sensing bridge.