List of Notable Papers

Showing up to the top 30 papers in each category.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

13
Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
Pat Pataranutaporn (Massachusetts Institute of Technology, Boston, Massachusetts, United States), Chayapatr Archiwaranguprok (University of the Thai Chamber of Commerce, Bangkok, Thailand), Samantha W. T. Chan (MIT Media Lab, Cambridge, Massachusetts, United States), Elizabeth Loftus (UC Irvine, Irvine, California, United States), Pattie Maes (MIT Media Lab, Cambridge, Massachusetts, United States)
AI is increasingly used to enhance images and videos, both intentionally and unintentionally. As AI editing tools become more integrated into smartphones, users can modify or animate photos into realistic videos. This study examines the impact of AI-altered visuals on false memories—recollections of events that didn’t occur or deviate from reality. In a pre-registered study, 200 participants were divided into four conditions of 50 each. Participants viewed original images, completed a filler task, then saw stimuli corresponding to their assigned condition: unedited images, AI-edited images, AI-generated videos, or AI-generated videos of AI-edited images. AI-edited visuals significantly increased false recollections, with AI-generated videos of AI-edited images having the strongest effect (2.05x compared to control). Confidence in false memories was also highest for this condition (1.19x compared to control). We discuss potential applications in HCI, such as therapeutic memory reframing, and challenges in ethical, legal, political, and societal domains.
10
Sprayable Sound: Exploring the Experiential and Design Potential of Physically Spraying Sound Interaction
Jongik Jeon (KAIST, Daejeon, Korea, Republic of), Chang Hee Lee (KAIST (Korea Advanced Institute of Science and Technology), Daejeon, Korea, Republic of)
Perfume and fragrance have captivated people for centuries across different cultures. Inspired by the ephemeral nature of sprayable olfactory interactions and experiences, we explore the potential of applying a similar interaction principle to the auditory modality. In this paper, we present SoundMist, a sonic interaction method that enables users to generate ephemeral auditory presences by physically dispersing a liquid into the air, much like the fading phenomenon of fragrance. We conducted a study to understand the experiential factors inherent in sprayable sound interaction and held an ideation workshop to identify potential design spaces or opportunities that this interaction could shape. Our findings, derived from thematic analysis, suggest that physically sprayable sound interaction can induce experiences related to four key factors—materiality of sound produced by dispersed liquid particles, different sounds entangled with each liquid, illusive perception of temporally floating sound, and enjoyment derived from blending different sounds—and can be applied to artistic practices, safety indications, multisensory approaches, and emotional interfaces.
9
EchoBreath: Continuous Respiratory Behavior Recognition in the Wild via Acoustic Sensing on Smart Glasses
Kaiyi Guo (Shanghai Jiao Tong University, Shanghai, China), Qian Zhang (Shanghai Jiao Tong University, Shanghai, China), Dong Wang (Shanghai Jiao Tong University, Shanghai, China)
Monitoring the occurrence count of abnormal respiratory symptoms helps provide critical support for respiratory health. While this is necessary, there is still a lack of an unobtrusive and reliable method that can be used effectively in real-world settings. In this paper, we present EchoBreath, a combined passive and active acoustic sensing system for monitoring abnormal respiratory symptoms. EchoBreath novelly uses the speaker and microphone under the frame of the glasses to emit ultrasonic waves and capture both passive sounds and echo profiles, which can effectively distinguish between subject-aware behaviors and background noise. Furthermore, a lightweight neural network with a 'Null' class and open-set filtering mechanisms substantially improves real-world applicability by filtering out unrelated activities. Our experiments, involving 25 participants, demonstrate that EchoBreath can recognize 6 typical respiratory symptoms in a laboratory setting with an accuracy of 93.1%. Additionally, an in-the-semi-wild study with 10 participants further validates that EchoBreath can continuously monitor respiratory abnormalities under real-world conditions. We believe that EchoBreath can serve as an unobtrusive and reliable way to monitor abnormal respiratory symptoms.
9
"It Brought the Model to Life": Exploring the Embodiment of Multimodal I3Ms for People who are Blind or have Low Vision
Samuel Reinders (Monash University, Melbourne, Australia), Matthew Butler (Monash University, Melbourne, Australia), Kim Marriott (Monash University, Melbourne, Australia)
3D-printed models are increasingly used to provide people who are blind or have low vision (BLV) with access to maps, educational materials, and museum exhibits. Recent research has explored interactive 3D-printed models (I3Ms) that integrate touch gestures, conversational dialogue, and haptic vibratory feedback to create more engaging interfaces. Prior research with sighted people has found that imbuing machines with human-like behaviours, i.e., embodying them, can make them appear more lifelike, increasing social perception and presence. Such embodiment can increase engagement and trust. This work presents the first exploration into the design of embodied I3Ms and their impact on BLV engagement and trust. In a controlled study with 12 BLV participants, we found that I3Ms using specific embodiment design factors, such as haptic vibratory and embodied personified voices, led to an increased sense of liveliness and embodiment, as well as engagement, but had mixed impact on trust.
8
Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation
Jessica He (IBM Research, Yorktown Heights, New York, United States), Stephanie Houde (IBM Research, Cambridge, Massachusetts, United States), Justin D. Weisz (IBM Research AI, Yorktown Heights, New York, United States)
AI systems powered by large language models can act as capable assistants for writing and editing. In these tasks, the AI system acts as a co-creative partner, making novel contributions to an artifact-under-creation alongside its human partner(s). One question that arises in these scenarios is the extent to which AI should be credited for its contributions. We examined knowledge workers' views of attribution through a survey study (N=155) and found that they assigned different levels of credit across different contribution types, amounts, and initiative. Compared to a human partner, we observed a consistent pattern in which AI was assigned less credit for equivalent contributions. Participants felt that disclosing AI involvement was important and used a variety of criteria to make attribution judgments, including the quality of contributions, personal values, and technology considerations. Our results motivate and inform new approaches for crediting AI contributions to co-created work.
8
What Comes After Noticing?: Reflections on Noticing Solar Energy and What Came Next
Angella Mackey (Amsterdam University of Applied Sciences, Amsterdam, Netherlands), David NG McCallum (Rotterdam University of Applied Science, Rotterdam, Netherlands), Oscar Tomico (Eindhoven University of Technology, Eindhoven, Netherlands), Martijn de Waal (Amsterdam University of Applied Sciences, Amsterdam, Netherlands)
Many design researchers have been exploring what it means to take a more-than-human design approach in their practice. In particular, the technique of “noticing” has been explored as a way of intentionally opening a designer’s awareness to more-than-human worlds. In this paper we present autoethnographic accounts of our own efforts to notice solar energy. Through two studies we reflect on the transformative potential of noticing the more-than-human, and the difficulties in trying to sustain this change in oneself and one’s practice. We propose that noticing can lead to activating exiled capacities within the noticer, relational abilities that lie dormant in each of us. We also propose that emphasising sense-fullness in and through design can be helpful in the face of broader psychological or societal boundaries that block paths towards more relational ways of living with non-humans.
8
Customizing Emotional Support: How Do Individuals Construct and Interact With LLM-Powered Chatbots
Xi Zheng (City University of Hong Kong, Hong Kong, China), Zhuoyang Li (City University of Hong Kong, Hong Kong, China), Xinning Gui (The Pennsylvania State University, University Park, Pennsylvania, United States), Yuhan Luo (City University of Hong Kong, Hong Kong, China)
Personalized support is essential to fulfill individuals’ emotional needs and sustain their mental well-being. Large language models (LLMs), with great customization flexibility, hold promises to enable individuals to create their own emotional support agents. In this work, we developed ChatLab, where users could construct LLM-powered chatbots with additional interaction features including voices and avatars. Using a Research through Design approach, we conducted a week-long field study followed by interviews and design activities (N = 22), which uncovered how participants created diverse chatbot personas for emotional reliance, confronting stressors, connecting to intellectual discourse, reflecting mirrored selves, etc. We found that participants actively enriched the personas they constructed, shaping the dynamics between themselves and the chatbot to foster open and honest conversations. They also suggested other customizable features, such as integrating online activities and adjustable memory settings. Based on these findings, we discuss opportunities for enhancing personalized emotional support through emerging AI technologies.
7
Beyond Vacuuming: How Can We Exploit Domestic Robots’ Idle Time?
Yoshiaki Shiokawa (University of Bath, Bath, United Kingdom), Winnie Chen (University of Bath, Bath, United Kingdom), Aditya Shekhar Nittala (University of Calgary, Calgary, Alberta, Canada), Jason Alexander (University of Bath, Bath, United Kingdom), Adwait Sharma (University of Bath, Bath, United Kingdom)
We are increasingly adopting domestic robots (e.g., Roomba) that provide relief from mundane household tasks. However, these robots usually spend only a small fraction of their time executing their specific task and remain idle for long periods. They typically possess advanced mobility and sensing capabilities, and therefore have significant potential applications beyond their designed use. Our work explores this untapped potential of domestic robots in ubiquitous computing, focusing on how they can improve and support modern lifestyles. We conducted two studies: an online survey (n=50) to understand current usage patterns of these robots within homes and an exploratory study (n=12) with HCI and HRI experts. Our thematic analysis revealed 12 key dimensions for developing interactions with domestic robots and outlined over 100 use cases, illustrating how these robots can offer proactive assistance and provide privacy. Finally, we implemented a proof-of-concept prototype to demonstrate the feasibility of reappropriating domestic robots for diverse ubiquitous computing applications.
7
Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives
Meredith Ringel Morris (Google DeepMind, Seattle, Washington, United States), Jed R. Brubaker (University of Colorado Boulder, Boulder, Colorado, United States)
As AI systems quickly improve in both breadth and depth of performance, they lend themselves to creating increasingly powerful and realistic agents, including the possibility of agents modeled on specific people. We anticipate that within our lifetimes it may become common practice for people to create custom AI agents to interact with loved ones and/or the broader world after death; indeed, the past year has seen a boom in startups purporting to offer such services. We call these generative ghosts since such agents will be capable of generating novel content rather than merely parroting content produced by their creator while living. In this paper, we reflect on the history of technologies for AI afterlives, including current early attempts by individual enthusiasts and startup companies to create generative ghosts. We then introduce a novel design space detailing potential implementations of generative ghosts. We use this analytic framework to ground a discussion of the practical and ethical implications of various approaches to designing generative ghosts, including potential positive and negative impacts on individuals and society. Based on these considerations, we lay out a research agenda for the AI and HCI research communities to better understand the risk/benefit landscape of this novel technology to ultimately empower people who wish to create and interact with AI afterlives to do so in a beneficial manner.
7
SpeechCompass: Enhancing Mobile Captioning with Diarization and Directional Guidance via Multi-Microphone Localization
Artem Dementyev (Google Inc., Mountain View, California, United States), Dimitri Kanevsky (Google, Mountain View, California, United States), Samuel Yang (Google, Mountain View, California, United States), Mathieu Parvaix (Google Research, Mountain View, California, United States), Chiong Lai (Google, Mountain View, California, United States), Alex Olwal (Google Inc., Mountain View, California, United States)
Speech-to-text capabilities on mobile devices have proven helpful for hearing and speech accessibility, language translation, note-taking, and meeting transcripts. However, our foundational large-scale survey (n=263) shows that the inability to distinguish and indicate speaker direction makes them challenging in group conversations. SpeechCompass addresses this limitation through real-time, multi-microphone speech localization, where the direction of speech allows visual separation and guidance (e.g., arrows) in the user interface. We introduce efficient real-time audio localization algorithms and custom sound perception hardware, running on a low-power microcontroller with four integrated microphones, which we characterize in technical evaluations. Informed by a large-scale survey (n=494), we conducted an in-person study of group conversations with eight frequent users of mobile speech-to-text, who provided feedback on five visualization styles. The value of diarization and visualizing localization was consistent across participants, with everyone agreeing on the value and potential of directional guidance for group conversations.
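The abstract does not disclose SpeechCompass's localization algorithm, so purely as general background, here is a minimal sketch of a common baseline for estimating speech direction from a microphone pair: GCC-PHAT time-difference-of-arrival converted to an angle. Sample rate, microphone spacing, and the simulated delay are assumed values.

```python
# Illustrative baseline only (not SpeechCompass's published algorithm):
# estimate the time difference of arrival (TDOA) between two microphones
# with GCC-PHAT, then convert it to an angle of arrival.
import numpy as np

def gcc_phat(sig, ref, fs):
    """Return the estimated TDOA (seconds) of `sig` relative to `ref`."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                      # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs = 16_000               # Hz (assumed)
mic_distance = 0.05       # 5 cm between microphones (assumed)
speed_of_sound = 343.0    # m/s

source = np.random.default_rng(0).standard_normal(fs)   # 1 s of noise
mic_a = source
mic_b = np.roll(source, 2)                               # arrives 2 samples later

tdoa = gcc_phat(mic_b, mic_a, fs)
angle = np.degrees(np.arcsin(np.clip(tdoa * speed_of_sound / mic_distance, -1.0, 1.0)))
print(f"estimated arrival angle: {angle:.1f} degrees off the microphone axis")
```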
6
Toward Affective Empathy via Personalized Analogy Generation: A Case Study on Microaggression
Hyojin Ju (POSTECH, Pohang, Korea, Republic of), Jungeun Lee (POSTECH, Pohang, Korea, Republic of), Seungwon Yang (POSTECH, Pohang-si, Korea, Republic of), Jungseul Ok (POSTECH, Pohang, Korea, Republic of), Inseok Hwang (POSTECH, Pohang, Korea, Republic of)
The importance of empathy cannot be overstated in modern societies where people of diverse backgrounds increasingly interact together. The HCI community has strived to foster affective empathy through immersive technologies. Many previous techniques are built on the premise that presenting the same experience as-is may help evoke the same emotion; however, this approach faces limitations when emotional responses differ widely across individuals. In this paper, we present a novel concept of generating a personalized experience based on a large language model (LLM) to facilitate affective empathy between individuals despite their differences. As a case study to showcase its effectiveness, we developed EmoSync, an LLM-based agent that generates personalized analogical microaggression situations, allowing users to personally resonate with a specific microaggression situation experienced by another person. EmoSync was designed and evaluated through a three-phase user study with 100+ participants. We comprehensively discuss implications, limitations, and possible applications.
6
Attracting Fingers with Waves: Potential Fields Using Active Lateral Forces Enhance Touch Interactions
Zhaochong Cai (Delft University of Technology, Delft, Netherlands), David Abbink (Delft University of Technology, Delft, Netherlands), Michael Wiertlewski (Delft University of Technology, Delft, Netherlands)
Touchscreens and touchpads offer intuitive interfaces but provide limited tactile feedback, usually just mechanical vibrations. These devices lack continuous feedback to guide users’ fingers toward specific directions. Recent innovations in surface haptic devices, however, leverage ultrasonic traveling waves to create active lateral forces on a bare fingertip. This paper investigates the effects and design possibilities of active force feedback in touch interactions by rendering artificial potential fields on a touchpad. Three user studies revealed that: (1) users perceived attractive and repulsive fields as bumps and holes with similar detection thresholds; (2) step-wise force fields improved targeting by 22.9% compared to friction-only methods; and (3) active force fields effectively communicated directional cues to the users. Several applications were tested, with user feedback favoring this approach for its enhanced tactile experience, added enjoyment, realism, and ease of use.
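As an illustration of the underlying idea (not the paper's implementation), the sketch below renders an attractive potential field as a lateral force pulling the fingertip toward a target; the field shape, gain, and force limit are assumed values.

```python
# Minimal sketch (assumed parameters, not the paper's implementation): an
# attractive potential field U(p) = 0.5*k*||p - target||^2 rendered as a
# lateral force F = -grad(U) on the fingertip, clipped to a maximum force.
import numpy as np

def lateral_force(finger_pos, target_pos, k=0.8, max_force=0.3):
    """Return the 2D lateral force (N) pulling the finger toward the target."""
    error = np.asarray(target_pos, float) - np.asarray(finger_pos, float)
    force = k * error                    # F = -grad(0.5*k*||p - target||^2)
    magnitude = np.linalg.norm(force)
    if magnitude > max_force:            # respect the actuator's force limit
        force *= max_force / magnitude
    return force

# Example: finger at (0.02, 0.01) m, virtual "hole" centered at (0.05, 0.01) m.
print(lateral_force((0.02, 0.01), (0.05, 0.01)))   # pulls in the +x direction
```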
6
"It Brought Me Joy": Opportunities for Spatial Browsing in Desktop Screen Readers
Arnavi Chheda-Kothary (University of Washington, Seattle, Washington, United States), Ather Sharif (University of Washington, Seattle, Washington, United States), David Angel Rios (Columbia University, New York, New York, United States), Brian A. Smith (Columbia University, New York, New York, United States)
Blind or low-vision (BLV) screen-reader users have a significantly limited experience interacting with desktop websites compared to non-BLV, i.e., sighted users. This digital divide is exacerbated by the inability to browse the web spatially—an affordance that leverages spatial reasoning, which sighted users often rely on. In this work, we investigate the value of and opportunities for BLV screen-reader users to browse websites spatially (e.g., understanding page layouts). We additionally explore at-scale website layout understanding as a feature of desktop screen readers. We created a technology probe, WebNExt, to facilitate our investigation. Specifically, we conducted a lab study with eight participants and a five-day field study with four participants to evaluate spatial browsing using WebNExt. Our findings show that participants found spatial browsing intuitive and fulfilling, strengthening their connection to the design of web pages. Furthermore, participants envisioned spatial browsing as a step toward reducing the digital divide.
6
An Approach to Elicit Human-Understandable Robot Expressions to Support Human-Robot Interaction
Jan Leusmann (LMU Munich, Munich, Germany), Steeven Villa (LMU Munich, Munich, Germany), Thomas Liang (University of Illinois Urbana-Champaign, Champaign, Illinois, United States), Chao Wang (Honda Research Institute Europe, Offenbach/Main, Germany), Albrecht Schmidt (LMU Munich, Munich, Germany), Sven Mayer (LMU Munich, Munich, Germany)
Understanding the intentions of robots is essential for natural and seamless human-robot collaboration. Ensuring that robots have means for non-verbal communication is a basis for intuitive and implicit interaction. For this, we describe an approach to elicit and design human-understandable robot expressions. We outline the approach in the context of non-humanoid robots. We paired human mimicking and enactment with research from gesture elicitation in two phases: first, to elicit expressions, and second, to ensure they are understandable. We present an example application through two studies (N=16 & N=260) of our approach to elicit expressions for a simple 6-DoF robotic arm. We show that the approach enabled us to design robot expressions that signal curiosity and interest in getting attention. Our main contribution is an approach to generate and validate understandable expressions for robots, enabling more natural human-robot interaction.
6
Xstrings: 3D Printing Cable-Driven Mechanism for Actuation, Deformation, and Manipulation
Jiaji Li (MIT, Cambridge, Massachusetts, United States), Shuyue Feng (Zhejiang University, Hangzhou, China), Maxine Perroni-Scharf (MIT, Cambridge, Massachusetts, United States), Yujia Liu (Tsinghua University, Beijing, China), Emily Guan (Pratt Institute, Brooklyn, New York, United States), Guanyun Wang (Zhejiang University, Hangzhou, China), Stefanie Mueller (MIT CSAIL, Cambridge, Massachusetts, United States)
In this paper, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we present a design tool that allows users to embed cable-driven mechanisms into object geometries based on their desired interactions by automatically placing joints and cables inside the object. To assess our system, we investigate the effect of printing parameters on the strength of Xstrings objects and the extent to which the interactions are repeatable without cable breakage. We demonstrate the application potential of Xstrings through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
6
Since U Been Gone: Augmenting Context-Aware Transcriptions for Re-Engaging in Immersive VR Meetings
Geonsun Lee (University of Maryland, College Park, Maryland, United States), Yue Yang (Stanford University, Stanford, California, United States), Jennifer Healey (Adobe Research, San Jose, California, United States), Dinesh Manocha (University of Maryland, College Park, Maryland, United States)
Maintaining engagement in immersive meetings is challenging, particularly when users must catch up on missed content after disruptions. While transcription interfaces can help, table-fixed panels have the potential to distract users from the group, diminishing social presence, while avatar-fixed captions fail to provide past context. We present EngageSync, a context-aware avatar-fixed transcription interface that adapts based on user engagement, offering live transcriptions and LLM-generated summaries to enhance catching up while preserving social presence. We implemented a live VR meeting setup for a 12-participant formative study and elicited design considerations. In two user studies with small (3 avatars) and mid-sized (7 avatars) groups, EngageSync significantly improved social presence (p < .05) and time spent gazing at others in the group instead of the interface over table-fixed panels. Also, it reduced re-engagement time and increased information recall (p < .05) over avatar-fixed interfaces, with stronger effects in mid-sized groups (p < .01).
6
Trusting Tracking: Perceptions of Non-Verbal Communication Tracking in Videoconferencing
Carlota Vazquez Gonzalez (King's College London, London, United Kingdom), Timothy Neate (King's College London, London, United Kingdom), Rita Borgo (King's College London, London, England, United Kingdom)
Videoconferencing is integral to modern work and living. Recently, technologists have sought to leverage data captured -- e.g. from cameras and microphones -- to augment communication. This might mean capturing communication information about verbal (e.g. speech, chat messages), or non-verbal exchanges (e.g. body language, gestures, tone of voice) and using this to mediate -- and potentially improve -- communication. However, such tracking has implications for user experience and raises wider concerns (e.g. privacy). To design tools which account for user needs and preferences, this study investigates perspectives on communication tracking through a global survey and interviews, exploring how daily behaviours and the impact of specific features influence user perspectives. We examine user preferences on non-verbal communication tracking, preferred methods of how this information is conveyed and to whom this should be communicated. Our findings aim to guide the development of non-verbal communication tools which augment videoconferencing that prioritise user needs.
5
Sonic Delights: Exploring the Design of Food as An Auditory-Gustatory Interface
Jialin Deng (Department of Human-Centred Computing, Monash University, Melbourne, Victoria, Australia), Yinyi Li (Monash University, Melbourne, Victoria, Australia), Hongyue Wang (Monash University, Melbourne, Victoria, Australia), Ziqi Fang (Imperial College London, London, United Kingdom), Florian ‘Floyd’ Mueller (Monash University, Melbourne, VIC, Australia)
While interest in blending sound with culinary experiences has grown in Human-Food Interaction (HFI), the significance of food’s material properties in shaping sound-related interactions has largely been overlooked. This paper explores the opportunity to enrich the HFI experience by treating food not merely as passive nourishment but as an integral material in computational architecture with input/output capabilities. We introduce “Sonic Delights,” where food is a comestible auditory-gustatory interface to enable users to interact with and consume digital sound. This concept redefines food as a conduit for interactive auditory engagement, shedding light on the untapped multisensory possibilities of merging taste with digital sound. An associated study allowed us to articulate design insights for forthcoming HFI endeavors that seek to weave food into multisensory design, aiming to further the integration of digital interactivity with the culinary arts.
5
User-defined Co-speech Gesture Design with Swarm Robots
Minh Duc Dang (Simon Fraser University, Burnaby, British Columbia, Canada), Samira Pulatova (Simon Fraser University, Burnaby, British Columbia, Canada), Lawrence H. Kim (Simon Fraser University, Burnaby, British Columbia, Canada)
Non-verbal signals, including co-speech gestures, play a vital role in human communication by conveying nuanced meanings beyond verbal discourse. While researchers have explored co-speech gestures in human-like conversational agents, limited attention has been given to non-humanoid alternatives. In this paper, we propose using swarm robotic systems as conversational agents and introduce a foundational set of swarm-based co-speech gestures, elicited from non-technical users and validated through an online study. This work outlines the key software and hardware requirements to advance research in co-speech gesture generation with swarm robots, contributing to the future development of social robotics and conversational agents.
5
BIT: Battery-free, IC-less and Wireless Smart Textile Interface and Sensing System
Weiye Xu (Tsinghua University, Beijing, China), Tony Li (Stony Brook University, Stony Brook, New York, United States), Yuntao Wang (Tsinghua University, Beijing, China), Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada), Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
The development of smart textile interfaces is hindered by the inclusion of rigid hardware components and batteries within the fabric, which pose challenges in terms of manufacturability, usability, and environmental concerns related to electronic waste. To mitigate these issues, we propose a smart textile interface and its wireless sensing system to eliminate the need for ICs, batteries, and connectors embedded into textiles. Our technique is based on the integration of multi-resonant circuits in smart textile interfaces, and it utilizes near-field electromagnetic coupling between two coils to facilitate wireless power transfer and data acquisition from the smart textile interface. A key aspect of our system is the development of a mathematical model that accurately represents the equivalent circuit of the sensing system. Using this model, we developed a novel algorithm to accurately estimate sensor signals based on changes in system impedance. Through simulation-based experiments and a user study, we demonstrate that our technique effectively supports multiple textile sensors of various types.
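The paper's equivalent-circuit model is not given in the abstract; the relations below are only the standard textbook background behind this kind of chip-free, inductively coupled sensing, stated as general context rather than as the authors' equations.

```latex
% General background relations (not the paper's specific model). Each textile
% sensor can be viewed as an LC tank resonating at
\[
  f_0 = \frac{1}{2\pi\sqrt{LC}} ,
\]
% and a reader coil coupled to it through mutual inductance $M = k\sqrt{L_1 L_2}$
% sees the tank as an impedance reflected into the primary side:
\[
  Z_{\text{reflected}}(\omega) = \frac{(\omega M)^{2}}{Z_{\text{tank}}(\omega)} .
\]
% A sensor that changes its capacitance (e.g., under touch or pressure) shifts
% $f_0$, which appears as a measurable change in reader-side impedance near that
% frequency; this is the kind of signal an impedance-based estimation works from.
```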
5
ViFeed: Promoting Slow Eating and Food Awareness through Strategic Video Manipulation during Screen-Based Dining
Yang Chen (National University of Singapore, Singapore, Singapore), Felicia Fang-Yi Tan (National University of Singapore, Singapore, Singapore), Zhuoyu Wang (National University of Singapore, Singapore, Singapore), Xing Liu (Hangzhou Holographic Intelligence Institute, Hangzhou, China), Jiayi Zhang (National University of Singapore, Singapore, Singapore), Yun Huang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States), Shengdong Zhao (City University of Hong Kong, Hong Kong, China), Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)
Given the widespread presence of screens during meals, we challenge the notion that digital engagement is inherently incompatible with mindfulness. We demonstrate how the strategic design of digital content can enhance two core aspects of mindful eating: slow eating and food awareness. Our research unfolded in three sequential studies: (1) Zoom Eating Study: Contrary to the assumption that video-watching leads to distraction and overeating, this study revealed that subtle video speed manipulations can promote slower eating (by 15.31%) and controlled food intake (by 9.65%) while maintaining meal satiation and satisfaction. (2) Co-design workshop: Informed the development of ViFeed, a video playback system strategically incorporating subtle speed adjustments and glanceable visual cues. (3) Field Study: A week-long deployment of ViFeed in daily eating demonstrated its efficacy in fostering food awareness, food appreciation, and sustained engagement. By bridging the gap between ideal mindfulness practices and screen-based behaviors, this work offers insights for designing digital-wellbeing interventions that align with, rather than against, existing habits.
5
ProtoPCB: Reclaiming Printed Circuit Board E-waste as Prototyping Material
Jasmine Lu (University of Chicago, Chicago, Illinois, United States), Sai Rishitha Boddu (University of Chicago, Chicago, Illinois, United States), Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose an interactive tool that enables reusing printed circuit boards (PCB) as prototyping materials to implement new circuits — this extends the utility of PCBs rather than discards them as e-waste. To enable this, our tool takes a user’s desired circuit schematic and analyzes its components and connections to find methods of creating the user’s circuit on discarded PCBs (e.g., e-waste, old prototypes). In our technical evaluation, we utilized our tool across a diverse set of PCBs and input circuits to characterize how often circuits could be implemented on a different board, implemented with minor interventions (trace-cutting or bodge-wiring), or implemented on a combination of multiple boards — demonstrating how our tool assists with exhaustive matching tasks that a user would not likely perform manually. We believe our tool offers: (1) a new approach to prototyping with electronics beyond the limitations of breadboards and (2) a new approach to reducing e-waste during electronics prototyping.
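The abstract does not describe ProtoPCB's matching algorithm. As a toy illustration of the underlying question ("can this schematic's connectivity be realized on the board's existing traces?"), the sketch below phrases it as subgraph isomorphism over small hypothetical pad/connection graphs using networkx; the real tool's analysis is certainly richer than this.

```python
# Toy illustration (not ProtoPCB's actual algorithm): treat a discarded board
# and a desired schematic as graphs (nodes: pads / components, edges: existing
# or required connections) and ask whether the schematic embeds in the board.
import networkx as nx
from networkx.algorithms import isomorphism

# Existing copper on a discarded board: pads connected by leftover traces.
board = nx.Graph()
board.add_edges_from([
    ("pad1", "pad2"), ("pad2", "pad3"), ("pad3", "pad4"),
    ("pad2", "pad5"), ("pad5", "pad6"),
])

# Desired circuit: three components that must be connected in a chain.
circuit = nx.Graph()
circuit.add_edges_from([("R1", "LED1"), ("LED1", "C1")])

matcher = isomorphism.GraphMatcher(board, circuit)
if matcher.subgraph_is_isomorphic():
    # Mapping from board pads to circuit nodes for one feasible placement.
    print("placement found:", matcher.mapping)
else:
    print("no placement on this board; try bodge wires or another board")
```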
5
LLM Powered Text Entry Decoding and Flexible Typing on Smartphones
Yan Ma (Stony Brook University, Stony Brook, New York, United States), Dan Zhang (Stony Brook University, New York City, New York, United States), IV Ramakrishnan (Stony Brook University, Stony Brook, New York, United States), Xiaojun Bi (Stony Brook University, Stony Brook, New York, United States)
Large language models (LLMs) have shown exceptional performance in various language-related tasks. However, their application in keyboard decoding, which involves converting input signals (e.g. taps and gestures) into text, remains underexplored. This paper presents a fine-tuned FLAN-T5 model for decoding. It achieves 93.1% top-1 accuracy on user-drawn gestures, outperforming the widely adopted SHARK2 decoder, and 95.4% on real-word tap typing data. In particular, our decoder supports Flexible Typing, allowing users to enter a word with taps, gestures, multi-stroke gestures, and tap-gesture combinations. User study results show that Flexible Typing is beneficial and well-received by participants, where 35.9% of words were entered using word gestures, 29.0% with taps, 6.1% with multi-stroke gestures, and the remaining 29.0% using tap-gestures. Our investigation suggests that the LLM-based decoder improves decoding accuracy over existing word gesture decoders while enabling the Flexible Typing method, which enhances the overall typing experience and accommodates diverse user preferences.
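The exact input serialization and FLAN-T5 variant are not stated in the abstract. The sketch below only shows the general shape of querying a FLAN-T5 seq2seq checkpoint with a textualized tap sequence; the prompt format is an assumption, and an off-the-shelf checkpoint would still need fine-tuning of the kind the paper describes before it decodes taps or gestures usefully.

```python
# Sketch of querying a FLAN-T5 checkpoint as a keyboard decoder. The prompt
# format (serializing tap coordinates as text) is an illustrative assumption,
# not the paper's documented encoding.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-base"          # assumed size; any FLAN-T5 works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical serialization of a tap sequence on a QWERTY layout:
# nearest key plus normalized (x, y) for each touch point.
prompt = (
    "Decode the intended word from these keyboard taps: "
    "h(0.55,0.40) e(0.28,0.20) l(0.83,0.40) l(0.83,0.40) o(0.85,0.20)"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```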
5
SqueezeMe: Creating Soft Inductive Pressure Sensors with Ferromagnetic Elastomers
Thomas Preindl (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy), Andreas Pointner (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy), Nimal Jagadeesh Kumar (University of Sussex, Brighton, United Kingdom), Nitzan Cohen (Free University of Bozen-Bolzano, Bolzano, Italy), Niko Münzenrieder (Free University of Bozen-Bolzano, Bozen-Bolzano, Italy), Michael Haller (Free University of Bozen-Bolzano, Bolzano, Italy)
We introduce SqueezeMe, a soft and flexible inductive pressure sensor with high sensitivity made from ferromagnetic elastomers for wearable and embedded applications. Constructed with silicone polymers and ferromagnetic particles, this biocompatible sensor responds to pressure and deformation by varying inductance through ferromagnetic particle density changes, enabling precise measurements. We detail the fabrication process and demonstrate how silicones with varying Shore hardness and different ferromagnetic fillers affect the sensor's sensitivity. Applications like weight, air pressure, and pulse measurements showcase the sensor’s versatility for integration into soft robotics and flexible electronics.
5
"You Go Through So Many Emotions Scrolling Through Instagram": How Teens Use Instagram To Regulate Their Emotions
Katie Davis (University of Washington, Seattle, Washington, United States), Rotem Landesman (University of Washington, Seattle, Washington, United States), Jina Yoon (University of Washington, Seattle, Washington, United States), JaeWon Kim (University of Washington, Seattle, Washington, United States), Daniela E. Munoz Lopez (University of Washington, Seattle, Washington, United States), Lucia Magis-Weinberg (University of Washington, Seattle, Washington, United States), Alexis Hiniker (University of Washington, Seattle, Washington, United States)
Prior work has documented various ways that teens use social media to regulate their emotions. However, little is known about what these processes look like on a moment-by-moment basis. We conducted a diary study to investigate how teens (N=57, mean age = 16.3 years) used Instagram to regulate their emotions. We identified three kinds of emotionally-salient drivers that brought teens to Instagram and two types of behaviors that impacted their emotional experiences on the platform. Teens described going to Instagram to escape, to engage, and to manage the demands of the platform. Once on Instagram, their primary behaviors consisted of mindless diversions and deliberate acts. Although teens reported many positive emotional responses, the variety, unpredictability, and habitual nature of their experiences revealed Instagram to be an unreliable tool for emotion regulation (ER). We present a model of teens’ ER processes on Instagram and offer design considerations for supporting adolescent emotion regulation.
5
Virtual Worlds Beyond Sight: Designing and Evaluating an Audio-Haptic System for Non-Visual VR Exploration
Aayush Shrestha (Dalhousie University, Halifax, Nova Scotia, Canada), Joseph Malloch (Dalhousie University, Halifax, Nova Scotia, Canada)
Contemporary research in Virtual Reality (VR) for users who are visually impaired often employs navigation and interaction modalities that are either non-conventional or constrained by physical spaces or both. We designed and examined a hapto-acoustic VR system that mitigates this by enabling non-visual exploration of large virtual environments using white cane simulation and walk-in place locomotion. The system features a complex urban cityscape incorporating a physical cane prototype coupled with a virtual cane for rendering surface textures and an omnidirectional slide mill for navigation. In addition, spatialized audio is rendered based on the progression of sound through the geometry around the user. A study involving twenty sighted participants evaluated the system through three formative tasks while blindfolded to simulate absolute blindness. 19/20 participants successfully completed all the tasks while effectively navigating through the environment. This work highlights the potential for accessible non-visual VR experiences requiring minimal training and limited prior VR exposure.
5
Ego vs. Exo and Active vs. Passive: Investigating the Individual and Combined Effects of Viewpoint and Navigation on Spatial Immersion and Understanding in Immersive Storytelling
Tao Lu (Georgia Institute of Technology, Atlanta, Georgia, United States), Qian Zhu (The Hong Kong University of Science and Technology, Hong Kong, China), Tiffany S. Ma (Georgia Institute of Technology, Atlanta, Georgia, United States), Wong Kam-Kwai (The Hong Kong University of Science and Technology, Hong Kong, China), Anlan Xie (Georgia Institute of Technology, Atlanta, Georgia, United States), Alex Endert (Georgia Institute of Technology, Atlanta, Georgia, United States), Yalong Yang (Georgia Institute of Technology, Atlanta, Georgia, United States)
Visual storytelling combines visuals and narratives to communicate important insights. While web-based visual storytelling is well-established, leveraging the next generation of digital technologies for visual storytelling, specifically immersive technologies, remains underexplored. We investigated the impact of the story viewpoint (from the audience's perspective) and navigation (when progressing through the story) on spatial immersion and understanding. First, we collected web-based 3D stories and elicited design considerations from three VR developers. We then adapted four selected web-based stories to an immersive format. Finally, we conducted a user study (N=24) to examine egocentric and exocentric viewpoints, active and passive navigation, and the combinations they form. Our results indicated significantly higher preferences for egocentric+active (higher agency and engagement) and exocentric+passive (higher focus on content). We also found a marginal significance of viewpoints on story understanding and a strong significance of navigation on spatial immersion.
5
BudsID: Mobile-Ready and Expressive Finger Identification Input for Earbuds
Jiwan Kim (KAIST, Daejeon, Korea, Republic of), Mingyu Han (UNIST, Ulsan, Korea, Republic of), Ian Oakley (KAIST, Daejeon, Korea, Republic of)
Wireless earbuds are an appealing platform for wearable computing on-the-go. However, their small size and out-of-view location mean they support limited different inputs. We propose finger identification input on earbuds as a novel technique to resolve these problems. This technique involves associating touches by different fingers with different responses. To enable it on earbuds, we adapted prior work on smartwatches to develop a wireless earbud featuring a magnetometer that detects fields from a magnetic ring. A first study reveals participants achieve rapid, precise earbud touches with different fingers, even while mobile (time: 0.98s, errors: 5.6%). Furthermore, touching fingers can be accurately classified (96.9%). A second study shows strong performance with a more expressive technique involving multi-finger double-taps (inter-touch time: 0.39s, errors: 2.8%) while maintaining high accuracy (94.7%). We close by exploring and evaluating the design of earbud finger identification applications and demonstrating the feasibility of our system on low-resource devices.
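The abstract does not name the classifier behind the 96.9% finger identification accuracy. Purely as an illustration of the kind of lightweight pipeline that fits low-resource devices, the sketch below trains a small SVM on synthetic 3-axis magnetometer features; the feature set and data are invented for the example.

```python
# Illustration only (synthetic data, not BudsID's model): classify which finger
# touched the earbud from magnetometer-derived features with a small SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fingers = ["index", "middle", "ring"]

# Fake per-touch features: e.g., mean and peak magnetic field per axis (6 values).
# Each finger approaches the magnetic ring differently, so each class gets a
# different synthetic offset.
X = np.vstack([rng.normal(loc=i * 2.0, scale=0.5, size=(100, 6))
               for i in range(len(fingers))])
y = np.repeat(fingers, 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"held-out accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")
```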
5
iGripper: A Semi-Active Handheld Haptic VR Controller Based on Variable Stiffness Mechanism
Ke Shi (Southeast University, Nanjing, China), Tongshu Chen (Southeast University, Nanjing, China), Yichen Xiang (Southeast University, Nanjing, China), Ye Li (Southeast University, Nanjing, Jiangsu, China), Lifeng Zhu (Southeast University, Nanjing, Jiangsu, China), Aiguo Song (Southeast University, Nanjing, Jiangsu, China)
We introduce iGripper, a handheld haptic controller designed to render stiffness feedback for gripping and clamping both rigid and elastic objects in virtual reality. iGripper directly adjusts physical stiffness by using a small linear actuator to modify the spring’s position along a lever arm, with feedback force generated by the spring's reaction to the user's input. This enables iGripper to render stiffness from zero to any specified value, determined by the spring's inherent stiffness. Additionally, a blocking mechanism is designed to provide fully rigid feedback to enlarge the rendering range. Compared to active controllers, iGripper offers a broad range of force and stiffness feedback without requiring high-power actuators. Unlike many passive controllers, which provide only braking force, iGripper, as a semi-active controller, delivers controllable elastic force feedback. We present the iGripper’s design, performance evaluation, and user studies, comparing its realism with a commercial impedance-type grip device.
5
TH-Wood: Developing Thermo-Hygro-Coordinating Driven Wood Actuators to Enhance Human-Nature Interaction
Guanyun Wang (Zhejiang University, Hangzhou, China), Chuang Chen (Zhejiang University, Hangzhou, China), Xiao Jin (Imperial College London, London, United Kingdom), Yulu Chen (University College London, London, United Kingdom), Yangweizhe Zheng (Northeast Forestry University, Harbin, China), Qianzi Zhen (Zhejiang University, Hangzhou, China), Yang Zhang (Imperial College London, London, United Kingdom), Jiaji Li (MIT, Cambridge, Massachusetts, United States), Yue Yang (Zhejiang University, Hangzhou, China), Ye Tao (Hangzhou City University, Hangzhou, China), Shijian Luo (Zhejiang University, Hangzhou, Zhejiang, China), Lingyun Sun (Zhejiang University, Hangzhou, China)
Wood has become increasingly applied in shape-changing interfaces for its eco-friendly and smart responsive properties, while its applications face challenges as it remains primarily driven by humidity. We propose TH-Wood, a biodegradable actuator system composed of wood veneer and microbial polymers, driven by both temperature and humidity, and capable of functioning in complex outdoor environments. This dual-factor-driven approach enhances the sensing and response channels, allowing for more sophisticated coordinating control methods. To assist in designing and utilizing the system more effectively, we developed a structure library inspired by dynamic plant forms, conducted extensive technical evaluations, created an educational platform accessible to users, and provided a design tool for deformation adjustments and behavior previews. Finally, several ecological applications demonstrate the potential of TH-Wood to significantly enhance human interaction with natural environments and expand the boundaries of human-nature relationships.
5
"Grab the Chat and Stick It to My Wall": Understanding How Social VR Streamers Bridge Immersive VR Experiences with Streaming Audiences Outside VR
Yang Hu (Clemson University, Clemson, South Carolina, United States), Guo Freeman (Clemson University, Clemson, South Carolina, United States), Ruchi Panchanadikar (Clemson University, Clemson, South Carolina, United States)
Social VR platforms are increasingly transforming online social spaces by enhancing embodied and immersive social interactions within VR. However, how social VR users also share their activities outside the social VR platform, such as on 2D live streaming platforms, is an increasingly popular yet understudied phenomenon that blends social VR and live streaming research. Through 17 interviews with experienced social VR streamers, we unpack social VR streamers' innovative strategies to further blur the boundary between VR and non-VR spaces to engage their audiences and potential limitations of their strategies. We add new insights into how social VR streamers transcend traditional 2D streamer-audience engagement, which also extend our current understandings of cross-reality interactions. Grounded in these insights, we propose design implications to better support more complicated cross-reality dynamics in social VR streaming while mitigating potential tensions, in hopes of achieving more inclusive, engaging, and secure cross-reality environments in the future.
5
PCB Renewal: Iterative Reuse of PCB Substrates for Sustainable Electronic Making
Zeyu Yan (University of Maryland, College Park, Maryland, United States), Advait Vartak (University of Maryland, College Park, Maryland, United States), Jiasheng Li (University of Maryland, College Park, Maryland, United States), Zining Zhang (University of Maryland, College Park, Maryland, United States), Huaishu Peng (University of Maryland, College Park, Maryland, United States)
PCB (printed circuit board) substrates are often single-use, leading to material waste in electronics making. We introduce PCB Renewal, a novel technique that "erases" and "reconfigures" PCB traces by selectively depositing conductive epoxy onto outdated areas, transforming isolated paths into conductive planes that support new traces. We present the PCB Renewal workflow, evaluate its electrical performance and mechanical durability, and model its sustainability impact, including material usage, cost, energy consumption, and time savings. We develop a software plug-in that guides epoxy deposition, generates updated PCB profiles, and calculates resource usage. To demonstrate PCB Renewal’s effectiveness and versatility, we repurpose a single PCB across four design iterations spanning three projects: a camera roller, a WiFi radio, and an ESPboy game console. We also show how an outsourced double-layer PCB can be reconfigured, transforming it from an LED watch to an interactive cat toy. The paper concludes with limitations and future directions.
5
Everything to Gain: Combining Area Cursors with increased Control-Display Gain for Fast and Accurate Touchless Input
Kieran Waugh (University of Glasgow, Glasgow, Scotland, United Kingdom), Mark McGill (University of Glasgow, Glasgow, Lanarkshire, United Kingdom), Euan Freeman (University of Glasgow, Glasgow, United Kingdom)
Touchless displays often use mid-air gestures to control on-screen cursors for pointer interactions. Area cursors can simplify touchless cursor input by implicitly targeting nearby widgets without the cursor entering the target. However, for displays with dense target layouts, the cursor still has to arrive close to the widget, meaning the benefits of area cursors for time-to-target and effort are diminished. Through two experiments, we demonstrate for the first time that fine-tuning the mapping between hand and cursor movements (control-display gain, CDG) can address the deficiencies of area cursors and improve the performance of touchless interaction. Across several display sizes and target densities (representative of myriad public displays used in retail, transport, museums, etc.), our findings show that the forgiving nature of an area cursor compensates for the imprecision of a high CDG, helping users interact more effectively with smaller and more controlled hand/arm movements.
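A minimal sketch of the two mechanisms combined in the study, under assumed values for the gain and the area-cursor radius: hand movement scaled by a control-display gain, and selection of the nearest widget within the area cursor's reach.

```python
# Minimal sketch: a constant control-display gain (CDG) applied to hand
# movement, plus an area cursor that selects the nearest target within a
# radius. Gain and radius values are illustrative, not the study's settings.
import math

def update_cursor(cursor, hand_delta, cdg=3.0):
    """Move the on-screen cursor by the hand movement scaled by the CDG."""
    return (cursor[0] + cdg * hand_delta[0], cursor[1] + cdg * hand_delta[1])

def area_cursor_pick(cursor, targets, radius=80.0):
    """Return the closest target centre within `radius` pixels, if any."""
    best, best_d = None, radius
    for name, (tx, ty) in targets.items():
        d = math.hypot(tx - cursor[0], ty - cursor[1])
        if d <= best_d:
            best, best_d = name, d
    return best

targets = {"Buy ticket": (400, 300), "Help": (650, 300)}
cursor = (100.0, 290.0)
cursor = update_cursor(cursor, hand_delta=(95.0, 2.0))   # small hand movement
print(cursor, "->", area_cursor_pick(cursor, targets))   # selects "Buy ticket"
```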
5
Exploring Mobile Touch Interaction with Large Language Models
Tim Zindulka (University of Bayreuth, Bayreuth, Germany), Jannek Maximilian Sekowski (University of Bayreuth, Bayreuth, Germany), Florian Lehmann (University of Bayreuth, Bayreuth, Germany), Daniel Buschek (University of Bayreuth, Bayreuth, Germany)
Interacting with Large Language Models (LLMs) for text editing on mobile devices currently requires users to break out of their writing environment and switch to a conversational AI interface. In this paper, we propose to control the LLM via touch gestures performed directly on the text. We first chart a design space that covers fundamental touch input and text transformations. In this space, we then concretely explore two control mappings: spread-to-generate and pinch-to-shorten, with visual feedback loops. We evaluate this concept in a user study (N=14) that compares three feedback designs: no visualisation, text length indicator, and length + word indicator. The results demonstrate that touch-based control of LLMs is both feasible and user-friendly, with the length + word indicator proving most effective for managing text generation. This work lays the foundation for further research into gesture-based interaction with LLMs on touch devices.
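A rough sketch of how a spread or pinch gesture could be mapped to a requested text length, in the spirit of the paper's spread-to-generate and pinch-to-shorten mappings; the scaling rule is an assumption, and rewrite_with_llm() is a hypothetical placeholder rather than an API from the paper.

```python
# Sketch of a pinch/spread-to-length control mapping. The gesture-to-length
# rule and the rewrite_with_llm() placeholder are assumptions for illustration.
import math

def spread_factor(start_touches, end_touches):
    """Ratio of finger spacing at gesture end vs. start (>1 spread, <1 pinch)."""
    def spacing(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)
    return spacing(end_touches) / spacing(start_touches)

def target_word_count(selected_text, factor):
    """Map the gesture magnitude to a requested length for the LLM."""
    current = len(selected_text.split())
    return max(1, round(current * factor))

def rewrite_with_llm(text, word_count):
    # Hypothetical placeholder: call whichever LLM backend is available with a
    # prompt like "Rewrite the following in about {word_count} words: {text}".
    raise NotImplementedError

selection = "The meeting has been moved to Thursday afternoon because of a scheduling conflict."
factor = spread_factor([(100, 500), (180, 500)], [(80, 500), (220, 500)])  # spread
print(target_word_count(selection, factor))   # requests a longer rewrite
```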
5
How Can Interactive Technology Help Us to Experience Joy With(in) the Forest? Towards a Taxonomy of Tech for Joyful Human-Forest Interactions
Ferran Altarriba Bertran (Tampere University, Tampere, Finland), Oğuz 'Oz' Buruk (Tampere University, Tampere, Finland), Jordi Márquez Puig (Universitat de Girona, Salt, Girona, Spain), Juho Hamari (Tampere University, Tampere, Finland)
This paper presents intermediate-level knowledge in the form of a taxonomy that highlights 12 different ways in which interactive tech might support forest-related experiences that are joyful for humans. It can inspire and provide direction for designs that aim to enrich the experiential texture of forests. The taxonomy stemmed from a reflexive analysis of 104 speculative ideas produced during a year-long co-design process, where we co-experienced and creatively engaged a diverse range of forests and forest-related activities with 250+ forest-goers with varied backgrounds and sensitivities. Given the breadth of forests and populations involved, our work foregrounds a rich set of design directions that set an actionable early frame for creating tech that supports joyful human-forest interplays – one that we hope will be extended and consolidated in future research, ours and others'.
5
FlexEar-Tips: Shape-Adjustable Ear Tips Using Pressure Control
Takashi Amesaka (Keio University, Yokohama, Japan), Takumi Yamamoto (Keio University, Yokohama, Japan), Hiroki Watanabe (Future University Hakodate, Hakodate, Japan), Buntarou Shizuki (University of Tsukuba, Tsukuba, Ibaraki, Japan), Yuta Sugiura (Keio University, Yokohama, Japan)
We introduce FlexEar-Tips, a dynamic ear tip system designed for next-generation hearables. The ear tips are controlled by an air pump and solenoid valves, enabling size adjustments for comfort and functionality. FlexEar-Tips includes an air pressure sensor to monitor ear tip size, allowing it to adapt to environmental conditions and user needs. In the evaluation, we conducted a preliminary investigation of the size control accuracy and the minimum perceivable variation of haptic sensation in the user's ear. We then evaluated the user's ability to identify patterns in the haptic notification system, the impact on the music listening experience, the relationship between the size of the ear tips and sound localization ability, and the impact on the reduction of humidity in the ear using a model. We proposed new interaction modalities for adaptive hearables and discussed health monitoring, immersive auditory experiences, haptic notifications, biofeedback, and sensing.
4
IntelliLining: Activity Sensing through Textile Interlining Sensors Using TENGs
Mahdie Ghane Ezabadi (Simon Fraser University, Burnaby, British Columbia, Canada), Aditya Shekhar Nittala (University of Calgary, Calgary, Alberta, Canada), Xing-Dong Yang (Simon Fraser University, Burnaby, British Columbia, Canada), Te-Yen Wu (Florida State University, Tallahassee, Florida, United States)
We introduce a novel component for smart garments: smart interlining, and validate its technical feasibility through a series of experiments. Our work involved the implementation of a prototype that employs a textile vibration sensor based on Triboelectric Nanogenerators (TENGs), commonly used for activity detection. We explore several unique features of smart interlining, including how sensor signals and patterns are influenced by factors such as the size and shape of the interlining sensor, the location of the vibration source within the sensor area, and various propagation media, such as airborne and surface vibrations. We present our study results and discuss how these findings support the feasibility of smart interlining. Additionally, we demonstrate that smart interlinings on a shirt can detect a variety of user activities involving the hand, mouth, and upper body, achieving an accuracy rate of 93.9% in the tested activities.
4
Wearable Material Properties: Passive Wearable Microstructures as Adaptable Interfaces for the Physical Environment
Yuyu Lin (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Hatice Gokcen Guner (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Jianzhe Gu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Sonia Prashant (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Alexandra Ion (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Users interact with static objects daily, but their preferences and needs may vary. Making the objects dynamic or adaptable requires updating all objects. Instead, we propose a novel wearable interface that empowers users to adjust perceived material properties. To explore such wearable interfaces, we design unit cell structures that can be tiled to create surfaces with switchable properties. Each unit can be switched between two states while worn, through an integrated bistable spring and tendon-driven trigger mechanism. Our switchable properties include stiffness, height, shape, texture, and their combinations. Our wearable material interfaces are passive, 3D printed, and personalizable. We present a design tool to support users in designing their customized wearable material properties. We demonstrate several example prototypes, e.g., a sleeve allowing users to adapt to how different surfaces feel, a shoe sole for users walking on different ground conditions, a prototype supporting both pillow and protective helmet properties, or a collar that can be transformed into a neck pillow with variable support.
4
FIP: Endowing Robust Motion Capture on Daily Garment by Fusing Flex and Inertial Sensors
Ruonan Zheng (Xiamen University, Xiamen, China), Jiawei Fang (Xiamen University, Xiamen, China), Yuan Yao (School of Informatics, Xiamen University, Xiamen, Fujian, China), Xiaoxia Gao (Xiamen University, Xiamen, Fujian, China), Chengxu Zuo (School of Informatics, Xiamen, Fujian, China), Shihui Guo (Software School, Xiamen, Fujian, China), Yiyue Luo (University of Washington, Seattle, Washington, United States)
What if our clothes could capture our body motion accurately? This paper introduces Flexible Inertial Poser (FIP), a novel motion-capturing system using daily garments with two elbow-attached flex sensors and four Inertial Measurement Units (IMUs). To address the inevitable sensor displacements in loose wearables which degrade joint tracking accuracy significantly, we identify the distinct characteristics of the flex and inertial sensor displacements and develop a Displacement Latent Diffusion Model and a Physics-informed Calibrator to compensate for sensor displacements based on such observations, resulting in a substantial improvement in motion capture accuracy. We also introduce a Pose Fusion Predictor to enhance multimodal sensor fusion. Extensive experiments demonstrate that our method achieves robust performance across varying body shapes and motions, significantly outperforming SOTA IMU approaches with a 19.5% improvement in angular error, a 26.4% improvement in elbow angular error, and a 30.1% improvement in positional error. FIP opens up opportunities for ubiquitous human-computer interactions and diverse interactive applications such as Metaverse, rehabilitation, and fitness analysis. Our project page can be seen at https://fangjw-0722.github.io/FIP.github.io/
4
Estimating the Effects of Encumbrance and Walking on Mixed Reality Interaction
Tinghui Li (University of Sydney, Sydney, Australia)Eduardo Velloso (University of Sydney, Sydney, New South Wales, Australia)Anusha Withana (The University of Sydney, Sydney, NSW, Australia)Zhanna Sarsenbayeva (University of Sydney, Sydney, Australia)
This paper investigates the effects of two situational impairments, encumbrance (i.e., carrying a heavy object) and walking, on interaction performance in canonical mixed reality tasks. We built Bayesian regression models of movement time, pointing offset, error rate, and throughput for the target acquisition task, and of throughput, uncorrected error rate (UER), and corrected error rate (CER) for the text entry task, to estimate these effects. Our results indicate that 1.0 kg of encumbrance increases selection movement time by 28%, decreases text entry throughput by 17%, and increases UER by 50%, but does not affect pointing offset. Walking led to a 63% increase in ray-cast movement time and a 51% reduction in text entry throughput. It also increased selection pointing offset by 16%, ray-cast pointing offset by 17%, and error rate by 8.4%. The interaction effect of 1.0 kg encumbrance and walking resulted in a 112% increase in ray-cast movement time. Our findings enhance the understanding of the effects of encumbrance and walking on mixed reality interaction, and contribute towards the accumulating knowledge of situational impairments research in mixed reality.
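To make the modeling approach concrete, here is a minimal conjugate Bayesian linear regression estimating a single condition effect (say, encumbered vs. unencumbered movement time); the priors, noise level, and synthetic data are assumptions and do not reproduce the authors' models.

```python
# Illustrative sketch (not the authors' models): conjugate Bayesian linear
# regression for the effect of one binary condition on movement time.
import numpy as np

rng = np.random.default_rng(1)
n = 200
encumbered = rng.integers(0, 2, size=n)                  # 0 = baseline, 1 = 1 kg
movement_time = 0.9 + 0.25 * encumbered + rng.normal(0, 0.15, size=n)

X = np.column_stack([np.ones(n), encumbered])            # intercept + condition
sigma2 = 0.15 ** 2                                       # assumed known noise
prior_cov = np.eye(2)                                    # weak Gaussian prior

# Posterior over weights: Sigma = (X'X/sigma2 + prior_cov^-1)^-1,
#                         mu    = Sigma X'y / sigma2  (zero prior mean).
Sigma = np.linalg.inv(X.T @ X / sigma2 + np.linalg.inv(prior_cov))
mu = Sigma @ X.T @ movement_time / sigma2

print(f"estimated encumbrance effect: {mu[1]:.3f} s +/- {np.sqrt(Sigma[1, 1]):.3f}")
```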
4
Can you pass that tool?: Implications of Indirect Speech in Physical Human-Robot Collaboration
Yan Zhang (University of Melbourne, Melbourne, VIC, Australia)Tharaka Sachintha Ratnayake (University of Melbourne, Melbourne, Australia)Cherie Sew (University of Melbourne, Melbourne, Australia)Jarrod Knibbe (The University of Queensland, St Lucia, QLD, Australia)Jorge Goncalves (University of Melbourne, Melbourne, Australia)Wafa Johal (University of Melbourne, Melbourne, VIC, Australia)
Indirect speech acts (ISAs) are a natural pragmatic feature of human communication, allowing requests to be conveyed implicitly while maintaining subtlety and flexibility. Although advancements in speech recognition have enabled natural language interactions with robots through direct, explicit commands, providing clarity in communication, the rise of large language models presents the potential for robots to interpret ISAs. However, empirical evidence on the effects of ISAs on human-robot collaboration (HRC) remains limited. To address this, we conducted a Wizard-of-Oz study (N=36), engaging a participant and a robot in collaborative physical tasks. Our findings indicate that robots capable of understanding ISAs significantly improve humans' perceived robot anthropomorphism, team performance, and trust. However, the effectiveness of ISAs is task- and context-dependent, thus requiring careful use. These results highlight the importance of appropriately integrating direct and indirect requests in HRC to enhance collaborative experiences and task performance.
4
Understanding and Supporting Peer Review Using AI-reframed Positive Summary
Chi-Lan Yang (The University of Tokyo, Tokyo, Japan)Alarith Uhde (The University of Tokyo, Tokyo, Japan)Naomi Yamashita (NTT, Keihanna, Japan)Hideaki Kuzuoka (The University of Tokyo, Bunkyo-ku, Tokyo, Japan)
While peer review enhances writing and research quality, harsh feedback can frustrate and demotivate authors. Hence, it is essential to explore how critiques should be delivered to motivate authors and enable them to keep iterating on their work. In this study, we explored the impact of appending an automatically generated positive summary to the peer reviews of a writing task, alongside varying levels of overall evaluations (high vs. low), on authors' feedback reception, revision outcomes, and motivation to revise. Through a 2x2 online experiment with 137 participants, we found that adding an AI-reframed positive summary to otherwise harsh feedback increased authors' critique acceptance, whereas low overall evaluations of their work led to increased revision efforts. We discuss the implications of using AI in peer feedback, focusing on how AI-driven critiques can influence critique acceptance and support research communities in fostering productive and friendly peer feedback practices.
4
Creating Furniture-Scale Deployable Objects with a Computer-Controlled Sewing Machine
Sapna Tayal (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Lea Albaugh (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)James McCann (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Scott E. Hudson (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
We introduce a novel method for fabricating functional flat-to-shape objects using a large computer-controlled sewing machine (11 ft / 3.4m wide), a process that is both rapid and scalable beyond the machine's sewable area. Flat-to-shape deployable objects can allow for quick and easy need-based activation, but the selective flexibility required can involve complex fabrication or tedious assembly. In our method, we sandwich rigid form-defining materials, such as plywood and acrylic, between layers of fabric. The sewing process secures these layers together, creating soft hinges between the rigid inserts which allow the object to transition smoothly into its three-dimensional functional form with little post-processing.
4
Cross, Dwell, or Pinch: Designing and Evaluating Around-Device Selection Methods for Unmodified Smartwatches
Jiwan Kim (KAIST, Daejeon, Korea, Republic of)Jiwan Son (KAIST, Daejeon, Korea, Republic of)Ian Oakley (KAIST, Daejeon, Korea, Republic of)
Smartwatches offer powerful features, but their small touchscreens limit the expressiveness of the input that can be achieved. To address this issue, we present, and open-source, the first sonar-based around-device input on an unmodified consumer smartwatch. We achieve this using a fine-grained, one-dimensional sonar-based finger-tracking system. In addition, we use this system to investigate the fundamental issue of how to trigger selections during around-device smartwatch input through two studies. The first examines the methods of double-crossing, dwell, and finger tap in a binary task, while the second considers a subset of these designs in a multi-target task and in the presence and absence of haptic feedback. Results showed double-crossing was optimal for binary tasks, while dwell excelled in multi-target scenarios, and haptic feedback enhanced comfort but not performance. These findings offer design insights for future around-device smartwatch interfaces that can be directly deployed on today’s consumer hardware.
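As a sketch of the underlying ranging principle, the snippet below cross-correlates a transmitted near-ultrasonic chirp with a simulated echo to recover a finger distance; the chirp parameters, noise, and simulated delay are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch (assumed parameters): one-dimensional sonar ranging by
# matched-filtering a transmitted chirp against the received echo.
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000             # sample rate (Hz)
SPEED_OF_SOUND = 343.0  # m/s

t = np.arange(0, 0.01, 1 / FS)                        # 10 ms sweep
tx = chirp(t, f0=18_000, f1=22_000, t1=t[-1])         # near-ultrasonic chirp

# Simulate an echo delayed by the round trip to a finger 5 cm away.
true_distance = 0.05
delay_samples = int(round(2 * true_distance / SPEED_OF_SOUND * FS))
rx = np.concatenate([np.zeros(delay_samples), tx]) + \
     np.random.default_rng(0).normal(0, 0.05, len(tx) + delay_samples)

# The correlation peak gives the round-trip delay in samples.
corr = correlate(rx, tx, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))
est_distance = est_delay / FS * SPEED_OF_SOUND / 2
print(f"estimated distance: {est_distance * 100:.1f} cm")
```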
4
"It's about Research. It's Not about Language": Understanding and Designing for Mitigating Non-Native English-Speaking Presenters' Challenges in Live Q&A Sessions at Academic Conferences
Lingyuan Li (The University of Texas at Austin, Austin, Texas, United States)Ge Wang (Stanford University, Stanford, California, United States)Guo Freeman (Clemson University, Clemson, South Carolina, United States)
Live Q&A sessions at English-based, international academic conferences usually pose significant challenges for non-native English-speaking presenters, as they demand real-time comprehension and response in one's non-native language under stress. While language-supportive tools (e.g., real-time translation, transcription) can help alleviate such challenges, their adoption remains limited, even at HCI academic conferences that focus on how technology can better serve human needs. Through in-depth interviews with 15 non-native English-speaking academics, we identify their concerns and expectations regarding technological language support for HCI live Q&As. Our research provides critical design implications for future language support tools by highlighting the importance of culturally-aware solutions that offer accurate and seamless language experiences while fostering personal growth and building confidence. We also call for community-wide efforts in HCI to embrace more inclusive practices that actively support non-native English speakers, which can empower all scholars to equally engage in the HCI academic discourse regardless of their native languages.
4
From Alien to Ally: Exploring Non-Verbal Communication with Non-Anthropomorphic Avatars in a Collaborative Escape-Room
Federico Espositi (Politecnico di Milano, Milan, Italy)Maurizio Vetere (Politecnico di Milano, Milan, Italy)Andrea Bonarini (Politecnico di Milano, Milan, Italy)
Despite the spread of technologies in the physical world and the normalization of virtual experiences, non-verbal communication with radically non-anthropomorphic avatars remains an underexplored frontier. We present an interaction system in which two participants must learn to communicate with each other non-verbally through a digital filter that morphs their appearance. In a collaborative escape room, the Visitor must teach a non-anthropomorphic physical robot to play, while the Controller, in a different location, embodies the robot with an altered perception of the environment and the Visitor’s companion in VR. This study addresses the design of the activity, the robot, and the virtual environment, with a focus on how the Visitor’s morphology is translated in VR. Results show that participants were able to develop emergent and effective communication strategies, with the Controller naturally embodying its avatar’s narrative, making this system a promising testbed for future research on human-technology interaction, entertainment, and embodiment.
4
MotionBlocks: Modular Geometric Motion Remapping for More Accessible Upper Body Movement in Virtual Reality
Johann Wentzel (University of Waterloo, Waterloo, Ontario, Canada)Alessandra Luz (University of Waterloo, Waterloo, Ontario, Canada)Martez E. Mott (Microsoft Research, Redmond, Washington, United States)Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
Movement-based spatial interaction in VR can present significant challenges for people with limited mobility, particularly due to the mismatch between the upper body motion a VR app requires and the user's capabilities. We describe MotionBlocks, an approach which enables 3D spatial input with smaller motions or simpler input devices using modular geometric motion remapping. A formative study identifies common accessibility issues within VR motion design, and informs a design language of VR motions that fall within simple geometric primitives. These 3D primitives enable collapsing spatial or non-spatial input into a normalized input vector, which is then expanded into a second 3D primitive representing larger, more complex 3D motions. An evaluation with people with mobility limitations found that using geometric primitives for highly customized upper body input remapping reduced physical workload, temporal workload, and perceived effort.
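A minimal sketch of the collapse-and-expand remapping idea follows, using spheres as the input and output primitives; the centers and radii are arbitrary example values, not values from the paper.

```python
# Illustrative sketch of geometric motion remapping (not the authors' code):
# collapse a hand position inside a small input sphere to a normalized vector,
# then expand it into a larger output sphere.
import numpy as np

def collapse(pos, center, radius):
    """Map a point inside an input sphere to a normalized vector in the unit ball."""
    v = (np.asarray(pos) - center) / radius
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n          # clamp points outside the sphere

def expand(norm_vec, center, radius):
    """Map a normalized vector back out into a (larger) output sphere."""
    return center + norm_vec * radius

# Small, comfortable motions near the lap remapped to large motions at chest height.
input_center, input_radius = np.array([0.0, 0.9, 0.3]), 0.10    # metres
output_center, output_radius = np.array([0.0, 1.4, 0.5]), 0.60

hand = np.array([0.05, 0.95, 0.33])
virtual_hand = expand(collapse(hand, input_center, input_radius),
                      output_center, output_radius)
print(virtual_hand)
```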
4
Slip Casting as a Machine for Making Textured Ceramic Interfaces
Bo Han (National University of Singapore, Singapore, Singapore)Jared Lim (National University of Singapore, Singapore, Singapore)Kianne Lim (National University of Singapore, Singapore, Singapore)Adam Choo (National University of Singapore, Singapore, Singapore)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore)Genevieve Ang (Independent Artist, Singapore, Singapore)Clement Zheng (National University of Singapore, Singapore, Singapore)
Ceramics provide a rich domain for exploring craft, fabrication, and diverse material textures that enhance tangible interaction. In this work, we explored slip-casting, a traditional ceramic technique where liquid clay is poured into a porous plaster mold that absorbs water from the slip to form a clay body. We adapted this process into an approach we called Resist Slip-Casting. By selectively masking the mold’s surface with stickers to vary its water absorption rate, our approach enables makers to create ceramic objects with intricate textured surfaces, while also allowing the customization of a single mold for different outcomes. In this paper, we detail the resist slip-casting process and demonstrate its application by crafting a range of tangible interfaces with customizable visual symbols, tactile features, and decorative elements. We further discuss our approach within the broader conversation in HCI on fabrication machines that promote creative collaboration between humans, materials, and tools.
4
Wordplay: Accessible, Multilingual, Interactive Typography
Amy J. Ko (University of Washington, Seattle, Washington, United States)Carlos Aldana Lira (Middle Tennessee State University, Murfreesboro, Tennessee, United States)Isabel Amaya (University of Washington, Seattle, Washington, United States)
Educational programming languages (EPLs) are rarely designed to be both accessible and multilingual. We describe a 30-month community-engaged case study to surface design challenges at this intersection, creating Wordplay, an accessible, multilingual platform for youth to program interactive typography. Wordplay combines functional programming, multilingual text, multimodal editors, time travel debugging, and teacher- and youth-centered community governance. Across five 2-hour focus group sessions, a group of 6 multilingual students and teachers affirmed many of the platform’s design choices, but reinforced that design at the margins was unfinished, including support for limited internet access, decade-old devices, and high turnover of device use by students with different access, language, and attentional needs. The group also highlighted open source platforms like GitHub as unsuitable for engaging youth. These findings suggest that EPLs that are both accessible and language-inclusive are feasible, but that there remain many design tensions between language design, learnability, accessibility, culture, and governance.
4
Dreamcrafter: Immersive Editing of 3D Radiance Fields Through Flexible, Generative Inputs and Outputs
Cyrus Vachha (University of California, Berkeley, Berkeley, California, United States)Yixiao Kang (University of California, Berkeley, Berkeley, California, United States)Zach Dive (University of California, Berkeley, Berkeley, California, United States)Ashwat Chidambaram (University of California, Berkeley, Berkeley, California, United States)Anik Gupta (University of California, Berkeley, Berkeley, California, United States)Eunice Jun (University of California, Los Angeles, Los Angeles, California, United States)Bjoern Hartmann (UC Berkeley, Berkeley, California, United States)
Authoring 3D scenes is a central task for spatial computing applications. Competing visions for lowering existing barriers are to (1) focus on immersive, direct manipulation of 3D content or (2) leverage AI techniques that capture real scenes (3D Radiance Fields such as NeRFs and 3D Gaussian Splatting) and modify them at a higher level of abstraction, at the cost of high latency. We unify the complementary strengths of these approaches and investigate how to integrate generative AI advances into real-time, immersive 3D Radiance Field editing. We introduce Dreamcrafter, a VR-based 3D scene editing system that: (1) provides a modular architecture to integrate generative AI algorithms; (2) combines different levels of control for creating objects, including natural language and direct manipulation; and (3) introduces proxy representations that support interaction during high-latency operations. We contribute empirical findings on control preferences and discuss how generative AI interfaces beyond text input enhance creativity in scene editing and world building.
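To illustrate the proxy-representation pattern for hiding latency, here is a minimal sketch in which a placeholder object is inserted immediately and upgraded in the background once a slow generative call returns; the class names, timing, and structure are assumptions, not Dreamcrafter's actual architecture.

```python
# Minimal sketch (assumed structure): insert a manipulable proxy right away,
# then swap in the generated asset when the slow generative call completes.
import asyncio
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    asset: str            # "proxy" until the generated asset replaces it

async def generate_asset(prompt: str) -> str:
    await asyncio.sleep(2.0)                 # stand-in for a slow generative model
    return f"generated:{prompt}"

def add_object(scene: list, prompt: str) -> SceneObject:
    """Insert a proxy immediately; upgrade it in the background."""
    obj = SceneObject(name=prompt, asset="proxy")
    scene.append(obj)

    async def _upgrade():
        obj.asset = await generate_asset(prompt)

    asyncio.ensure_future(_upgrade())        # does not block the interaction loop
    return obj

async def main():
    scene: list[SceneObject] = []
    obj = add_object(scene, "mossy stone bench")
    print("while generating:", obj)          # proxy is already placed and movable
    await asyncio.sleep(2.5)
    print("after generation:", obj)

asyncio.run(main())
```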