List of Notable Papers

Showing up to the top 30 papers in each category.

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

4
Sprayable Sound: Exploring the Experiential and Design Potential of Physically Spraying Sound Interaction
Jongik Jeon (KAIST, Daejeon, Korea, Republic of)Chang Hee Lee (KAIST (Korea Advanced Institute of Science and Technology), Daejeon, Korea, Republic of)
Perfume and fragrance have captivated people for centuries across different cultures. Inspired by the ephemeral nature of sprayable olfactory interactions and experiences, we explore the potential of applying a similar interaction principle to the auditory modality. In this paper, we present SoundMist, a sonic interaction method that enables users to generate ephemeral auditory presences by physically dispersing a liquid into the air, much like the fading phenomenon of fragrance. We conducted a study to understand the experiential factors inherent in sprayable sound interaction and held an ideation workshop to identify potential design spaces or opportunities that this interaction could shape. Our findings, derived from thematic analysis, suggest that physically sprayable sound interaction can induce experiences related to four key factors—materiality of sound produced by dispersed liquid particles, different sounds entangled with each liquid, illusive perception of temporally floating sound, and enjoyment derived from blending different sounds—and can be applied to artistic practices, safety indications, multisensory approaches, and emotional interfaces.
4
Customizing Emotional Support: How Do Individuals Construct and Interact With LLM-Powered Chatbots
Xi Zheng (City University of Hong Kong, Hong Kong, China)Zhuoyang LI (City University of Hong Kong, Hong Kong, China)Xinning Gui (The Pennsylvania State University, University Park, Pennsylvania, United States)Yuhan Luo (City University of Hong Kong, Hong Kong, China)
Personalized support is essential to fulfill individuals’ emotional needs and sustain their mental well-being. Large language models (LLMs), with great customization flexibility, hold promises to enable individuals to create their own emotional support agents. In this work, we developed ChatLab, where users could construct LLM-powered chatbots with additional interaction features including voices and avatars. Using a Research through Design approach, we conducted a week-long field study followed by interviews and design activities (N = 22), which uncovered how participants created diverse chatbot personas for emotional reliance, confronting stressors, connecting to intellectual discourse, reflecting mirrored selves, etc. We found that participants actively enriched the personas they constructed, shaping the dynamics between themselves and the chatbot to foster open and honest conversations. They also suggested other customizable features, such as integrating online activities and adjustable memory settings. Based on these findings, we discuss opportunities for enhancing personalized emotional support through emerging AI technologies.
4
Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
Pat Pataranutaporn (Massachusetts Institute of Technology, Boston, Massachusetts, United States)Chayapatr Archiwaranguprok (University of the Thai Chamber of Commerce, Bangkok, Thailand)Samantha W. T. Chan (MIT Media Lab, Cambridge, Massachusetts, United States)Elizabeth Loftus (UC Irvine, Irvine, California, United States)Pattie Maes (MIT Media Lab, Cambridge, Massachusetts, United States)
AI is increasingly used to enhance images and videos, both intentionally and unintentionally. As AI editing tools become more integrated into smartphones, users can modify or animate photos into realistic videos. This study examines the impact of AI-altered visuals on false memories—recollections of events that didn’t occur or deviate from reality. In a pre-registered study, 200 participants were divided into four conditions of 50 each. Participants viewed original images, completed a filler task, then saw stimuli corresponding to their assigned condition: unedited images, AI-edited images, AI-generated videos, or AI-generated videos of AI-edited images. AI-edited visuals significantly increased false recollections, with AI-generated videos of AI-edited images having the strongest effect (2.05x compared to control). Confidence in false memories was also highest for this condition (1.19x compared to control). We discuss potential applications in HCI, such as therapeutic memory reframing, and challenges in ethical, legal, political, and societal domains.
4
EchoBreath: Continuous Respiratory Behavior Recognition in the Wild via Acoustic Sensing on Smart Glasses
Kaiyi Guo (Shanghai Jiao Tong University, Shanghai, China)Qian Zhang (Shanghai Jiao Tong University, Shanghai, China)Dong Wang (Shanghai Jiao Tong University, Shanghai, China)
Monitoring the occurrence of abnormal respiratory symptoms provides critical support for respiratory health. However, there is still no unobtrusive and reliable method that can be used effectively in real-world settings. In this paper, we present EchoBreath, a combined passive and active acoustic sensing system for monitoring abnormal respiratory symptoms. EchoBreath makes novel use of the speaker and microphone under the frame of the glasses to emit ultrasonic waves and capture both passive sounds and echo profiles, which can effectively distinguish subject-aware behaviors from background noise. Furthermore, a lightweight neural network with a 'Null' class and open-set filtering mechanisms substantially improves real-world applicability by eliminating unrelated activities. Our experiments, involving 25 participants, demonstrate that EchoBreath can recognize 6 typical respiratory symptoms in a laboratory setting with an accuracy of 93.1%. Additionally, a semi-in-the-wild study with 10 participants further validates that EchoBreath can continuously monitor respiratory abnormalities under real-world conditions. We believe that EchoBreath can serve as an unobtrusive and reliable way to monitor abnormal respiratory symptoms.
3
"It Brought the Model to Life": Exploring the Embodiment of Multimodal I3Ms for People who are Blind or have Low Vision
Samuel Reinders (Monash University, Melbourne, Australia)Matthew Butler (Monash University, Melbourne, Australia)Kim Marriott (Monash University, Melbourne, Australia)
3D-printed models are increasingly used to provide people who are blind or have low vision (BLV) with access to maps, educational materials, and museum exhibits. Recent research has explored interactive 3D-printed models (I3Ms) that integrate touch gestures, conversational dialogue, and haptic vibratory feedback to create more engaging interfaces. Prior research with sighted people has found that imbuing machines with human-like behaviours, i.e., embodying them, can make them appear more lifelike, increasing social perception and presence. Such embodiment can increase engagement and trust. This work presents the first exploration into the design of embodied I3Ms and their impact on BLV engagement and trust. In a controlled study with 12 BLV participants, we found that I3Ms using specific embodiment design factors, such as haptic vibratory feedback and embodied, personified voices, led to an increased sense of liveliness and embodiment, as well as engagement, but had a mixed impact on trust.
3
SpeechCompass: Enhancing Mobile Captioning with Diarization and Directional Guidance via Multi-Microphone Localization
Artem Dementyev (Google Inc., Mountain View, California, United States)Dimitri Kanevsky (Google, Mountain View, California, United States)Samuel Yang (Google, Mountain View, California, United States)Mathieu Parvaix (Google Research, Mountain View, California, United States)Chiong Lai (Google, Mountain View, California, United States)Alex Olwal (Google Inc., Mountain View, California, United States)
Speech-to-text capabilities on mobile devices have proven helpful for hearing and speech accessibility, language translation, note-taking, and meeting transcripts. However, our foundational large-scale survey (n=263) shows that the inability to distinguish and indicate speaker direction makes them challenging in group conversations. SpeechCompass addresses this limitation through real-time, multi-microphone speech localization, where the direction of speech allows visual separation and guidance (e.g., arrows) in the user interface. We introduce efficient real-time audio localization algorithms and custom sound perception hardware, running on a low-power microcontroller with four integrated microphones, which we characterize in technical evaluations. Informed by a large-scale survey (n=494), we conducted an in-person study of group conversations with eight frequent users of mobile speech-to-text, who provided feedback on five visualization styles. The value of diarization and visualizing localization was consistent across participants, with everyone agreeing on the value and potential of directional guidance for group conversations.
3
iGripper: A Semi-Active Handheld Haptic VR Controller Based on Variable Stiffness Mechanism
Ke Shi (Southeast University, Nanjing, China)Tongshu Chen (Southeast University, Nanjing, China)Yichen Xiang (Southeast University, Nanjing, China)Ye Li (Southeast University, Nanjing, Jiangsu, China)Lifeng Zhu (Southeast University, Nanjing, Jiangsu, China)Aiguo Song (Southeast University, Nanjing, Jiangsu, China)
We introduce iGripper, a handheld haptic controller designed to render stiffness feedback for gripping and clamping both rigid and elastic objects in virtual reality. iGripper directly adjusts physical stiffness by using a small linear actuator to modify the spring’s position along a lever arm, with feedback force generated by the spring's reaction to the user's input. This enables iGripper to render stiffness from zero to any specified value, determined by the spring's inherent stiffness. Additionally, a blocking mechanism is designed to provide fully rigid feedback to enlarge the rendering range. Compared to active controllers, iGripper offers a broad range of force and stiffness feedback without requiring high-power actuators. Unlike many passive controllers, which provide only braking force, iGripper, as a semi-active controller, delivers controllable elastic force feedback. We present the iGripper’s design, performance evaluation, and user studies, comparing its realism with a commercial impedance-type grip device.
3
Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation
Jessica He (IBM Research, Yorktown Heights, New York, United States)Stephanie Houde (IBM Research, Cambridge, Massachusetts, United States)Justin D. Weisz (IBM Research AI, Yorktown Heights, New York, United States)
AI systems powered by large language models can act as capable assistants for writing and editing. In these tasks, the AI system acts as a co-creative partner, making novel contributions to an artifact-under-creation alongside its human partner(s). One question that arises in these scenarios is the extent to which AI should be credited for its contributions. We examined knowledge workers' views of attribution through a survey study (N=155) and found that they assigned different levels of credit across different contribution types, amounts, and initiative. Compared to a human partner, we observed a consistent pattern in which AI was assigned less credit for equivalent contributions. Participants felt that disclosing AI involvement was important and used a variety of criteria to make attribution judgments, including the quality of contributions, personal values, and technology considerations. Our results motivate and inform new approaches for crediting AI contributions to co-created work.
3
What Comes After Noticing?: Reflections on Noticing Solar Energy and What Came Next
Angella Mackey (Amsterdam University of Applied Sciences, Amsterdam, Netherlands)David N. G. McCallum (Rotterdam University of Applied Sciences, Rotterdam, Netherlands)Oscar Tomico (Eindhoven University of Technology, Eindhoven, Netherlands)Martijn de Waal (Amsterdam University of Applied Sciences, Amsterdam, Netherlands)
Many design researchers have been exploring what it means to take a more-than-human design approach in their practice. In particular, the technique of “noticing” has been explored as a way of intentionally opening a designer’s awareness to more-than-human worlds. In this paper we present autoethnographic accounts of our own efforts to notice solar energy. Through two studies we reflect on the transformative potential of noticing the more-than-human, and the difficulties in trying to sustain this change in oneself and one’s practice. We propose that noticing can lead to activating exiled capacities within the noticer, relational abilities that lie dormant in each of us. We also propose that emphasising sense-fullness in and through design can be helpful in the face of broader psychological or societal boundaries that block paths towards more relational ways of living with non-humans.
3
Since U Been Gone: Augmenting Context-Aware Transcriptions for Re-Engaging in Immersive VR Meetings
Geonsun Lee (University of Maryland, College Park, Maryland, United States)Yue Yang (Stanford University, Stanford, California, United States)Jennifer Healey (Adobe Research, San Jose, California, United States)Dinesh Manocha (University of Maryland, College Park, Maryland, United States)
Maintaining engagement in immersive meetings is challenging, particularly when users must catch up on missed content after disruptions. While transcription interfaces can help, table-fixed panels have the potential to distract users from the group, diminishing social presence, while avatar-fixed captions fail to provide past context. We present EngageSync, a context-aware avatar-fixed transcription interface that adapts based on user engagement, offering live transcriptions and LLM-generated summaries to enhance catching up while preserving social presence. We implemented a live VR meeting setup for a 12-participant formative study and elicited design considerations. In two user studies with small (3 avatars) and mid-sized (7 avatars) groups, EngageSync significantly improved social presence (p < .05) and time spent gazing at others in the group instead of the interface over table-fixed panels. Also, it reduced re-engagement time and increased information recall (p < .05) over avatar-fixed interfaces, with stronger effects in mid-sized groups (p < .01).
2
Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives
Meredith Ringel Morris (Google DeepMind, Seattle, Washington, United States)Jed R. Brubaker (University of Colorado Boulder, Boulder, Colorado, United States)
As AI systems quickly improve in both breadth and depth of performance, they lend themselves to creating increasingly powerful and realistic agents, including the possibility of agents modeled on specific people. We anticipate that within our lifetimes it may become common practice for people to create custom AI agents to interact with loved ones and/or the broader world after death; indeed, the past year has seen a boom in startups purporting to offer such services. We call these generative ghosts since such agents will be capable of generating novel content rather than merely parroting content produced by their creator while living. In this paper, we reflect on the history of technologies for AI afterlives, including current early attempts by individual enthusiasts and startup companies to create generative ghosts. We then introduce a novel design space detailing potential implementations of generative ghosts. We use this analytic framework to ground a discussion of the practical and ethical implications of various approaches to designing generative ghosts, including potential positive and negative impacts on individuals and society. Based on these considerations, we lay out a research agenda for the AI and HCI research communities to better understand the risk/benefit landscape of this novel technology to ultimately empower people who wish to create and interact with AI afterlives to do so in a beneficial manner.
2
HapticGen: Generative Text-to-Vibration Model for Streamlining Haptic Design
Youjin Sung (KAIST, Daejeon, Korea, Republic of)Kevin John (Arizona State University, Tempe, Arizona, United States)Sang Ho Yoon (KAIST, Daejeon, Korea, Republic of)Hasti Seifi (Arizona State University, Tempe, Arizona, United States)
Designing haptic effects is a complex, time-consuming process requiring specialized skills and tools. To support haptic design, we introduce HapticGen, a generative model designed to create vibrotactile signals from text inputs. We conducted a formative workshop to identify requirements for an AI-driven haptic model. Given the limited size of existing haptic datasets, we trained HapticGen on a large, labeled dataset of 335k audio samples using an automated audio-to-haptic conversion method. Expert haptic designers then used HapticGen's integrated interface to prompt and rate signals, creating a haptic-specific preference dataset for fine-tuning. We evaluated the fine-tuned HapticGen with 32 users, qualitatively and quantitatively, in an A/B comparison against a baseline text-to-audio model with audio-to-haptic conversion. Results show significant improvements in five haptic experience (e.g., realism) and system usability factors (e.g., future use). Qualitative feedback indicates HapticGen streamlines the ideation process for designers and helps generate diverse, nuanced vibrations.
2
Virtual Voyages: Evaluating the Role of Real-Time and Narrated Virtual Tours in Shaping User Experience and Memories
Lillian Maria Eagan (University of Otago, Dunedin, Otago, New Zealand)Jacob Young (University of Otago, Dunedin, Otago, New Zealand)Jesse Bering (University of Otago, Dunedin, Otago, New Zealand)Tobias Langlotz (University of Otago, Dunedin, Otago, New Zealand)
Immersive technologies are capable of transporting people to distant or inaccessible environments that they might not otherwise visit. Practitioners and researchers alike are discovering new ways to replicate and enhance existing tourism experiences using virtual reality, yet few controlled experiments have studied how users perceive virtual tours of real-world locations. In this paper we present an initial exploration of a new system for virtual tourism, measuring the effects of real-time experiences and storytelling on presence, place attachment, and user memories of the destination. Our results suggest that narrative plays an important role in inducing presence within and attachment to the destination, while livestreaming can further increase place attachment while providing flexible, tailored experiences. We discuss the design and evaluation of our system, including feedback from our tourism partners, and provide insights into current limitations and further opportunities for virtual tourism.
2
"You Go Through So Many Emotions Scrolling Through Instagram": How Teens Use Instagram To Regulate Their Emotions
Katie Davis (University of Washington, Seattle, Washington, United States)Rotem Landesman (University of Washington, Seattle, Washington, United States)Jina Yoon (University of Washington, Seattle, Washington, United States)JaeWon Kim (University of Washington, Seattle, Washington, United States)Daniela E. Munoz Lopez (University of Washington, Seattle, Washington, United States)Lucia Magis-Weinberg (University of Washington, Seattle, Washington, United States)Alexis Hiniker (University of Washington, Seattle, Washington, United States)
Prior work has documented various ways that teens use social media to regulate their emotions. However, little is known about what these processes look like on a moment-by-moment basis. We conducted a diary study to investigate how teens (N=57, mean age = 16.3 years) used Instagram to regulate their emotions. We identified three kinds of emotionally-salient drivers that brought teens to Instagram and two types of behaviors that impacted their emotional experiences on the platform. Teens described going to Instagram to escape, to engage, and to manage the demands of the platform. Once on Instagram, their primary behaviors consisted of mindless diversions and deliberate acts. Although teens reported many positive emotional responses, the variety, unpredictability, and habitual nature of their experiences revealed Instagram to be an unreliable tool for emotion regulation (ER). We present a model of teens’ ER processes on Instagram and offer design considerations for supporting adolescent emotion regulation.
2
Beyond Vacuuming: How Can We Exploit Domestic Robots’ Idle Time?
Yoshiaki Shiokawa (University of Bath, Bath, United Kingdom)Winnie Chen (University of Bath, Bath, United Kingdom)Aditya Shekhar Nittala (University of Calgary, Calgary, Alberta, Canada)Jason Alexander (University of Bath, Bath, United Kingdom)Adwait Sharma (University of Bath, Bath, United Kingdom)
We are increasingly adopting domestic robots (e.g., Roomba) that provide relief from mundane household tasks. However, these robots usually spend only a short time executing their specific task and remain idle for long periods. They typically possess advanced mobility and sensing capabilities, and therefore have significant potential applications beyond their designed use. Our work explores this untapped potential of domestic robots in ubiquitous computing, focusing on how they can improve and support modern lifestyles. We conducted two studies: an online survey (n=50) to understand current usage patterns of these robots within homes and an exploratory study (n=12) with HCI and HRI experts. Our thematic analysis revealed 12 key dimensions for developing interactions with domestic robots and outlined over 100 use cases, illustrating how these robots can offer proactive assistance and provide privacy. Finally, we implemented a proof-of-concept prototype to demonstrate the feasibility of reappropriating domestic robots for diverse ubiquitous computing applications.
2
Dreamcrafter: Immersive Editing of 3D Radiance Fields Through Flexible, Generative Inputs and Outputs
Cyrus Vachha (University of California, Berkeley, Berkeley, California, United States)Yixiao Kang (University of California, Berkeley, Berkeley, California, United States)Zach Dive (University of California, Berkeley, Berkeley, California, United States)Ashwat Chidambaram (University of California, Berkeley, Berkeley, California, United States)Anik Gupta (University of California, Berkeley, Berkeley, California, United States)Eunice Jun (University of California, Los Angeles, Los Angeles, California, United States)Bjoern Hartmann (UC Berkeley, Berkeley, California, United States)
Authoring 3D scenes is a central task for spatial computing applications. Competing visions for lowering existing barriers are (1) focus on immersive, direct manipulation of 3D content or (2) leverage AI techniques that capture real scenes (3D Radiance Fields such as NeRFs and 3D Gaussian Splatting) and modify them at a higher level of abstraction, at the cost of high latency. We unify the complementary strengths of these approaches and investigate how to integrate generative AI advances into real-time, immersive 3D Radiance Field editing. We introduce Dreamcrafter, a VR-based 3D scene editing system that: (1) provides a modular architecture to integrate generative AI algorithms; (2) combines different levels of control for creating objects, including natural language and direct manipulation; and (3) introduces proxy representations that support interaction during high-latency operations. We contribute empirical findings on control preferences and discuss how generative AI interfaces beyond text input enhance creativity in scene editing and world building.
2
Invisible Light Touch: Standing Balance Improvement by Mid-Air Haptic Feedback
Arinobu Niijima (NTT Corporation, Yokosuka, Kanagawa, Japan)Masato Shindo (NTT Corporation, Yokosuka, Kanagawa, Japan)Ryosuke Aoki (NTT Corporation, Yokosuka, Kanagawa, Japan)
Improving standing balance is critical for preventing falls and ensuring the well-being of older adults. In this paper, we present Invisible Light Touch (ILT), a mid-air haptic feedback application designed to improve standing balance by utilizing the light touch effect, a well-documented phenomenon in medical research. The light touch effect refers to improved balance when a person lightly touches a surface, such as a wall or handrail, with a force of 1 N or less. We replicate this effect utilizing focused ultrasound to create a tactile point in mid-air. When users interact with this invisible tactile point, they experience the light touch effect, which subsequently improves their balance. We conducted a pilot study with 29 participants and a user study with 25 older adults, evaluating the balance improvement by measuring the center of pressure trajectory. The results confirmed that standing balance improved significantly when using the ILT.
2
ProtoPCB: Reclaiming Printed Circuit Board E-waste as Prototyping Material
Jasmine Lu (University of Chicago, Chicago, Illinois, United States)Sai Rishitha Boddu (University of Chicago, Chicago, Illinois, United States)Pedro Lopes (University of Chicago, Chicago, Illinois, United States)
We propose an interactive tool that enables reusing printed circuit boards (PCB) as prototyping materials to implement new circuits — this extends the utility of PCBs rather than discards them as e-waste. To enable this, our tool takes a user’s desired circuit schematic and analyzes its components and connections to find methods of creating the user’s circuit on discarded PCBs (e.g., e-waste, old prototypes). In our technical evaluation, we utilized our tool across a diverse set of PCBs and input circuits to characterize how often circuits could be implemented on a different board, implemented with minor interventions (trace-cutting or bodge-wiring), or implemented on a combination of multiple boards — demonstrating how our tool assists with exhaustive matching tasks that a user would not likely perform manually. We believe our tool offers: (1) a new approach to prototyping with electronics beyond the limitations of breadboards and (2) a new approach to reducing e-waste during electronics prototyping.
2
Wordplay: Accessible, Multilingual, Interactive Typography
Amy J. Ko (University of Washington, Seattle, Washington, United States)Carlos Aldana Lira (Middle Tennessee State University, Murfreesboro, Tennessee, United States)Isabel Amaya (University of Washington, Seattle, Washington, United States)
Educational programming languages (EPLs) are rarely designed to be both accessible and multilingual. We describe a 30-month community-engaged case study to surface design challenges at this intersection, creating Wordplay, an accessible, multilingual platform for youth to program interactive typography. Wordplay combines functional programming, multilingual text, multimodal editors, time travel debugging, and teacher- and youth-centered community governance. Across five 2-hour focus group sessions, a group of 6 multilingual students and teachers affirmed many of the platform’s design choices, but reinforced that design at the margins was unfinished, including support for limited internet access, decade-old devices, and high turnover of device use by students with different access, language, and attentional needs. The group also highlighted open source platforms like GitHub as unsuitable for engaging youth. These findings suggest that EPLs that are both accessible and language-inclusive are feasible, but that there remain many design tensions between language design, learnability, accessibility, culture, and governance.
2
PCB Renewal: Iterative Reuse of PCB Substrates for Sustainable Electronic Making
Zeyu Yan (University Of Maryland, College Park, Maryland, United States)Advait Vartak (University of Maryland, College Park, Maryland, United States)Jiasheng Li (University of Maryland, College Park, Maryland, United States)Zining Zhang (University of Maryland, College Park, Maryland, United States)Huaishu Peng (University of Maryland, College Park, Maryland, United States)
PCB (printed circuit board) substrates are often single-use, leading to material waste in electronics making. We introduce PCB Renewal, a novel technique that "erases" and "reconfigures" PCB traces by selectively depositing conductive epoxy onto outdated areas, transforming isolated paths into conductive planes that support new traces. We present the PCB Renewal workflow, evaluate its electrical performance and mechanical durability, and model its sustainability impact, including material usage, cost, energy consumption, and time savings. We develop a software plug-in that guides epoxy deposition, generates updated PCB profiles, and calculates resource usage. To demonstrate PCB Renewal’s effectiveness and versatility, we repurpose a single PCB across four design iterations spanning three projects: a camera roller, a WiFi radio, and an ESPboy game console. We also show how an outsourced double-layer PCB can be reconfigured, transforming it from an LED watch to an interactive cat toy. The paper concludes with limitations and future directions.
2
FlexEar-Tips: Shape-Adjustable Ear Tips Using Pressure Control
Takashi Amesaka (Keio University, Yokohama, Japan)Takumi Yamamoto (Keio University, Yokohama, Japan)Hiroki Watanabe (Future University Hakodate, Hakodate, Japan)Buntarou Shizuki (University of Tsukuba, Tsukuba, Ibaraki, Japan)Yuta Sugiura (Keio University, Yokohama, Japan)
We introduce FlexEar-Tips, a dynamic ear tip system designed for the next-generation hearables. The ear tips are controlled by an air pump and solenoid valves, enabling size adjustments for comfort and functionality. FlexEar-Tips includes an air pressure sensor to monitor ear tip size, allowing it to adapt to environmental conditions and user needs. In the evaluation, we conducted a preliminary investigation of the size control accuracy and the minimum amount of variability of haptic perception in the user's ear. We then evaluated the user's ability to identify patterns in the haptic notification system, the impact on the music listening experience, the relationship between the size of the ear tips and the sound localization ability, and the impact on the reduction of humidity in the ear using a model. We proposed new interaction modalities for adaptive hearables and discussed health monitoring, immersive auditory experiences, haptics notifications, biofeedback, and sensing.
2
SkinHaptics: Exploring Skin Softness Perception and Virtual Body Embodiment Techniques to Enhance Self-Haptic Interactions
Jungeun Lee (Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of)Minha Jeon (Kyung Hee University, Yongin, Korea, Republic of)Jinyoung Lee (Kyung Hee University, Suwon, Korea, Republic of)Seungmoon Choi (Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of)Seungjae Oh (Kyung Hee University, Yongin, Korea, Republic of)
Providing haptic feedback for soft, deformable objects is challenging, requiring complex mechanical hardware combined with modeling and rendering software. As an alternative, we advance the concept of self-haptics, where the user's own body delivers physical feedback, to convey dynamically varying softness in VR. Skin can exhibit different levels of contact softness by altering the biomechanical state of the body. We propose SkinHaptics, a device-free approach that changes the states of musculoskeletal structures and virtual hand-object representations. In this study, we conduct three experiments to demonstrate SkinHaptics. Using the same scale, we measure skin softness across various hand poses and contact points and evaluate the just noticeable difference in skin softness. We investigate the effect of hand-object representations on self-haptic interactions. Our findings indicate that the visual representations have a significant influence on the embodiment of a self-haptic hand, and the degree of the hand embodiment strongly affects the haptic experience.
2
TogetherReflect: Supporting Emotional Expression in Couples Through a Collaborative Virtual Reality Experience
Nadine Wagener (University of Bremen, Bremen, Germany)Daniel Christian Albensoeder (Universität Bremen, Bremen, Germany)Leon Reicherts (University College London, London, United Kingdom)Paweł W. Woźniak (TU Wien, Vienna, Austria)Yvonne Rogers (UCL, London, United Kingdom)Jasmin Niess (University of Oslo, Oslo, Norway)
Navigating emotional conflicts within relationships can be challenging. People often struggle to express their emotions during a conflict, which can lead to misunderstandings and unresolved feelings. To facilitate deeper emotional expression, we developed TogetherReflect, a multi-user Virtual Reality (VR) experience designed for couples. Partners first draw their emotions related to a shared conflict in VR, allowing for individual expression and self-reflection. They then invite each other into their drawings to discuss their feelings, before drawing together on a shared canvas to reaffirm their love and commitment. Throughout this process, TogetherReflect provides prompts and guidance, aiming to foster self-reflection and communication skills. We conducted an exploratory evaluation with 10 couples (n=20). Our findings indicate that TogetherReflect deepens personal emotional insights, fosters mutual understanding, and strengthens relational bonds. We highlight the potential of guided VR experiences to transform conflict resolution in intimate relationships and offer design considerations for future development.
2
FIP: Endowing Robust Motion Capture on Daily Garment by Fusing Flex and Inertial Sensors
Ruonan Zheng (Xiamen University, Xiamen, China)Jiawei Fang (Xiamen University, Xiamen, China)Yuan Yao (School of Informatics, Xiamen University, Xiamen, Fujian, China)Xiaoxia Gao (Xiamen University, Xiamen, Fujian, China)Chengxu Zuo (School of Informatics, Xiamen, Fujian, China)Shihui Guo (Software School, Xiamen, Fujian, China)Yiyue Luo (University of Washington, Seattle, Washington, United States)
What if our clothes could capture our body motion accurately? This paper introduces Flexible Inertial Poser (FIP), a novel motion-capturing system using daily garments with two elbow-attached flex sensors and four Inertial Measurement Units (IMUs). To address the inevitable sensor displacements in loose wearables which degrade joint tracking accuracy significantly, we identify the distinct characteristics of the flex and inertial sensor displacements and develop a Displacement Latent Diffusion Model and a Physics-informed Calibrator to compensate for sensor displacements based on such observations, resulting in a substantial improvement in motion capture accuracy. We also introduce a Pose Fusion Predictor to enhance multimodal sensor fusion. Extensive experiments demonstrate that our method achieves robust performance across varying body shapes and motions, significantly outperforming SOTA IMU approaches with a 19.5% improvement in angular error, a 26.4% improvement in elbow angular error, and a 30.1% improvement in positional error. FIP opens up opportunities for ubiquitous human-computer interactions and diverse interactive applications such as Metaverse, rehabilitation, and fitness analysis. Our project page can be seen at https://fangjw-0722.github.io/FIP.github.io/
2
MotionBlocks: Modular Geometric Motion Remapping for More Accessible Upper Body Movement in Virtual Reality
Johann Wentzel (University of Waterloo, Waterloo, Ontario, Canada)Alessandra Luz (University of Waterloo, Waterloo, Ontario, Canada)Martez E. Mott (Microsoft Research, Redmond, Washington, United States)Daniel Vogel (University of Waterloo, Waterloo, Ontario, Canada)
Movement-based spatial interaction in VR can present significant challenges for people with limited mobility, particularly due to the mismatch between the upper body motion a VR app requires and the user's capabilities. We describe MotionBlocks, an approach which enables 3D spatial input with smaller motions or simpler input devices using modular geometric motion remapping. A formative study identifies common accessibility issues within VR motion design, and informs a design language of VR motions that fall within simple geometric primitives. These 3D primitives enable collapsing spatial or non-spatial input into a normalized input vector, which is then expanded into a second 3D primitive representing larger, more complex 3D motions. An evaluation with people with mobility limitations found that using geometric primitives for highly customized upper body input remapping reduced physical workload, temporal workload, and perceived effort.
2
Archaeological Gameworld Affordances: A Grounded Theory of How Players Interpret Environmental Storytelling
Florence Smith Nicholls (Queen Mary University of London, London, United Kingdom)Michael Cook (King's College London, London, United Kingdom)
Environmental storytelling is a design technique commonly used to convey narrative through assemblages of content in video games. To date there has been limited empirical work investigating how and on what basis players form interpretations about game environments. We report on a study in which participants (N=202) played a game about exploring a procedurally generated ruined village and were then surveyed on their interpretations. We draw on methods and theory from archaeology - a field that specialises in the interpretation of material remains - to support a grounded theory analysis of the survey responses, from which we form the theory of an archaeological gameworld mental model. Our study draws a novel link between affordance theory, archaeological knowledge production and game systems, and contributes new theoretical concepts that can be applied to procedurally generated and handcrafted methods in game design, narrative design and game preservation.
2
From Operation to Cognition: Automatic Modeling Cognitive Dependencies from User Demonstrations for GUI Task Automation
Yiwen Yin (Tsinghua University, Beijing, China)Yu Mei (Tsinghua University, Beijing, China)Chun Yu (Tsinghua University, Beijing, China)Toby Jia-Jun Li (University of Notre Dame, Notre Dame, Indiana, United States)Aamir Khan Jadoon (Tsinghua University, Beijing, China)Sixiang Cheng (Tsinghua University, Beijing, China)Weinan Shi (Tsinghua University, Beijing, China)Mohan Chen (Tsinghua University, Beijing, China)Yuanchun Shi (Tsinghua University, Beijing, China)
Traditional Programming by Demonstration (PBD) systems primarily automate tasks by recording and replaying operations on Graphical User Interfaces (GUIs), without fully considering the cognitive processes behind operations. This limits their ability to generalize tasks with interdependent operations to new contexts (e.g. collecting and summarizing introductions depending on different search keywords from varied websites). We propose TaskMind, a system that automatically identifies the semantics of operations, and the cognitive dependencies between operations from demonstrations, building a user-interpretable task graph. Users modify this graph to define new task goals, and TaskMind executes the graph to dynamically generalize new parameters for operations, with the integration of Large Language Models (LLMs). We compared TaskMind with a baseline end-to-end LLM which automates tasks from demonstrations and natural language commands, without task graph. In studies with 20 participants on both predefined and customized tasks, TaskMind significantly outperforms the baseline in both success rate and controllability.
2
Living Bento: Heartbeat-Driven Noodles for Enriched Dining Dynamics
Weijen Chen (Keio University Graduate School of Media Design, Yokohama, Japan)Qingyuan Gao (Keio University Graduate School of Media Design, Yokohama, Japan)Hu Zheng (Keio University Graduate School of Media Design, Yokohama, Japan)Kouta Minamizawa (Keio University Graduate School of Media Design, Yokohama, Japan)Yun Suen Pai (University of Auckland, Auckland, New Zealand)
To enhance focused eating and dining socialization, previous Human-Food Interaction research has indicated that external devices can support these dining objectives and immersion. However, methods that focus on the food itself and the diners themselves have remained underdeveloped. In this study, we integrated biofeedback with food, utilizing diners' heart rates as a source of the food's appearance to promote focused eating and dining socialization. By employing LED lights, we dynamically displayed diners' real-time physiological signals through the transparency of the food. Results revealed significant effects on various aspects of dining immersion, such as awareness perceptions, attractiveness, attentiveness to each bite, and emotional bonds with the food. Furthermore, to promote dining socialization, we established a “Sharing Bio-Sync Food” dining system to strengthen emotional connections between diners. Based on these findings, we developed tableware that integrates biofeedback into the culinary experience.
2
Reshaping Human-animal Relationships: Exploring Lemur and Human Enrichment through Smell, Sound, and Sight
Jiaqi Wang (University of Glasgow, Glasgow, United Kingdom)Stephen Anthony Brewster (University of Glasgow, Glasgow, United Kingdom)Ilyena Hirskyj-Douglas (University of Glasgow, Glasgow, United Kingdom)
Zoos aim to uphold high animal welfare standards while educating the public, yet the direct interactions that attract visitors can negatively impact the animals. Exploring technological solutions to reshape this human-animal relationship in zoos, we developed a novel device allowing lemurs to trigger olfactory, auditory, and visual stimuli in their enclosure. Over 63 days, lemurs engaged most with the multimodal stimuli and least with the visual stimuli. We then created a similar device for zoo visitors to educate them about lemurs and their stimuli choices. Deploying it for 20 days under four conditions (no devices, lemur-only, visitor-only, and both devices), we examined the impact on visitor behaviour, education, empathy, and experience. From 968 questionnaires and 25,782 visitors, we found that using technology on the lemur and visitor sides jointly significantly enhanced all measured visitor factors, even if the visitors did not directly interact with the device or observe lemurs using theirs. This approach supports long-term conservation and visitor education efforts.
2
Designing Urban Noticing Probes for Community Animals and Cohabitation in Türkiye
Sena Cucumak (Futurewell: CoCreation and Wellbeing Lab, Media and Visual Arts, Koç University, Istanbul, Turkey)Ozge Subasi (Koc University, Istanbul, Turkey)
Design tools and probing, in particular, have long offered critical perspectives in HCI, broadening the understanding of who benefits from design. Further, the designerly implementation of critical perspectives and theories using tools such as probes can support HCI designers with theoretically informed dialogical tools. However, these approaches and processes are largely designed to understand human interactions. In this paper, we introduce urban noticing probes developed to decentre the humans in multispecies interactions by following the arts of noticing theory: noticing into, for, and through within urban relationality, focusing on the case of community animals in Türkiye. Our goal is to create a better understanding of the functions of "urban noticing probes" for HCI designers and researchers to (1) gain relational and reflexive awareness, (2) identify intervention spaces for multispecies cohabitation, and (3) explore future design directions for urban noticing probes.
2
Sonic Delights: Exploring the Design of Food as An Auditory-Gustatory Interface
Jialin Deng (Department of Human-Centred Computing, Monash University, Melbourne, Victoria, Australia)Yinyi Li (Monash University, Melbourne, Victoria, Australia)Hongyue Wang (Monash University, Melbourne, Victoria, Australia)Ziqi Fang (Imperial College London, London, United Kingdom)Florian ‘Floyd’ Mueller (Monash University, Melbourne, VIC, Australia)
While interest in blending sound with culinary experiences has grown in Human-Food Interaction (HFI), the significance of food’s material properties in shaping sound-related interactions has largely been overlooked. This paper explores the opportunity to enrich the HFI experience by treating food not merely as passive nourishment but as an integral material in computational architecture with input/output capabilities. We introduce “Sonic Delights,” where food is a comestible auditory-gustatory interface to enable users to interact with and consume digital sound. This concept redefines food as a conduit for interactive auditory engagement, shedding light on the untapped multisensory possibilities of merging taste with digital sound. An associated study allowed us to articulate design insights for forthcoming HFI endeavors that seek to weave food into multisensory design, aiming to further the integration of digital interactivity with the culinary arts.
2
Exploring Modular Prompt Design for Emotion and Mental Health Recognition
Minseo Kim (Hankuk University of Foreign Studies, Seoul, Korea, Republic of)Taemin Kim (Hansung University, Seoul, Korea, Republic of)Thu Hoang Anh Vo (Korea Advanced Institute of Science & Technology, Daejeon, Korea, Republic of)Yugyeong Jung (KAIST, Daejeon, Korea, Republic of)Uichin Lee (KAIST, Daejeon, Korea, Republic of)
Recent advances in large language models (LLMs) offer human-like capabilities for comprehending emotions and mental states. Prior studies have explored diverse prompt engineering techniques for improving classification performance, but there is a lack of analysis of the prompt design space and the impact of each component. To bridge this gap, we conduct a qualitative thematic analysis of existing prompts for emotion and mental health classification tasks to define the key components of the prompt design space. We then evaluate the impact of major prompt components, such as persona and task instruction, on classification performance using four LLMs and five datasets. Modular prompt design offers new insights into examining performance variability as well as promoting transparency and reproducibility in LLM-based tasks within health and well-being intervention systems.
2
Xstrings: 3D Printing Cable-Driven Mechanism for Actuation, Deformation, and Manipulation
Jiaji Li (MIT, Cambridge, Massachusetts, United States)Shuyue Feng (Zhejiang University, Hangzhou, China)Maxine Perroni-Scharf (MIT, Cambridge, Massachusetts, United States)Yujia Liu (Tsinghua University, Beijing, China)Emily Guan (Pratt Institute, Brooklyn, New York, United States)Guanyun Wang (Zhejiang University, Hangzhou, China)Stefanie Mueller (MIT CSAIL, Cambridge, Massachusetts, United States)
In this paper, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we present a design tool that allows users to embed cable-driven mechanisms into object geometries based on their desired interactions by automatically placing joints and cables inside the object. To assess our system, we investigate the effect of printing parameters on the strength of Xstrings objects and the extent to which the interactions are repeatable without cable breakage. We demonstrate the application potential of Xstrings through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
2
Attracting Fingers with Waves: Potential Fields Using Active Lateral Forces Enhance Touch Interactions
Zhaochong Cai (Delft University of Technology, Delft, Netherlands)David Abbink (Delft University of Technology, Delft, Netherlands)Michael Wiertlewski (Delft University of Technology, Delft, Netherlands)
Touchscreens and touchpads offer intuitive interfaces but provide limited tactile feedback, usually just mechanical vibrations. These devices lack continuous feedback to guide users’ fingers toward specific directions. Recent innovations in surface haptic devices, however, leverage ultrasonic traveling waves to create active lateral forces on a bare fingertip. This paper investigates the effects and design possibilities of active force feedback in touch interactions by rendering artificial potential fields on a touchpad. Three user studies revealed that: (1) users perceived attractive and repulsive fields as bumps and holes with similar detection thresholds; (2) step-wise force fields improved targeting by 22.9% compared to friction-only methods; and (3) active force fields effectively communicated directional cues to the users. Several applications were tested, with user feedback favoring this approach for its enhanced tactile experience, added enjoyment, realism, and ease of use.
2
This Game SUX: Why & How to Design Sh@*!y User Experiences
Michelle V. Cormier (Monash University, Clayton, Victoria, Australia)Shano Liang (Worcester Polytechnic Institute, Worcester, Massachusetts, United States)Bill Hamilton (New Mexico State University, Las Cruces, New Mexico, United States)Nicolas LaLone (Rochester Institute of Technology, Rochester, New York, United States)Rose Bohrer (Worcester Polytechnic Institute, Worcester, Massachusetts, United States)Phoebe O. Toups Dugas (Monash University, Clayton, Victoria, Australia)
While normative – "good" – game design and user experiences have been established, we look to games that challenge those notions. Intentional frustration and failure can be worthwhile. Through a reflexive thematic analysis of 31 games we identify how intentionally non-normative design choices lead to meaningful experiences. Working within the established Mechanics Dynamics Aesthetics (MDA) Game Design Framework, we lay out themes to design Shitty User Experiences (SUX). We contribute SUX MDA themes for designers and researchers to counter the status quo and identify new forms of play and interaction.
2
An Approach to Elicit Human-Understandable Robot Expressions to Support Human-Robot Interaction
Jan Leusmann (LMU Munich, Munich, Germany)Steeven Villa (LMU Munich, Munich, Germany)Thomas Liang (University of Illinois Urbana-Champaign, Champaign, Illinois, United States)Chao Wang (Honda Research Institute Europe, Offenbach/Main, Germany)Albrecht Schmidt (LMU Munich, Munich, Germany)Sven Mayer (LMU Munich, Munich, Germany)
Understanding the intentions of robots is essential for natural and seamless human-robot collaboration. Ensuring that robots have means for non-verbal communication is a basis for intuitive and implicit interaction. For this, we describe an approach to elicit and design human-understandable robot expressions. We outline the approach in the context of non-humanoid robots. We paired human mimicking and enactment with research from gesture elicitation in two phases: first, to elicit expressions, and second, to ensure they are understandable. We present an example application through two studies (N=16 & N=260) of our approach to elicit expressions for a simple 6-DoF robotic arm. We show that the approach enabled us to design robot expressions that signal curiosity and interest in getting attention. Our main contribution is an approach to generate and validate understandable expressions for robots, enabling more natural human-robot interaction.
2
User-defined Co-speech Gesture Design with Swarm Robots
Minh Duc Dang (Simon Fraser University, Burnaby, British Columbia, Canada)Samira Pulatova (Simon Fraser University, Burnaby, British Columbia, Canada)Lawrence H. Kim (Simon Fraser University, Burnaby, British Columbia, Canada)
Non-verbal signals, including co-speech gestures, play a vital role in human communication by conveying nuanced meanings beyond verbal discourse. While researchers have explored co-speech gestures in human-like conversational agents, limited attention has been given to non-humanoid alternatives. In this paper, we propose using swarm robotic systems as conversational agents and introduce a foundational set of swarm-based co-speech gestures, elicited from non-technical users and validated through an online study. This work outlines the key software and hardware requirements to advance research in co-speech gesture generation with swarm robots, contributing to the future development of social robotics and conversational agents.
2
SpatIO: Spatial Physical Computing Toolkit Based on Extended Reality
Seung Hyeon Han (KAIST, Daejeon, Korea, Republic of)Yeeun Han (Department of Industrial Design, KAIST, Daejeon, Korea, Republic of)Kyeongho Park (KAIST, Daejeon, Korea, Republic of)Sangjun Lee (KAIST, Daejeon, Korea, Republic of)Woohun Lee (KAIST, Daejeon, Korea, Republic of)
Proper placement of sensors and actuators is one of the key factors when designing spatial and proxemic interactions. However, current physical computing tools do not effectively support placing components in three-dimensional space, often forcing designers to build and test prototypes without precise spatial configuration. To address this, we propose the concept of spatial physical computing and present SpatIO, an XR-based physical computing toolkit that supports a continuous end-to-end workflow. SpatIO consists of three interconnected subsystems: SpatIO Environment for composing and testing prototypes with virtual sensors and actuators, SpatIO Module for converting virtually placed components into physical ones, and SpatIO Code for authoring interactions with spatial visualization of data flow. Through a comparative user study with 20 designers, we found that SpatIO significantly altered workflow order, encouraged broader exploration of component placement, enhanced spatial correlation between code and components, and promoted in-situ bodily testing.
2
Designing Biofeedback Board Games: The Impact of Heart Rate on Player Experience
Joseph Tu (University of Waterloo, Waterloo, Ontario, Canada)Eugene Kukshinov (University of Waterloo, Waterloo, Ontario, Canada)Reza Hadi Mogavi (University of Waterloo, Waterloo, Ontario, Canada)Derrick M. Wang (University of Waterloo, Waterloo, Ontario, Canada)Lennart E. Nacke (University of Waterloo, Waterloo, Ontario, Canada)
Biofeedback provides a unique opportunity to intensify tabletop gameplay. It permits new play styles through digital integration while keeping the tactile appeal of physical components. However, integrating biofeedback systems, like heart rate (HR), into game design needs to be better understood in the literature and still needs to be explored in practice. To bridge this gap, we employed a Research through Design (RtD) approach. This included (1) gathering insights from enthusiast board game designers (n = 10), (2) conducting two participatory design workshops (n = 20), (3) prototyping game mechanics with experts (n = 5), and (4) developing the game prototype artifact One Pulse: Treasure Hunter’s. We identify practical design implications for incorporating biofeedback, particularly heart rate, into tabletop games. Thus, we contribute to the field by presenting design trade-offs for incorporating HR into board games, offering valuable insights for HCI researchers and game designers.
2
Trusting Tracking: Perceptions of Non-Verbal Communication Tracking in Videoconferencing
Carlota Vazquez Gonzalez (King's College London, London, United Kingdom)Timothy Neate (King's College London, London, United Kingdom)Rita Borgo (King's College London, London, England, United Kingdom)
Videoconferencing is integral to modern work and living. Recently, technologists have sought to leverage data captured (e.g., from cameras and microphones) to augment communication. This might mean capturing communication information about verbal (e.g., speech, chat messages) or non-verbal exchanges (e.g., body language, gestures, tone of voice) and using this to mediate, and potentially improve, communication. However, such tracking has implications for user experience and raises wider concerns (e.g., privacy). To design tools which account for user needs and preferences, this study investigates perspectives on communication tracking through a global survey and interviews, exploring how daily behaviours and the impact of specific features influence user perspectives. We examine user preferences on non-verbal communication tracking, preferred methods of how this information is conveyed, and to whom this should be communicated. Our findings aim to guide the development of non-verbal communication tools that augment videoconferencing while prioritising user needs.
2
"Grab the Chat and Stick It to My Wall": Understanding How Social VR Streamers Bridge Immersive VR Experiences with Streaming Audiences Outside VR
Yang Hu (Clemson University, Clemson, South Carolina, United States)Guo Freeman (Clemson University, Clemson, South Carolina, United States)Ruchi Panchanadikar (Clemson University, Clemson, South Carolina, United States)
Social VR platforms are increasingly transforming online social spaces by enhancing embodied and immersive social interactions within VR. However, how social VR users also share their activities outside the social VR platform, such as on 2D live streaming platforms, is an increasingly popular yet understudied phenomenon that blends social VR and live streaming research. Through 17 interviews with experienced social VR streamers, we unpack the innovative strategies streamers use to further blur the boundary between VR and non-VR spaces to engage their audiences, as well as the potential limitations of those strategies. We add new insights into how social VR streamers transcend traditional 2D streamer-audience engagement, extending current understandings of cross-reality interactions. Grounded in these insights, we propose design implications to better support more complicated cross-reality dynamics in social VR streaming while mitigating potential tensions, in hopes of achieving more inclusive, engaging, and secure cross-reality environments in the future.
2
DobbyEar: Inducing Body Illusion of Ear Deformation with Haptic Retargeting
Han Shi (Southern University of Science and Technology, Shenzhen, China)Seungwoo Je (SUSTech, Shenzhen, China)
The use of haptic and visual stimuli to create body illusions and enhance body ownership of virtual avatars in virtual reality (VR) has been extensively studied in psychology and Human-Computer Interaction (HCI). However, previous studies have relied on mechanical devices or corresponding proxies to provide haptic feedback. In this paper, we applied haptic retargeting to induce body illusions by redirecting users’ hand movements, altering their perception of the shape of body parts when touched. Our technique allows for more precise and complex deformations. We implemented a mapping of the ear’s contour, thereby creating illusions of different ear shapes, such as elf ears and dog ears. To determine the scope of retargeting, we conducted a user study to identify the maximum tolerable deviation angle for virtual ears. Subsequently, we explored the impact of haptic retargeting on body ownership of virtual avatars.
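The core retargeting idea can be sketched as a standard body-warping interpolation: the virtual hand is progressively offset toward the deformed virtual ear as the physical hand approaches the real one, so touching the real ear lands on the virtual shape. This is a generic sketch of that technique with made-up coordinates, not DobbyEar's implementation.

```python
# Sketch of a standard body-warping retargeting scheme, not DobbyEar's exact code.
# The virtual hand is gradually offset so that reaching the physical ear
# coincides with touching the deformed virtual ear.
import numpy as np

def retarget_hand(hand, physical_target, virtual_target, start):
    """Return the virtual hand position given the tracked physical hand position."""
    hand, physical_target, virtual_target, start = map(
        np.asarray, (hand, physical_target, virtual_target, start))
    total = np.linalg.norm(physical_target - start)
    progress = 1.0 - np.linalg.norm(physical_target - hand) / total  # 0 at start, 1 at contact
    progress = float(np.clip(progress, 0.0, 1.0))
    # Blend in the offset between virtual and physical targets as the hand closes in.
    return hand + progress * (virtual_target - physical_target)

# Physical ear tip vs. an "elf ear" tip displaced 3 cm up and 2 cm back.
print(retarget_hand(hand=[0.0, 0.0, 0.3], physical_target=[0.0, 0.1, 0.0],
                    virtual_target=[0.0, 0.13, -0.02], start=[0.0, 0.0, 0.5]))
```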
2
OnomaCap: Making Non-speech Sound Captions Accessible and Enjoyable through Onomatopoeic Sound Representation
JooYeong Kim (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)Jin-Hyuk Hong (Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of)
Non-speech sounds play an important role in setting the mood of a video and aiding comprehension. However, current non-speech sound captioning practices focus primarily on sound categories, which fails to provide a rich sound experience for d/Deaf and hard-of-hearing (DHH) viewers. Onomatopoeia, which succinctly captures expressive sound information, offers a potential solution but remains underutilized in non-speech sound captioning. This paper investigates how onomatopoeia benefits DHH audiences in non-speech sound captioning. We collected 7,962 sound-onomatopoeia pairs from listeners and developed a sound-onomatopoeia model that automatically transcribes sounds into onomatopoeic descriptions indistinguishable from human-generated ones. A user evaluation of 25 DHH participants using the model-generated onomatopoeia demonstrated that onomatopoeia significantly improved their video viewing experience. Participants most favored captions with onomatopoeia and category, and expressed a desire to see such captions across genres. We discuss the benefits and challenges of using onomatopoeia in non-speech sound captions, offering insights for future practices.
2
Over the Mouse: Navigating across the GUI with Finger-Lifting Operation Mouse
YoungIn Kim (School of Computing, KAIST, Daejeon, Korea, Republic of)Yohan Yun (KAIST, Daejeon, Korea, Republic of)Taejun Kim (School of Computing, KAIST, Daejeon, Korea, Republic of)Geehyuk Lee (School of Computing, KAIST, Daejeon, Korea, Republic of)
Modern GUIs often have a hierarchical structure, i.e., the z-axis of the GUI interaction space. However, conventional mice do not support effective navigation along the z-axis, leading to increased physical movements and cognitive load. To address this inefficiency, we present the OtMouse, a novel mouse that supports finger-lifting operations by detecting finger height through proximity sensors embedded beneath the mouse buttons, and the 'Over the Mouse' (OtM) interface, a set of interaction techniques along the z-axis of the GUI interaction space using the OtMouse. Initially, we evaluated the performance of finger-lifting operations (n = 8) with the OtMouse for two- and three-level lifting discrimination tasks. Subsequently, we conducted a user study (n = 16) to compare the usability of the OtM interface and a traditional mouse interface for three representative tasks: 'Context Switch,' 'Video Preview,' and 'Map Zooming.' The results showed that the OtM interface was both qualitatively and quantitatively superior to a traditional mouse in the Context Switch and Video Preview tasks. This research contributes to the ongoing efforts to enhance mouse-based GUI navigation experiences.
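A minimal sketch of the sensing side, assuming hypothetical height thresholds: the proximity reading under the button is quantized into lift levels, with a little hysteresis so the level does not flicker at the boundaries.

```python
# Hypothetical sketch of discriminating finger-lift levels from a proximity
# sensor under a mouse button, as in the OtMouse idea. Thresholds (in mm) and
# hysteresis are illustrative, not values from the paper.
def classify_lift(height_mm: float, previous_level: int,
                  thresholds=(3.0, 10.0), hysteresis=1.0) -> int:
    """Map finger height to level 0 (resting), 1 (low hover), or 2 (high hover)."""
    low, high = thresholds
    # Shift each boundary toward the previous level to suppress jitter around it.
    if previous_level >= 1:
        low -= hysteresis
    if previous_level == 2:
        high -= hysteresis
    if height_mm < low:
        return 0
    if height_mm < high:
        return 1
    return 2

level = 0
for h in (0.5, 4.0, 9.5, 12.0, 9.5):
    level = classify_lift(h, level)
    print(level)  # prints 0, 1, 1, 2, 2 across the five readings
```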
2
BudsID: Mobile-Ready and Expressive Finger Identification Input for Earbuds
Jiwan Kim (KAIST, Daejeon, Korea, Republic of)Mingyu Han (UNIST, Ulsan, Korea, Republic of)Ian Oakley (KAIST, Daejeon, Korea, Republic of)
Wireless earbuds are an appealing platform for wearable computing on-the-go. However, their small size and out-of-view location mean they support only a limited set of inputs. We propose finger identification input on earbuds as a novel technique to resolve these problems. This technique associates touches by different fingers with different responses. To enable it on earbuds, we adapted prior work on smartwatches to develop a wireless earbud featuring a magnetometer that detects fields from a magnetic ring. A first study reveals participants achieve rapid, precise earbud touches with different fingers, even while mobile (time: 0.98s, errors: 5.6%). Furthermore, touching fingers can be accurately classified (96.9%). A second study shows strong performance with a more expressive technique involving multi-finger double-taps (inter-touch time: 0.39s, errors: 2.8%) while maintaining high accuracy (94.7%). We close by exploring and evaluating the design of earbud finger identification applications and demonstrating the feasibility of our system on low-resource devices.
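As an illustration of how finger identification from a magnetic ring might work, here is a nearest-centroid sketch over 3-axis magnetometer readings; the calibration vectors are invented for the example, and the paper's actual classifier may differ.

```python
# Illustrative nearest-centroid classifier for finger identification from a
# 3-axis magnetometer reading, in the spirit of BudsID. Calibration vectors
# are made up for the example.
import numpy as np

CALIBRATION = {               # mean magnetic field (µT) measured per touching finger
    "thumb":  np.array([12.0,  3.0, -8.0]),
    "index":  np.array([25.0, -5.0,  2.0]),
    "middle": np.array([18.0, 10.0,  6.0]),
}

def identify_finger(sample: np.ndarray) -> str:
    """Return the finger whose calibrated field is closest to the sample."""
    return min(CALIBRATION, key=lambda f: np.linalg.norm(CALIBRATION[f] - sample))

print(identify_finger(np.array([24.0, -4.0, 1.0])))  # -> "index"
```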
2
"I want to think like an SLP": A Design Exploration of AI-Supported Home Practice in Speech Therapy
Aayushi Dangol (University of Washington, Seattle, Washington, United States)Aaleyah Lewis (University of Washington, Seattle, Washington, United States)Hyewon Suh (University of Washington, Seattle, Washington, United States)Xuesi Hong (University of Washington, Seattle, Washington, United States)Hedda Meadan (University of North Carolina at Charlotte, Charlotte, North Carolina, United States)James Fogarty (University of Washington, Seattle, Washington, United States)Julie A. Kientz (University of Washington, Seattle, Washington, United States)
Parents of children in speech therapy play a crucial role in delivering consistent, high-quality home practice, which is essential for helping children generalize new speech skills to everyday situations. However, this responsibility is often complicated by uncertainties in implementing therapy techniques and keeping children engaged. In this study, we explore how varying levels of AI oversight can provide informational, emotional, and practical support to parents during home speech therapy practice. Through semi-structured interviews with 20 parents, we identified key challenges they face and their ideas for AI assistance. Using these insights, we developed six design concepts, which were then evaluated by 20 Speech-Language Pathologists (SLPs) for their potential impact, usability, and alignment with therapy goals. Our findings contribute to the discourse on AI’s role in supporting therapeutic practices, offering design considerations that address the needs and values of both families and professionals.
2
ViFeed: Promoting Slow Eating and Food Awareness through Strategic Video Manipulation during Screen-Based Dining
Yang Chen (National University of Singapore, Singapore, Singapore)Felicia Fang-Yi Tan (National University of Singapore, Singapore, Singapore)Zhuoyu Wang (National University of Singapore, Singapore, Singapore)Xing Liu (Hangzhou Holographic Intelligence Institute, Hangzhou, China)Jiayi Zhang (National University of Singapore, Singapore, Singapore)Yun Huang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Shengdong Zhao (City University of Hong Kong, Hong Kong, China)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore, Singapore)
Given the widespread presence of screens during meals, we question the notion that digital engagement is inherently incompatible with mindfulness. We demonstrate how the strategic design of digital content can enhance two core aspects of mindful eating: slow eating and food awareness. Our research unfolded in three sequential studies: (1) Zoom Eating Study: contrary to the assumption that video-watching leads to distraction and overeating, this study revealed that subtle video speed manipulations can promote slower eating (by 15.31%) and controlled food intake (by 9.65%) while maintaining meal satiation and satisfaction. (2) Co-design workshop: informed the development of ViFeed, a video playback system strategically incorporating subtle speed adjustments and glanceable visual cues. (3) Field study: a week-long deployment of ViFeed in daily eating demonstrated its efficacy in fostering food awareness, food appreciation, and sustained engagement. By bridging the gap between ideal mindfulness practices and screen-based behaviors, this work offers insights for designing digital-wellbeing interventions that align with, rather than against, existing habits.
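A minimal sketch of the speed-manipulation idea, with a hypothetical target pace and adjustment bounds: the playback rate is nudged down slightly when the viewer's recent bite rate exceeds the target.

```python
# Hypothetical sketch of the playback-rate idea in ViFeed: slow the video
# slightly when the viewer eats faster than a target pace. The target pace
# and adjustment bounds are illustrative, not values from the paper.
def playback_rate(bites_per_min: float, target_bpm: float = 4.0,
                  min_rate: float = 0.85, max_rate: float = 1.0) -> float:
    """Return a subtly reduced playback rate when eating pace exceeds the target."""
    if bites_per_min <= target_bpm:
        return max_rate
    # Scale the slowdown with how far the pace exceeds the target, capped at min_rate.
    excess = (bites_per_min - target_bpm) / target_bpm
    return max(min_rate, max_rate - 0.1 * excess)

print(playback_rate(3.5))  # -> 1.0  (already at or below the target pace)
print(playback_rate(6.0))  # -> 0.95 (eating 50% faster, slow the video slightly)
```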
2
Learning Behaviors Mediate the Effect of AI-powered Support for Metacognitive Calibration on Learning Outcomes
HaeJin Lee (University of Illinois at Urbana Champaign, Champaign, Illinois, United States)Frank Stinar (University of Illinois Urbana-Champaign, Champaign, Illinois, United States)Ruohan Zong (University of Illinois Urbana-Champaign, Champaign, Illinois, United States)Hannah Valdiviejas (NA, Washington, District of Columbia, United States)Dong Wang (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)Nigel Bosch (University of Illinois at Urbana-Champaign, Champaign, Illinois, United States)
Students struggle to accurately assess their own performance, especially given how little training they receive to do so. We propose an AI-powered training tool to help students improve “metacognitive calibration,” the ability to accurately predict their own learning, potentially enhancing learning outcomes by enabling metacognition-informed learning behaviors. We present results from a randomized controlled trial (N = 133) assessing the effectiveness of the tool in a college-level computer-based learning environment. The AI-driven tool significantly improved learning gains over the control group by 8.9% (t = -2.384, p = .019), and this effect was significantly mediated by learning behaviors. Overconfident students who received the intervention showed significantly greater improvement in metacognitive calibration than the control group, by 4.1% (t = 2.001, p = .049). These insights highlight the value of AI-powered metacognitive calibration training and the importance of promoting specific metacognition-informed learning behaviors in computer-based learning.
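To clarify what a mediated effect means here, the following toy example runs a Baron-Kenny-style regression mediation on simulated data; it is only a conceptual sketch, not the paper's analysis.

```python
# Toy illustration of regression-based mediation with simulated data:
# treatment -> learning behaviors (mediator) -> learning gains.
# This is not the paper's analysis, only a sketch of the concept.
import numpy as np

rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n)                      # 0 = control, 1 = calibration tool
behaviors = 0.8 * treatment + rng.normal(0, 1, n)      # mediator: metacognition-informed behaviors
gains = 0.5 * behaviors + 0.1 * treatment + rng.normal(0, 1, n)

def slopes(y, X):
    """Least-squares coefficients for y ~ X (with intercept); returns X's slopes."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = slopes(gains, treatment)[0]                                   # total effect of treatment
direct = slopes(gains, np.column_stack([treatment, behaviors]))[0]    # effect controlling for mediator
print(f"total={total:.2f}, direct={direct:.2f}, mediated share={(total - direct) / total:.0%}")
```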
2
TH-Wood: Developing Thermo-Hygro-Coordinating Driven Wood Actuators to Enhance Human-Nature Interaction
Guanyun Wang (Zhejiang University, Hangzhou, China)Chuang Chen (Zhejiang University, HangZhou, China)Xiao Jin (Imperial College London, London, United Kingdom)Yulu Chen (University College London, London, United Kingdom)Yangweizhe Zheng (Northeast Forestry University, Harbin, China)Qianzi Zhen (Zhejiang University, HangZhou, China)Yang Zhang (Imperial College London, London, United Kingdom)Jiaji Li (MIT, Cambridge, Massachusetts, United States)Yue Yang (Zhejiang University, Hangzhou, China)Ye Tao (Hangzhou City University, Hangzhou, China)Shijian Luo (Zhejiang University, Hangzhou, Zhejiang, China)Lingyun Sun (Zhejiang University, Hangzhou, China)
Wood has become increasingly applied in shape-changing interfaces for its eco-friendly and smart responsive properties, but its applications face challenges because actuation remains primarily driven by humidity. We propose TH-Wood, a biodegradable actuator system composed of wood veneer and microbial polymers that is driven by both temperature and humidity and capable of functioning in complex outdoor environments. This dual-factor-driven approach enhances the sensing and response channels, allowing for more sophisticated coordinating control methods. To assist in designing and utilizing the system more effectively, we developed a structure library inspired by dynamic plant forms, conducted extensive technical evaluations, created an educational platform accessible to users, and provided a design tool for deformation adjustments and behavior previews. Finally, several ecological applications demonstrate the potential of TH-Wood to significantly enhance human interaction with natural environments and expand the boundaries of human-nature relationships.
2
Designing Physical Interactions with Triboelectric Material Sensing
Xin Liu (National University of Singapore, Singapore, Singapore)Chengkuo Lee (National University of Singapore, Singapore, Singapore)Clement Zheng (National University of Singapore, Singapore, Singapore)Ching Chiuan Yen (National University of Singapore, Singapore, Singapore, Singapore)
Physical interactions in Human-Computer Interaction (HCI) provide immersive ways for people to engage with technology. However, designers face challenges in integrating physical computing and modeling when designing physical interactions. We explore triboelectric material sensing, a promising technology that addresses these challenges, though its use within the design community remains underexplored. To bridge this gap, we develop a toolkit consisting of triboelectric material pairs, a mechanism taxonomy, a signal processing tool, and computer program templates. We introduce this toolkit to designers in two workshops, where reflections on the design process highlight its effectiveness and inspire innovative interaction designs. Our work contributes valuable resources and knowledge to the design community, making triboelectric sensing more accessible and fostering creativity in physical interaction design.
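As a small example of the signal-processing step such a toolkit needs, the sketch below detects the voltage spikes that contact-separation events produce in a sampled triboelectric trace; the threshold and spacing values are hypothetical, not taken from the toolkit.

```python
# Illustrative signal-processing step for triboelectric sensing: detect the
# voltage spikes produced by contact/separation events in a sampled trace.
# Threshold and spacing values are hypothetical.
import numpy as np
from scipy.signal import find_peaks

fs = 1000                                   # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
trace = 0.02 * np.random.default_rng(1).normal(size=t.size)   # baseline noise
for press_time in (0.4, 1.1, 1.6):          # simulate three press events
    trace += 1.0 * np.exp(-((t - press_time) ** 2) / (2 * 0.005 ** 2))

peaks, _ = find_peaks(trace, height=0.5, distance=int(0.2 * fs))
print("press events at", t[peaks], "seconds")  # approximately [0.4, 1.1, 1.6]
```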