Reflecting on Haptics

Conference Name
CHI 2026
Touch with Meaning: A Contextual Analysis of Social Touch
Abstract

Social touch is a rich channel of human communication, conveying emotion, intent, and meaning embedded in context. Yet most HCI studies treat touch in isolation, overlooking the layered subtleties that shape interpretation. We present a contextual analysis of 5,016 social touch events, grounded in a large collection of annotated scenes from films, dramas, and documentaries. Using a computer vision pipeline, we segmented touch events from video and annotated them across dimensions, including who is involved, how the gesture is performed, where on the body it occurs, and the cultural backdrop. Our analysis shows that identical gestures can convey distinct meanings depending on body location, relationship type, and context. Similar intentions—like comfort, encouragement, or dominance—may be expressed through different gestures or locations, shaped by relational dynamics, cultural norms, and public or private settings. These insights inform the design of socially aware touch technologies, including avatars, social agents, and mediated communication systems.
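For readers implementing similar analyses, the annotation dimensions above map naturally onto a per-event record. The following Python sketch is purely illustrative; the field names and example values are our assumptions, not the authors' actual schema:

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    """Hypothetical record for one annotated social-touch event.

    Field names and values are illustrative, not the paper's schema.
    """
    video_id: str         # source clip the event was segmented from
    start_s: float        # event onset within the clip, in seconds
    end_s: float          # event offset, in seconds
    initiator: str        # who performs the touch
    recipient: str        # who receives it
    relationship: str     # relational context (e.g., "family", "strangers")
    gesture: str          # how it is performed (e.g., "pat", "stroke")
    body_location: str    # where on the body (e.g., "shoulder", "head")
    setting: str          # public or private backdrop
    culture: str          # cultural backdrop of the scene
    inferred_intent: str  # annotated meaning (e.g., "comfort", "dominance")

# The same gesture can carry different meanings depending on body
# location and relationship, as the analysis above reports:
pat_encourage = TouchEvent("clip_0042", 12.3, 14.1, "coach", "athlete",
                           "professional", "pat", "back", "public",
                           "US film", "encouragement")
pat_dominate = TouchEvent("clip_0917", 3.0, 4.2, "boss", "employee",
                          "hierarchical", "pat", "head", "office",
                          "US film", "dominance")
```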

Authors
Ayush Bhardwaj
The University of Texas at Dallas, Richardson, Texas, United States
Ashish Pratap
The University of Texas at Dallas, Richardson, Texas, United States
Abbas Khawaja
The University of Texas at Dallas, Richardson, Texas, United States
Yapeng Tian
The University of Texas at Dallas, Richardson, Texas, United States
Uison Ju
Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of
Dajin Lee
Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of
Seungmoon Choi
Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk, Korea, Republic of
Jin Ryong Kim
The University of Texas at Dallas, Richardson, Texas, United States
From Daily Song to Daily Self: Supporting Emotional Growth of Deaf and Hard-of-Hearing Individuals through Generative AI Songwriting
Abstract

The rapid advancement of generative AI (GenAI) is expanding access to songwriting, offering a new medium of self-expression for Deaf and Hard-of-Hearing (DHH) individuals. However, emerging technologies that support DHH individuals in expressing themselves through music have largely been evaluated in single-session settings and often fall short in helping users unfamiliar with songwriting convey personal narratives or sustain engagement over time. This paper explores songwriting as an extended, music-based journaling practice that supports sustained emotional reflection over multiple sessions. We introduce SoulNote, a GenAI system that enables DHH individuals to engage in iterative songwriting. Grounded in user-centered design, including a design workshop, a preliminary study, and a multi-session diary study, our findings show that ongoing songwriting with SoulNote facilitated emotional growth across three dimensions: self-insight, emotion regulation, and everyday attitudes toward emotions and self-care. Overall, this work demonstrates how GenAI can support marginalized communities by transforming creative expression into a daily practice of self-discovery and reflection.

Authors
Youjin Choi
Gwangju Institute of Science and Technology, Buk-gu, Gwangju, Korea, Republic of
JinYoung Yoo
Gwangju Institute of Science and Technology, Buk-gu, Gwangju, Korea, Republic of
JaeYoung Moon
Gwangju Institute of Science and Technology, Buk-gu, Gwangju, Korea, Republic of
Yoonjae Kim
Gwangju Institute of Science and Technology, Buk-gu, Gwangju, Korea, Republic of
Eun Young Lee
Ewha Womans University, Seoul, Korea, Republic of
Jennifer G. Kim
Georgia Institute of Technology, Atlanta, Georgia, United States
Jin-Hyuk Hong
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Understanding Haptic Authoring Tools through a Taxonomy and Descriptive Analysis
Abstract

Well-designed haptic interactions can improve user experience, but implementing them is often challenging. Haptic authoring tools help alleviate this difficulty, but different tools suit different hardware, applications, and practitioners. As such, novices to haptic design need to determine which tools, if any, are suitable for their projects. Similarly, practitioners improving authoring tools and extending them to new contexts need to understand advancements made in prior work to identify opportunities for improvement. Unfortunately, the haptic authoring literature is disorganized, hindering practitioners in both tasks. To remedy these shortcomings, we developed a taxonomy of haptic authoring tools, used it to identify gaps in the literature, and analyzed trends in the development of the tools. We also present the systematic literature review and the study with thirteen haptics practitioners used to produce these results, then discuss what they mean for future research in haptic authoring.

Authors
Juliette Regimbal
McGill University, Montréal, Quebec, Canada
Cyan Kuo
McGill University, Montréal, Quebec, Canada
Jeremy R. Cooperstock
McGill University, Montréal, Quebec, Canada
How are Vibrotactile Experiences Visually Represented? A Taxonomy of Illustration Characteristics
Abstract

Vibrotactile experiences (VTX) consist of a multitude of design parameters and experiential dimensions that can be challenging to communicate visually. To understand how this is commonly done in scientific communication, we systematically reviewed VTX illustrations in academic publications. Using inductive and deductive methods, we built a taxonomy detailing characteristics of VTX illustrations that focuses on what is illustrated and how it is depicted. Using the taxonomy, we coded a total of 768 figures spanning 409 publications. These results indicate that (1) half of the illustrations communicate the timing of vibrotactile feedback relative to users' actions, (2) illustrations depict stimuli rather than experiences and infrequently communicate multimodal aspects of the experiences, and (3) contextual information of vibrotactile displays and experiential aspects are often distributed across several complementary figures. We conclude by discussing the benefits and limitations of this taxonomy to support the design process.
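As an aside for readers who want to reproduce this kind of analysis, coding with such a taxonomy reduces each figure to a set of codes that can then be tallied. The sketch below is illustrative only; the codes and counts are invented, not the paper's data:

```python
from collections import Counter

# Hypothetical coded figures: each figure maps to the set of taxonomy
# codes it received. The codes are invented stand-ins.
coded_figures = [
    {"depicts:stimulus", "timing:user-action"},
    {"depicts:stimulus", "hardware:actuator-placement"},
    {"depicts:experience", "timing:user-action", "modality:audio"},
    {"depicts:stimulus"},
]

# Tallying code frequencies yields findings of the form reported above,
# e.g. the share of illustrations communicating feedback timing.
counts = Counter(code for figure in coded_figures for code in figure)
timing_share = counts["timing:user-action"] / len(coded_figures)
print(f"{timing_share:.0%} of figures communicate timing")  # 50% here
```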

Award
Best Paper
Authors
Bruno Fruchard
Univ. Lille, Inria, CNRS, Centrale Lille, F-59000 Lille, France
Dennis Wittchen
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Nihar Sabnis
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Paul Strohmeier
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Donald Degraen
University of Canterbury, Christchurch, New Zealand
Augmenting Imagery with Multimodal Vibrotactile Representations: Touch, Feel, and Hear
Abstract

Digital images remain largely inaccessible to blind or visually impaired (BVI) people because alt-text rarely conveys how materials feel or sound. We augment material images with multimodal vibrotactile patterns and evaluate four generation pipelines: AP1, a prompt with a one-shot example; AP2, prompt to audio, then pattern; AP3, a real finger–material recording converted to a pattern; and AP4, patterns from a public haptic database. A custom multilocal vibrotactile tablet played patterns on 10 material images (e.g., wood, stone, glass). Eight BVI participants explored each image with four patterns and ranked the best match. Think-aloud feedback highlighted Theme 1 (realism: rough/grainy for wood and stone; smooth/steady for glass), Theme 2 (distinctiveness: separable cues; uniform buzzes were criticized), Theme 3 (personal associations), Theme 4 (effort and calibration: faint or noisy patterns; intensity tuning), and Theme 5 (preferences and suggestions). AP3 felt most authentic; AI-generated patterns aided clarity but seemed stylized. Exploratory rankings (n=8) supported hybrid, user-tunable pipelines for accessible material perception (AP3 median 3/4, AI medians 2/4).
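For context, AP2 and AP3 both end with an audio-like signal that must become a drivable vibrotactile pattern. One common conversion, sketched below under our own assumptions (the paper's actual pipelines are not specified here), is to band-limit the signal to the skin's vibrotactile-sensitive range and extract its amplitude envelope:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def audio_to_vibrotactile(signal, fs, band=(50.0, 400.0)):
    """Convert an audio-like signal into a normalized intensity envelope.

    A generic sketch, not the paper's method: band-pass to the range
    where skin sensitivity to vibration peaks, then take the amplitude
    envelope to modulate a fixed-frequency actuator carrier.
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    tactile = sosfiltfilt(sos, np.asarray(signal, dtype=float))
    envelope = np.abs(hilbert(tactile))         # analytic-signal envelope
    peak = max(float(np.max(envelope)), 1e-12)  # avoid divide-by-zero
    return envelope / peak

# Usage: a synthetic 1 s "finger dragged over stone" recording at 8 kHz.
fs = 8000
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, fs, endpoint=False)
recording = rng.normal(size=fs) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))
pattern = audio_to_vibrotactile(recording, fs)
```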

Authors
Mazen Salous
OFFIS Institute for Information Technology, Oldenburg, Germany
Matthias Kramer
OFFIS Institute for Information Technology, Oldenburg, Germany
Wilko Heuten
OFFIS Institute for Information Technology, Oldenburg, Germany
Charles Hudin
CEA Tech, Gif-sur-Yvette, France
Susanne Boll
University of Oldenburg, Oldenburg, Germany
Larbi Abdenebaoui
OFFIS Institute for Information Technology, Oldenburg, Germany
Eyes on the Finger: Investigating a Ring-Shaped Camera for Seamless Accessible Tactile Exploration
Abstract

Tactile exploration is essential for blind and low vision (BLV) individuals to understand objects and spaces. Yet little is known about how camera-based devices can support hand-centric exploration: tactilely examining exhibits while inquiring about and processing information. We investigate a finger-worn ring camera that captures images from the palm side while allowing tactile exploration, comparing it with hand-centered smartphones. We conducted a Wizard-of-Oz study with 11 BLV participants in a science museum. Results showed that the ring camera supported effective bimanual strategies: exploring with both hands, lifting the camera-worn hand while keeping the other as an anchor during inquiry, and resuming bimanual touch for information processing. In contrast, smartphones led to effortful, fragmented exploration. Building on these findings, we developed an interactive system and evaluated its reliability and practicality with 6 BLV participants. We contribute insights and design implications for wearable camera systems that augment tactile exploration in real-world settings.

Authors
Ayaka Tsutsui
University of Tsukuba, Tsukuba, Japan
Xiyue Wang
Miraikan - The National Museum of Emerging Science and Innovation, Tokyo, Japan
Hironobu Takagi
IBM Research - Tokyo, Tokyo, Japan
Yoichi Ochiai
University of Tsukuba, Tsukuba, Japan
Chieko Asakawa
IBM, Yorktown Heights, New York, United States
HumanoidTurk: Expanding VR Haptics with Humanoids for Driving Simulations
Abstract

We explore how humanoid robots can be repurposed as haptic media, extending beyond their conventional role as social, assistive, and collaborative agents. To illustrate this approach, we implemented HumanoidTurk, taking a first step toward a humanoid-based haptic system that translates in-game g-force signals into synchronized motion feedback in VR driving. A pilot study involving six participants compared two synthesis methods, leading us to adopt a filter-based approach for smoother and more realistic feedback. A subsequent study with sixteen participants evaluated four conditions: no-feedback, controller, humanoid+controller, and human+controller. Results showed that humanoid feedback enhanced immersion, realism, and enjoyment, while introducing moderate costs in comfort and simulation sickness. Interviews further highlighted the robot's consistency and predictability in contrast to the adaptability of human feedback. From these findings, we identify fidelity, adaptability, and versatility as emerging themes, positioning humanoids as a distinct haptic modality for immersive VR.
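The abstract names a filter-based synthesis method without detail; one plausible reading, sketched below with invented gains and limits, is a first-order low-pass filter on the game's g-force stream mapped to bounded humanoid lean targets:

```python
class GForceToMotion:
    """Filter-based mapping from in-game g-forces to humanoid lean targets.

    A sketch only: the smoothing factor, gain, and tilt limits are
    invented, not HumanoidTurk's actual parameters.
    """

    def __init__(self, alpha=0.1, gain_deg_per_g=8.0, max_tilt_deg=15.0):
        self.alpha = alpha          # low-pass smoothing factor (0..1)
        self.gain = gain_deg_per_g  # degrees of lean per g of force
        self.limit = max_tilt_deg   # hard cap to keep motion safe
        self._lat = 0.0             # filtered lateral g
        self._lon = 0.0             # filtered longitudinal g

    def _clamp(self, x):
        return max(-self.limit, min(self.limit, x))

    def step(self, g_lateral, g_longitudinal):
        # Exponential moving average suppresses per-frame physics jitter
        # so the humanoid's motion feels smooth rather than twitchy.
        self._lat += self.alpha * (g_lateral - self._lat)
        self._lon += self.alpha * (g_longitudinal - self._lon)
        roll = self._clamp(self._lat * self.gain)   # lean into turns
        pitch = self._clamp(self._lon * self.gain)  # braking/acceleration
        return roll, pitch

# Usage: feed per-frame g-force samples from the driving simulation.
mapper = GForceToMotion()
for g_lat, g_lon in [(0.0, 0.0), (0.8, -0.2), (1.2, -0.5)]:
    roll_cmd, pitch_cmd = mapper.step(g_lat, g_lon)
```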

Authors
DaeHo Lee
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
Ryo Suzuki
University of Colorado Boulder, Boulder, Colorado, United States
Jin-Hyuk Hong
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of