Social touch is a rich channel of human communication, conveying emotion, intent, and meaning embedded in context. Yet most HCI studies treat touch in isolation, overlooking the layered subtleties that shape interpretation. We present a contextual analysis of 5,016 social touch events, grounded in a large collection of annotated scenes from films, dramas, and documentaries. Using a computer vision pipeline, we segmented touch events from video and annotated them across dimensions, including who is involved, how the gesture is performed, where on the body it occurs, and the cultural backdrop. Our analysis shows that identical gestures can convey distinct meanings depending on body location, relationship type, and context. Similar intentions—like comfort, encouragement, or dominance—may be expressed through different gestures or locations, shaped by relational dynamics, cultural norms, and public or private settings. These insights inform the design of socially aware touch technologies, including avatars, social agents, and mediated communication systems.
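To make the annotation dimensions concrete, here is a minimal sketch of the kind of per-event record such a pipeline might produce; the field names and example values are our assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    """One annotated social touch event.

    A hedged sketch of the record an annotation pipeline might emit;
    every field name here is an assumption, not the authors' schema.
    """
    gesture: str           # how the gesture is performed, e.g., "pat"
    body_location: str     # where on the body it occurs, e.g., "shoulder"
    relationship: str      # who is involved, e.g., "parent-child"
    setting: str           # e.g., "public" or "private"
    culture: str           # cultural backdrop of the scene
    inferred_meaning: str  # e.g., "comfort", "encouragement", "dominance"

# Example query: the same gesture carrying different meanings by location.
events = [
    TouchEvent("pat", "head", "parent-child", "private", "East Asian", "affection"),
    TouchEvent("pat", "shoulder", "colleagues", "public", "Western", "encouragement"),
]
meanings = {(e.gesture, e.body_location): e.inferred_meaning for e in events}
```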
The rapid advancement of generative AI (GenAI) is expanding access to songwriting, offering a new medium of self-expression for Deaf and Hard-of-Hearing (DHH) individuals. However, emerging technologies that support DHH individuals in expressing themselves through music have largely been evaluated in single-session settings and often fall short in helping users unfamiliar with songwriting convey personal narratives or sustain engagement over time. This paper explores songwriting as an extended, music-based journaling practice that supports sustained emotional reflection over multiple sessions. We introduce SoulNote, a GenAI system that enables DHH individuals to engage in iterative songwriting. Through a user-centered design process, comprising a design workshop, a preliminary study, and a multi-session diary study, we found that ongoing songwriting with SoulNote facilitated emotional growth across three dimensions: self-insight, emotion regulation, and everyday attitudes toward emotions and self-care. Overall, this work demonstrates how GenAI can support marginalized communities by transforming creative expression into a daily practice of self-discovery and reflection.
Well-designed haptic interactions can improve user experience, but implementing them is often challenging. Haptic authoring tools help alleviate this difficulty, but different tools suit different hardware, applications, and practitioners. As such, novices to haptic design need to determine which tools, if any, are suitable for their projects. Similarly, practitioners improving authoring tools and extending them to new contexts need to understand advancements made in prior work to identify opportunities for improvement. Unfortunately, the haptic authoring literature is disorganized, hindering practitioners in both tasks. To remedy these shortcomings, we developed a taxonomy of haptic authoring tools, used it to identify gaps in the literature, and analyzed trends in the development of the tools. We also present the systematic literature review and the study with thirteen haptics practitioners that produced these results, then discuss their implications for future research in haptic authoring.
Vibrotactile experiences (VTX) consist of a multitude of design parameters and experiential dimensions that can be challenging to communicate visually. To understand how this is commonly done in scientific communication, we systematically reviewed VTX illustrations in academic publications. Using inductive and deductive methods, we built a taxonomy detailing characteristics of VTX illustrations, focusing on what is illustrated and how it is depicted. Using the taxonomy, we coded a total of 768 figures spanning 409 publications. These results indicate that (1) half of the illustrations communicate the timing of vibrotactile feedback with regard to users' actions, (2) illustrations depict stimuli rather than experiences and infrequently communicate multimodal aspects of the experiences, and (3) contextual information of vibrotactile displays and experiential aspects are often distributed across several complementary figures. We conclude by discussing the benefits and limitations of this taxonomy for supporting the design process.
Digital images remain largely inaccessible to blind or visually impaired (BVI) people because alt-text rarely conveys how materials feel or sound. We augment material images with multimodal vibrotactile patterns and evaluate four generation pipelines:
AP1: a prompt with a one-shot example,
AP2: a prompt to audio, then to a pattern,
AP3: a real finger–material recording converted to a pattern, and
AP4: patterns retrieved from a public haptic database.
A custom multilocal vibrotactile tablet played patterns on 10 material images (e.g., wood, stone, glass). Eight BVI participants explored each image with four patterns and ranked the best match. Think-aloud feedback highlighted:
Theme 1: realism (rough/grainy for wood and stone; smooth/steady for glass),
Theme 2: distinctiveness (separable cues; uniform buzzes were criticized),
Theme 3: personal associations,
Theme 4: effort and calibration (faint or noisy patterns; intensity tuning), and
Theme 5: preferences and suggestions.
AP3 felt most authentic; AI-generated patterns aided clarity but seemed stylized. Exploratory rankings (n=8) favored hybrid, user-tunable pipelines for accessible material perception (AP3 median 3/4; AI pipeline medians 2/4); a minimal sketch of an AP3-style conversion follows.
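To ground the pipeline comparison, below is a hedged sketch of an AP3-style conversion from a finger–material acceleration recording to an actuator drive envelope; the passband, filter order, and envelope method are our assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def recording_to_pattern(accel: np.ndarray, fs: float,
                         band=(20.0, 400.0)) -> np.ndarray:
    """Convert a finger-material acceleration trace into a drive envelope.

    A minimal AP3-style sketch: band-pass to a vibrotactile-sensitive
    frequency range, then take the amplitude envelope as the actuator
    intensity. Band limits and the envelope method are assumptions.
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, accel)        # zero-phase band-pass
    envelope = np.abs(hilbert(filtered))      # amplitude envelope
    return envelope / (envelope.max() + 1e-9) # normalize to [0, 1]
```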
Tactile exploration is essential for blind and low vision (BLV) individuals to understand objects and spaces. Yet little is known about how camera-based devices can support hand-centric exploration: tactilely examining exhibits while inquiring about and processing information. We investigate a finger-worn ring camera that captures images from the palm side while allowing tactile exploration, comparing it with hand-centered smartphones. We conducted a Wizard-of-Oz study with 11 BLV participants in a science museum. Results showed that the ring camera supported effective bimanual strategies: exploring with both hands, lifting the camera-worn hand while keeping the other as an anchor during inquiry, and resuming bimanual touch for information processing. In contrast, smartphones led to effortful, fragmented exploration. Building on these findings, we developed an interactive system and evaluated its reliability and practicality with 6 BLV participants. We contribute insights and design implications for wearable camera systems that augment tactile exploration in real-world settings.
We explore how humanoid robots can be repurposed as haptic media, extending beyond their conventional roles as social, assistive, or collaborative agents. To illustrate this approach, we implemented HumanoidTurk, a first step toward a humanoid-based haptic system that translates in-game g-force signals into synchronized motion feedback in VR driving. A pilot study with six participants compared two synthesis methods, leading us to adopt a filter-based approach for smoother, more realistic feedback. A subsequent study with sixteen participants evaluated four conditions: no-feedback, controller, humanoid+controller, and human+controller. Results showed that humanoid feedback enhanced immersion, realism, and enjoyment, while introducing moderate costs in comfort and simulation sickness. Interviews further highlighted the robot's consistency and predictability in contrast to the adaptability of human feedback. From these findings, we identify fidelity, adaptability, and versatility as emerging themes, positioning humanoids as a distinct haptic modality for immersive VR.
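As a hedged illustration of what a filter-based synthesis method might look like, the sketch below low-passes a g-force trace into smoothed motion commands; the filter order, cutoff, gain, and clipping range are all our assumptions rather than the paper's actual parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def gforce_to_motion(g: np.ndarray, fs: float, cutoff=2.0,
                     gain=0.5, limit=1.0) -> np.ndarray:
    """Map in-game g-force to humanoid motion commands (illustrative).

    A filter-based sketch in the spirit of the approach the pilot study
    favored: low-pass the g-force trace so the robot tracks sustained
    acceleration rather than frame-to-frame jitter, then scale and clip
    to a safe actuator range. Cutoff, gain, and limit are assumptions.
    """
    sos = butter(2, cutoff, btype="lowpass", fs=fs, output="sos")
    smooth = sosfilt(sos, g)  # causal filter, usable in real time
    return np.clip(gain * smooth, -limit, limit)
```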