Moisture Transfer: A Perceptual Wetness Illusion Through Thermal and Wet Integration
Description

Simulating wetness in interactive systems is challenging due to the lack of dedicated hygroreceptors in human skin and the complexity of delivering physical moisture. We introduce Moisture Transfer, a perceptual wetness illusion in which users feel moisture at a dry site when cold and wet stimuli are applied nearby. This illusion arises from the brain’s spatial integration of thermal and tactile cues, offering a new pathway to render wetness without direct contact. We investigate this illusion by establishing it with a single finger and show that thermal congruence enhances perceived wetness. We then explore its spatial extent across five fingers, revealing lateral transfer of wetness. Finally, we apply these findings to create a proof-of-concept VR interface that evokes full-hand wetness using minimal actuation. We conclude with design implications for XR and wearable systems and outline future work exploring body-wide wetness illusions and multisensory integration.

Can We Infer Object Pose Changes from Hand Movements?
Description

Some Augmented Reality applications require tracking physical objects to anchor virtual elements to them. Despite significant progress in computer vision, achieving robust tracking of manipulated objects remains challenging, notably due to occlusions caused by the hands. Yet the hands carry valuable information: the way an object is grasped and moved is reflected in their shape and motion. We explore the following question: can we infer changes to an object's position and orientation from the shape and movements of the hand manipulating it? We investigate this general approach, which we call Object-from-Hand (ObHa), building three probes, drawing on insights from experimental psychology research on grasping, on our own empirical studies, and on an analysis of the extensive HOT3D dataset of everyday object manipulations. We then discuss the approach's potential either as a complement to vision-based pose tracking solutions or as a coarse standalone pose tracking solution.
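The core idea can be sketched in a few lines, assuming a rigid grasp (the object moves exactly with the hand between frames). This is a minimal illustration of the general principle, not the paper's ObHa probes, which also account for grasp shape:

```python
import numpy as np

def relative_transform(hand_t0: np.ndarray, hand_t1: np.ndarray) -> np.ndarray:
    """Delta transform of the hand between two frames (4x4 homogeneous poses)."""
    return hand_t1 @ np.linalg.inv(hand_t0)

def propagate_object_pose(obj_t0: np.ndarray,
                          hand_t0: np.ndarray,
                          hand_t1: np.ndarray) -> np.ndarray:
    """Under a rigid-grasp assumption the object moves with the hand,
    so the hand's delta transform also applies to the object's pose."""
    return relative_transform(hand_t0, hand_t1) @ obj_t0
```

In practice the rigid-grasp assumption breaks during in-hand manipulation, which is where richer cues from hand shape become necessary.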

AgentHands: Generating Interactive Hand Gestures for Spatially Grounded Agent Conversations in XR
Description

Communicating spatial tasks via text or speech creates "a mental mapping gap" that limits an agent’s expressiveness. Inspired by co-speech gestures in face-to-face conversation, we propose AgentHands, an LLM-powered XR system that equips agents with hands to render responses clearer and more engaging. Guided by a design taxonomy distilled from a formative study (N=10), we implement a novel pipeline to generate and render a hand agent that augments conversational responses with synchronized, space-aware, and interactive hand gestures: using a meta-instruction, AgentHands generates verbal responses embedded with GestureEvents aligned to specific words; each event specifies gesture type and parameters. At runtime, a parser converts events into time-stamped poses and motions, driving an animation system that renders expressive hands synchronized with speech. In a within-subjects study (N=12), AgentHands increased engagement and made spatially grounded conversations easier to follow compared to a speech-only baseline.
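The word-aligned event step can be pictured with a small parsing sketch. The inline tag syntax, field names, and `parse_response` helper below are illustrative assumptions, not AgentHands' actual schema:

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical inline markup: we assume the LLM's verbal response embeds
# gesture tags next to the words they accompany, e.g.
#   "Put the box <gesture type='point' target='shelf'/> over there."
EVENT_RE = re.compile(r"<gesture type='(\w+)'(?: target='(\w+)')?/>")

@dataclass
class GestureEvent:
    word_index: int        # index of the word the gesture is aligned to
    kind: str              # e.g. 'point', 'trace', 'grasp'
    target: Optional[str]  # optional spatial anchor

def parse_response(text: str):
    """Strip inline gesture tags and record which word each event follows."""
    events, clean_parts, pos = [], [], 0
    for m in EVENT_RE.finditer(text):
        clean_parts.append(text[pos:m.start()])
        words_so_far = len("".join(clean_parts).split())
        events.append(GestureEvent(words_so_far - 1, m.group(1), m.group(2)))
        pos = m.end()
    clean_parts.append(text[pos:])
    speech = " ".join("".join(clean_parts).split())
    return speech, events
```

The cleaned speech string goes to text-to-speech while the events, keyed by word index, drive the gesture animation in sync.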

JustShape: Exploring Co-Speech Gestures for Multimodal LLM-Powered 3D Parametric Modeling
Description

Parametric modeling is a prevailing 3D modeling approach in design, architecture, and engineering. The emergence of multimodal large language models (LLMs) brings a new opportunity to lower the entry barriers to this powerful tool. However, describing 3D geometries through natural language can be fuzzy and challenging. We introduce co-speech gestures as a natural and expressive interaction modality that complements text prompts for LLM-empowered generative parametric modeling. We first conducted an elicitation study to explore and categorize co-speech gesture expressions. Based on the findings, we designed a multimodal fusion pipeline that parametrizes gestures and synthesizes them with speech. This approach reduces language ambiguity by translating implicit user intentions into explicit parametric attributes, thus improving model generation performance. We conducted a two-session user study comparing our approach with traditional language and sketch inputs. This work streamlines the parametric modeling workflow and explores novel multimodal interaction paradigms for LLM-empowered design and creation.
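One way to picture the fusion step: a gesture is reduced to explicit numeric attributes that are spliced into the spoken prompt before it reaches the LLM. The `parametrize_stretch` and `fuse_prompt` helpers below are hypothetical, a minimal sketch of the idea rather than JustShape's actual pipeline:

```python
from math import dist

def parametrize_stretch(start_xyz, end_xyz, scale_m_per_unit=1.0):
    """Map a pinch-and-stretch gesture to a concrete length in model units
    (here, simply the distance between the start and end hand positions)."""
    return dist(start_xyz, end_xyz) * scale_m_per_unit

def fuse_prompt(speech: str, gesture_params: dict) -> str:
    """Append gesture-derived attributes so a fuzzy utterance like
    'make it about this long' becomes an explicit parametric instruction."""
    attrs = ", ".join(f"{k}={v:.2f}" for k, v in sorted(gesture_params.items()))
    return f"{speech} [gesture parameters: {attrs}]"
```

For example, `fuse_prompt("Make the table about this long.", {"length_m": 0.5})` yields a prompt in which the deictic phrase is grounded by an explicit `length_m=0.50` attribute.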

MorsEar: Toward Generalizable Low-Resource Covert Messaging via Earable based Inertial Sensing
Description

Silent, eyes-free text entry remains challenging when speech and touch are impractical. Prior wearable systems required custom sensors or limited users to a small vocabulary. We present MorsEar, an IMU-only earable framework that maps near-ear micro-gestures (taps for dot and dash; slide, pull, and circle for space, delete, and send) into character-level Morse code, enabling unrestricted composition with a compact lexicon for lightweight on-device autocorrect. The result is a low-bandwidth, reduced-exposure communication channel that works eyes-free and voice-free in accessibility scenarios, silent zones, and constrained environments. MorsEar infers words using a physics-aware preprocessing stack and a compact CNN, feeding a tempo-adaptive segmentation with rolling buffers; an on-device decoder provides real-time feedback entirely on-phone. In a 24-participant study (including four accessibility users) across Silent, Cafe, and Metro conditions, MorsEar achieved a CER of 7.3% and a WER of 12.5%, reduced to 7.8% with autocorrect, with median WPM of 9.3/9.1/5.8, respectively. Like other accessibility-oriented encodings such as Braille, Morse requires a brief familiarization period to learn the timing and rhythm of dots and dashes; after this period, MorsEar shows that commodity earable IMUs can support discreet, low-exposure text entry that scales beyond discrete commands to language-level interaction.
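The character-level decoding step can be sketched as a small state machine over classified gesture labels. The label names below (`tap_short`, `tap_long`, `slide`, `pull`, `pause`) are illustrative placeholders, not MorsEar's actual gesture vocabulary or decoder:

```python
# International Morse code for letters A-Z.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
         "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
         ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
         "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
         "--..": "Z"}

def decode(gestures):
    """Fold a stream of gesture labels into text; 'pause' ends a letter,
    'slide' inserts a word space, 'pull' deletes the last character."""
    text, letter = [], []
    for g in gestures + ["pause"]:        # trailing pause flushes the last letter
        if g == "tap_short":
            letter.append(".")
        elif g == "tap_long":
            letter.append("-")
        elif g in ("pause", "slide"):     # letter boundary (and maybe a space)
            if letter:
                text.append(MORSE.get("".join(letter), "?"))
                letter = []
            if g == "slide":
                text.append(" ")
        elif g == "pull":
            if text:
                text.pop()
    return "".join(text)
```

For instance, four short taps, a pause, then two short taps decode to "HI". A real decoder would additionally handle timing thresholds and feed the result to autocorrect.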

Which is Warmer, the Cake or the Oven? Unlocking Thermal Conductivity for Virtual Reality Interaction
Description

Thermal feedback has the potential to enrich immersive interaction, yet its role in material discrimination remains underexplored. We designed and evaluated a conductivity model that simulates transient heating and cooling profiles based on an object's material properties. Thirty-eight participants used a Meta Quest 3 headset and WEART TouchDIVER Pro gloves to classify virtual blocks (metal, glass, wood) under three conditions: visual-thermal congruence, thermal-only, and visual-thermal incongruence. Although accuracy decreased as congruence decreased, objects were consistently rated in line with their material conductivity (Metal > Glass > Wood), supporting the validity of the conductivity model. Haptic Experience (HX) ratings of realism, involvement, and harmony remained stable across tasks, while sorting difficulty was lowest under congruent and highest under incongruent visuals. Physiological baselines did affect performance. Our findings demonstrate that conductivity-based thermal rendering enables perceptually reliable material differences in VR, informing the design and application of thermal haptics.
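A standard textbook approximation of such transient contact behavior is the semi-infinite-body contact temperature, weighted by thermal effusivity e = sqrt(k·ρ·c). The sketch below uses this approximation with rough property values; it illustrates why the Metal > Glass > Wood ordering emerges, and is not necessarily the paper's exact model:

```python
from math import sqrt

def effusivity(k, rho, c):
    """Thermal effusivity e = sqrt(k * rho * c), in W*s^0.5/(m^2*K)."""
    return sqrt(k * rho * c)

# Approximate property values (k in W/m/K, rho in kg/m^3, c in J/kg/K).
MATERIALS = {
    "metal": effusivity(50.0, 7800.0, 500.0),   # roughly steel
    "glass": effusivity(1.0, 2500.0, 840.0),
    "wood":  effusivity(0.15, 600.0, 1700.0),
}
SKIN = effusivity(0.37, 1100.0, 3400.0)

def contact_temperature(t_skin, t_material, e_material):
    """Interface temperature of two semi-infinite bodies brought into contact:
    the higher-effusivity side pulls the interface toward its own temperature."""
    return (SKIN * t_skin + e_material * t_material) / (SKIN + e_material)
```

Against 33 °C skin, a 23 °C metal block yields the lowest interface temperature and wood the highest, so metal feels coolest even though all three objects are at room temperature.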

Thermal Masking Across the Human Body: Patterns, Pathways, and Perceptual Boundaries
Description

Thermal masking, a vibration-induced illusion in which concurrent tactile input induces a vivid thermal sensation at the tactile site, is a promising mechanism for wearable interfaces and extended reality because it can deliver rich thermal feedback with minimal hardware. While prior work has examined this phenomenon on limited body parts, its expression across the full body remains underexplored. We present four studies mapping thermal masking across eight regions: head, face, neck, arms, hands, torso, legs, and feet. Results show that masking strength is location-dependent, producing perceptual patterns that align primarily with somatosensory pathways rather than spatial proximity. On smaller regions such as the fingers, masking was localized, while on larger areas such as the torso and neck it extended more broadly. Dorsal–ventral and inter-body tests revealed viable pairings and perceptual boundaries. These findings provide the first comprehensive atlas of body-wide thermal masking, advancing understanding of the phenomenon and guiding efficient thermal–tactile interface design.
