As modern tabletop play becomes more hybrid through the integration of digital tools, hybrid digital boardgames (HDBs) – games which mix physical and digital components – can be seen as ``gimmicky''. Previous work has explored the use of technology in hybrid play settings, but relatively little work exists on what makes hybridity meaningful in HDBs. In this paper, we present a model for understanding how meaningful hybridity is constructed through the relationship between the technology, game, and player. Over twelve months, we convened a monthly Critical Play Reference Group of 21 local players to play and discuss a curated selection of HDBs. We analysed 37 semi-structured group interviews for qualities of meaningful hybridity across 25 unique published HDBs. This model identifies what players assess in their HDB experience and how that maps to their overall perception of hybridity, informing the design and evaluation of meaningful hybrid game experiences.
Dexterous finger movements are critical for both everyday and specialized tasks. However, acquiring such skills is challenging, as it requires accurate sequence memory and fine finger coordination. Existing haptic training systems typically employ demonstration feedback, which physically guides correct movements, or post-error correction, which intervenes after errors occur. While effective, these approaches can reduce learners’ autonomy or expose novices to repeated errors, which can harm motivation.
We introduce FIXical I/O, a magnetic hand exoskeleton that enables three error feedback strategies by combining real-time motion sensing with electromagnet-based actuation: Preemptive Error Correction (nudging fingers away from incorrect actions), Preemptive Error Blocking (constraining erroneous movements before execution), and Post-Error Correction.
We conducted a user study comparing these strategies in terms of learning performance and subjective experiences, such as perceived performance and sense of agency, thereby demonstrating the benefits of Preemptive Error Correction and providing design implications.
The absence of physical information during hand-object interaction in a virtual environment diminishes realism and immersion. Kinesthetic haptic feedback has proven effective in delivering realistic object-derived haptic cues, enhancing the overall virtual reality (VR) experience. Here, we propose kinesthetic illusion through a novel application of finger tendon vibration (FTV), which creates an illusory sensation of finger movement. To effectively apply FTV for virtual object interactions, we first examine the effects of short-duration FTV (<5 s) through 3 perception studies. Based on the study results, we design 6 exemplary VR scenarios, representing the overall design space of VR object interactions, and 4 different haptic rendering strategies for FTV. We evaluate these rendering methods on each VR scenario and derive a design guideline for FTV application. We then compare FTV with no vibration and simple vibration, observing that FTV enhances the VR experience by providing realistic resistance on the finger, greatly improving body ownership.
Interactive electrical-muscle-stimulation (EMS) supports motor-skills by actuating the user’s muscles. However, existing EMS-interfaces exclusively focus on demonstrating movements/sequences (e.g., which fingers to actuate to play a piano melody) and have not investigated EMS for skills requiring precise force application (e.g., playing musical instruments, practicing culinary techniques, operating force-sensitive tools). Our user study found that when EMS-interfaces demonstrate a force, participants trying to recall this force overshoot by a median of 19%, with especially larger overshoots at lower target-forces (e.g., producing a ∼1.2 kg force after a 1 kg demonstration). This force mismatch renders EMS-interfaces unable to accurately demonstrate forces, drastically limiting the growing potential of EMS for HCI. To significantly improve on this, we modeled users’ recall of EMS-demonstrated forces. This model allows adjusting EMS-interfaces to render a target force that, when recalled, best matches the intended force; in our study, this improved median force recall by ∼35%.
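The adjustment idea described above can be sketched in a few lines. This is an illustrative assumption only: the abstract does not give the authors' actual model, so the sketch assumes a simple linear recall model (recalled ≈ a · demonstrated + b) fitted to hypothetical calibration data, then inverts it to pick the force to render. The data values and the `force_to_render` helper are invented for illustration.

```python
import numpy as np

# Hypothetical calibration pairs (kg): demonstrated force vs. the force the
# participant later recalled. Lower forces are overshot more, as in the study.
demonstrated = np.array([0.5, 1.0, 2.0, 4.0])
recalled = np.array([0.68, 1.19, 2.20, 4.30])

# Fit an assumed linear recall model: recalled ≈ a * demonstrated + b
a, b = np.polyfit(demonstrated, recalled, 1)

def force_to_render(intended):
    """Invert the recall model: choose the demonstrated force whose
    predicted recall matches the intended force."""
    return (intended - b) / a

# To have the user recall ~1.0 kg, demonstrate a slightly lower force,
# compensating for the overshoot.
print(force_to_render(1.0))  # a value below 1.0 kg
```

In practice the paper's model could be nonlinear; the point of the sketch is only the inversion step, i.e., rendering a corrected force so that the *recalled* force lands on target.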
Perceiving material properties such as elasticity, flexibility, and torsion is inherently bimanual, as we rely on the relative motion of our hands to form a unified sense of materiality. Yet, most vibrotactile material rendering approaches are limited to a single hand or finger. While prior work has explored bimanual haptic interfaces, most depend on specialized hardware for specific interactions. In this paper, we demonstrate design strategies to support bimanual material exploration through motion-coupled vibrotactile feedback. Our technique introduces variable crosstalk between the controllers' vibration to evoke connectedness, making two unconnected devices feel as though they manipulate a single object. The technique generalizes motion-coupled feedback approaches beyond previous single-point explorations. Through two user studies, we show that this approach (1) significantly enhances perceived connectedness and (2) conveys distinct material qualities such as elasticity and torsion. Finally, we present \textit{Dvihastīya}, an authoring tool for designing connected bimanual experiences in virtual reality.
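The variable-crosstalk technique above can be illustrated with a minimal linear mix, under the assumption (not stated in the abstract) that crosstalk blends a fraction of each controller's motion-coupled vibration signal into its partner's output; `apply_crosstalk` is a hypothetical helper, not the paper's implementation.

```python
def apply_crosstalk(left_sig, right_sig, alpha):
    """Mix a fraction alpha of the partner controller's vibration signal
    into each hand's output. alpha = 0 keeps the controllers independent;
    higher alpha makes them feel coupled, as if manipulating one object."""
    mixed_left = [(1 - alpha) * l + alpha * r for l, r in zip(left_sig, right_sig)]
    mixed_right = [(1 - alpha) * r + alpha * l for l, r in zip(left_sig, right_sig)]
    return mixed_left, mixed_right

# With alpha = 0.5 the two hands receive identical, fully shared feedback.
left, right = apply_crosstalk([1.0, 0.0], [0.0, 1.0], 0.5)
```

Varying `alpha` at runtime would let a designer tune perceived connectedness per material, e.g., stiffer couplings for torsion than for elastic stretching.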
The sense of agency, the internal feeling of controlling one's actions and their outcomes, fundamentally shapes user interaction in virtual environments. Although extensively studied for its subjective impact, agency’s implicit yet critical role in guiding visual-spatial attention has been largely overlooked. This study investigated whether the sense of agency, conferred by prior active control, enhances subsequent visual search efficiency for the previously controlled stimulus under conditions of attentional, temporal, and spatial perturbation. Our results show that agency-driven attentional benefits are remarkably robust, persisting despite competing salient visual distractors, delayed outcomes, and changes in spatial layout. Furthermore, when a delay was introduced between the control and visual search, the agency effect attenuated for targets presented beyond the operators’ peripersonal space. These findings provide valuable insights for constructing immersive user experiences and advancing theoretical frameworks in human-computer interaction, suggesting efficient strategies to support sustained user engagement in virtual and augmented reality.
Cross-sensory correspondences provide opportunities for designing rich sensory HCI, with prior work showing that features such as roundness and sharpness are systematically linked to language, color, sound, and emotion. Yet two challenges remain: few technologies can dynamically transition between these features, and little is known about the thresholds at which a form is judged as sufficiently rounded or spiky to realize these cross-sensory effects. We present triMorph, a pneumatic shape-changing interface capable of smoothly morphing between spiky, flat, and rounded configurations. In a psychophysical study with 30 participants, we quantified perceptual accuracy and precision in mapping triMorph shapes to visual-linguistic categories and examined shape–color and shape–emotion correspondences. Results reveal threshold values for reliable categorization, with rounded shapes linked to pleasant emotions and lighter colors, and spiky shapes to arousal and darker tones. Our findings provide empirical foundations and design guidelines for grounding shape-changing artifacts more firmly in cross-sensory cognition.