Can we crowdsource Tacton similarity perception and metaphor ratings?
Description

High-fidelity vibration actuators in recent mobile phones allow designers to crowdsource user evaluation of vibrotactile (VT) Tactons. Yet, little work has examined whether online crowdsourcing platforms can provide results comparable to lab studies. To address this question, we conducted two studies with iOS devices in lab and crowdsourced settings. In Study I, 40 users provided pairwise similarity ratings for 12 VT Tactons that varied in their parameters (e.g., duration). In Study II, 40 new users rated pairwise similarities for 14 Tactons representing different metaphors (e.g., heartbeat). They also rated the Tactons' match to the metaphors. In both studies, the resulting similarities and perceptual spaces strongly correlated between the lab and crowdsourced settings. Furthermore, 60% of the metaphor ratings were statistically equivalent in the two settings. We discuss the results and outline directions for future work on haptic crowdsourcing.
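The lab/crowd comparison described above hinges on correlating pairwise similarity matrices and the perceptual spaces recovered from them, typically via multidimensional scaling (MDS). Below is a minimal Python sketch of that analysis pattern, not the authors' code: the similarity matrices are synthetic, and metric MDS plus a Procrustes comparison stand in for whatever analysis the paper actually used.

```python
# Sketch: compare lab vs. crowdsourced Tacton similarity data (synthetic).
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial import procrustes
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n = 12  # number of Tactons in Study I

# Synthetic symmetric similarity matrices standing in for mean ratings.
base = rng.random((n, n))
lab = (base + base.T) / 2
np.fill_diagonal(lab, 1.0)

noise = rng.normal(scale=0.05, size=(n, n))
crowd = np.clip(lab + (noise + noise.T) / 2, 0.0, 1.0)  # crowd ~= lab + noise
np.fill_diagonal(crowd, 1.0)

# 1) Correlate the off-diagonal pairwise similarities across settings.
iu = np.triu_indices(n, k=1)
r, p = pearsonr(lab[iu], crowd[iu])
print(f"lab vs. crowd similarity: r = {r:.2f}, p = {p:.3g}")

# 2) Recover 2-D perceptual spaces from dissimilarities with metric MDS,
#    then compare the two spaces via Procrustes disparity (0 = identical).
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
space_lab = mds.fit_transform(1.0 - lab)
space_crowd = mds.fit_transform(1.0 - crowd)
_, _, disparity = procrustes(space_lab, space_crowd)
print(f"Procrustes disparity between perceptual spaces: {disparity:.3f}")
```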

HandAvatar: Embodying Non-Humanoid Virtual Avatars through Hands
Description

We propose HandAvatar to enable users to embody non-humanoid avatars using their hands. HandAvatar leverages the high dexterity and coordination of users' hands to control virtual avatars through our novel approach for automatically generated joint-to-joint mappings. We contribute an observation study to understand users' preferences for hand-to-avatar mappings on eight avatars. Leveraging insights from the study, we present an automated approach that generates mappings between users' hands and arbitrary virtual avatars by jointly optimizing control precision, structural similarity, and comfort. We evaluated HandAvatar on static posing, dynamic animation, and creative exploration tasks. Results indicate that HandAvatar enables more precise control, requires less physical effort, and achieves comparable embodiment relative to a state-of-the-art body-to-avatar control method. We demonstrate HandAvatar's potential with applications including non-humanoid-avatar-based social interaction in VR, 3D animation composition, and VR scene design with physical proxies. We believe that HandAvatar unlocks new interaction opportunities, especially in Virtual Reality, by letting users become the avatar in applications such as virtual social interaction, animation, gaming, and education.
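The mapping generation described above can be read as an assignment problem: score every hand-joint/avatar-joint pair on the three criteria and pick the best one-to-one mapping. The sketch below illustrates that reading with the Hungarian algorithm; the cost terms, weights, and joint counts are invented placeholders, not the paper's actual objective or optimizer.

```python
# Sketch: joint-to-joint mapping as a weighted assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n_hand, n_avatar = 15, 15  # assume equal joint counts for a square assignment

# Placeholder per-pair costs in [0, 1]; in practice these would come from
# motion data (precision), skeleton comparison (structure), and ergonomics.
precision_cost = rng.random((n_hand, n_avatar))
structure_cost = rng.random((n_hand, n_avatar))
comfort_cost = rng.random((n_hand, n_avatar))

w_p, w_s, w_c = 0.4, 0.4, 0.2  # assumed trade-off weights
cost = w_p * precision_cost + w_s * structure_cost + w_c * comfort_cost

# Hungarian algorithm: optimal one-to-one hand-to-avatar joint mapping.
hand_idx, avatar_idx = linear_sum_assignment(cost)
mapping = dict(zip(hand_idx.tolist(), avatar_idx.tolist()))
print(f"total mapping cost: {cost[hand_idx, avatar_idx].sum():.3f}")
```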

Grab It, While You Can: A VR Gesture Evaluation of a Co-Designed Traditional Narrative by Indigenous People
Description

Recent developments in Virtual Reality (VR) applications, such as hand gesture tracking, provide new opportunities to create embodied user experiences. Numerous gesture elicitation studies have been conducted; however, in most instances they lack validation of the implemented gestures, as well as diversity of participant groups. Our research explores the digitization of intangible cultural heritage in collaboration with one of the San tribes in Southern Africa. The focus is on particular gestures as embodied interactions in a VR implementation of a traditional San hunting story. In this paper, we present a gesture study, which entails an in-situ elicitation of natural gestures, a co-designed integration, a VR story implementation with grasping and three mid-air gestures, and a user evaluation. Based on our findings, we discuss the anthropological value of gesture implementations determined by an indigenous community, the local usability of a grasping gesture, and in-VR gesture elicitation as an extension of existing methods.

HOOV: Hand Out-Of-View Tracking for Proprioceptive Interaction Using Inertial Sensing
Description

Current Virtual Reality systems are designed for interaction under visual control. Using built-in cameras, headsets track the user's hands or hand-held controllers while they are inside the field of view. Current systems thus ignore the user's interaction with off-screen content: virtual objects that the user could quickly access through proprioception without requiring laborious head motions to bring them into focus. In this paper, we present HOOV, a wrist-worn sensing method that allows VR users to interact with objects outside their field of view. Based on the signals of a single wrist-worn inertial sensor, HOOV continuously estimates the user's hand position in 3-space to complement the headset's tracking as the hands leave the tracking range. Our novel data-driven method predicts hand positions and trajectories from just the continuous estimation of hand orientation, which by itself is stable based solely on inertial observations. Our inertial sensing simultaneously detects finger pinching to register off-screen selection events, confirms them using a haptic actuator inside our wrist device, and thus allows users to select, grab, and drop virtual content. We compared HOOV's performance with a camera-based optical motion capture system in two evaluations. In the first, participants interacted based on tracking information from the motion capture system to assess the accuracy of their proprioceptive input, whereas in the second, they interacted based on HOOV's real-time estimations. We found that HOOV's target-agnostic estimations had a mean tracking error of 7.7 cm, which allowed participants to reliably access virtual objects around their body without first bringing them into focus. We demonstrate several applications that leverage the larger input space HOOV opens up for quick proprioceptive interaction, and conclude by discussing the potential of our technique.
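To make the orientation-only position prediction concrete, here is a minimal sketch of a data-driven regressor that maps a window of wrist-orientation quaternions to a 3-D hand position. The GRU architecture, window length, and coordinate frame are assumptions for illustration; HOOV's actual model is not reproduced here.

```python
# Sketch: regress 3-D hand position from orientation estimates only.
import torch
import torch.nn as nn

class OrientationToPosition(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=4, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # x, y, z (assumed headset frame)

    def forward(self, quat_window):       # (batch, time, 4) unit quaternions
        _, h = self.gru(quat_window)
        return self.head(h[-1])           # (batch, 3) hand position estimate

model = OrientationToPosition()
window = torch.randn(8, 100, 4)           # 8 windows of 100 orientation samples
window = window / window.norm(dim=-1, keepdim=True)  # normalize to unit quats
positions = model(window)
print(positions.shape)                    # torch.Size([8, 3])
```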

Towards a Consensus Gesture Set: A Survey of Mid-Air Gestures in HCI for Maximized Agreement Across Domains
Description

Mid-air gesture-based systems are becoming ubiquitous. Many mid-air gestures control different kinds of interactive devices, applications, and systems. They are, however, still targeted at specific devices in specific domains and are not necessarily consistent across domain boundaries. A comprehensive evaluation of the transferability of gesture vocabularies between domains is also lacking. Consequently, interaction designers cannot decide which gestures to use for which domain. In this systematic literature review, we contribute to the future research agenda in this area based on an analysis of 172 papers. As part of our analysis, we clustered gestures according to the dimensions of an existing taxonomy to identify their common characteristics in different domains, and we investigated the extent to which existing mid-air gesture sets are consistent across different domains. We derived a consensus gesture set containing 22 gestures based on agreement rate calculations and considered their transferability across different domains.
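Agreement-rate calculation in gesture elicitation commonly follows the formula of Vatavu and Wobbrock (2015): for each referent, the rate is the share of participant pairs that proposed the same gesture. A small Python sketch, with invented proposals:

```python
# Sketch: agreement rate (AR) for one referent (Vatavu & Wobbrock, 2015).
from collections import Counter

def agreement_rate(proposals):
    """AR(r) = sum over identical-proposal groups Pi of
    |Pi|(|Pi|-1) / (|P|(|P|-1)), for one referent r."""
    p = len(proposals)
    if p < 2:
        return 1.0
    groups = Counter(proposals)
    return sum(k * (k - 1) for k in groups.values()) / (p * (p - 1))

# Ten participants propose gestures for a hypothetical referent "volume up":
proposals = ["swipe_up"] * 6 + ["rotate_cw"] * 3 + ["pinch_out"]
print(f"AR = {agreement_rate(proposals):.3f}")  # AR = 0.400
```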

AO-Finger: Hands-free Fine-grained Finger Gesture Recognition via Acoustic-Optic Sensor Fusing
Description

Finger gesture recognition is gaining research interest for interaction with wearable devices such as smartwatches and AR/VR headsets. In this paper, we propose AO-Finger, a hands-free fine-grained finger gesture recognition system based on acoustic-optic sensor fusion. Specifically, we design a wristband with a modified stethoscope microphone and two high-speed optic motion sensors to capture signals generated by finger movements. We propose a set of natural, inconspicuous, and effortless micro finger gestures that can be reliably detected from the complementary signals of both sensors. We design a multi-modal CNN-Transformer model for fast gesture recognition (flick/pinch/tap) and a finger swipe contact detection model to enable fine-grained swipe gesture tracking. We built a prototype that achieves an overall accuracy of 94.83% in detecting fast gestures and enables fine-grained continuous swipe gesture tracking. AO-Finger is practical for use as a wearable device and ready to be integrated into existing wrist-worn devices such as smartwatches.
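As a rough illustration of a multi-modal CNN-Transformer for this kind of fusion, the sketch below encodes an acoustic window and an optic-motion window with separate 1-D CNNs, concatenates the resulting token sequences, and fuses them with a transformer encoder before a three-class head. All layer sizes, channel counts, and window lengths are assumptions, not the paper's architecture.

```python
# Sketch: two-branch acoustic/optic fusion classifier (flick/pinch/tap).
import torch
import torch.nn as nn

class AcousticOpticFusion(nn.Module):
    def __init__(self, d_model=64, n_classes=3):
        super().__init__()
        # 1-D CNN encoders turn raw sensor windows into token sequences.
        self.acoustic = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=9, stride=4), nn.ReLU())
        self.optic = nn.Sequential(
            nn.Conv1d(2, d_model, kernel_size=5, stride=2), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, mic, motion):
        # mic: (batch, 1, T_audio); motion: (batch, 2, T_motion)
        tokens = torch.cat([self.acoustic(mic).transpose(1, 2),
                            self.optic(motion).transpose(1, 2)], dim=1)
        fused = self.fusion(tokens).mean(dim=1)  # pool over the token axis
        return self.head(fused)                  # class logits

model = AcousticOpticFusion()
logits = model(torch.randn(4, 1, 800), torch.randn(4, 2, 200))
print(logits.shape)  # torch.Size([4, 3])
```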
