Touching the Future: Haptics and Gestures

Conference Name
UIST 2023
GestureCanvas: A Programming by Demonstration System for Prototyping Compound Freehand Interaction in VR
Abstract

As the use of hand gestures becomes increasingly prevalent in virtual reality (VR) applications, prototyping Compound Freehand Interactions (CFIs) effectively and efficiently has become a critical need in the design process. A Compound Freehand Interaction (CFI) is a sequence of freehand interactions in which each sub-interaction conditions the next. Despite the need for interactive CFI prototypes in the early design stage, creating them is effortful and remains a challenge for designers, since it requires a highly technical workflow that involves programming the recognizers, system responses, and conditionals for each sub-interaction. To bridge this gap, we present GestureCanvas, a freehand-interaction-based immersive prototyping system that enables a rapid, end-to-end, code-free workflow for designing, testing, refining, and subsequently deploying CFI by leveraging three pillars of interaction models: event-driven state machines, trigger-action authoring, and programming by demonstration. The design of GestureCanvas includes three novel design elements: (i) appropriating the multimodal recording of freehand interaction into a CFI authoring workspace called the Design Canvas, (ii) semi-automatic identification of input trigger logic from demonstration, reducing the manual effort of setting up triggers for each sub-interaction, and (iii) on-the-fly testing for independently validating input conditionals in situ. We validate the workflow enabled by GestureCanvas through an interview study with professional designers and evaluate its usability through a user study with non-experts. Our work lays the foundation for advancing research on immersive prototyping systems, allowing even highly complex gestures to be easily prototyped and tested within VR environments.
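The abstract's core mechanism, a sequence of sub-interactions where each trigger gates the next, maps naturally onto an event-driven state machine combined with trigger-action rules. The sketch below illustrates that general model only; it is not GestureCanvas's implementation, and all names (`SubInteraction`, `on_frame`, the pinch/target fields) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubInteraction:
    name: str
    trigger: Callable[[dict], bool]   # predicate over one hand-tracking frame
    response: Callable[[], None]      # system response fired on the trigger

class CompoundFreehandInteraction:
    """Event-driven state machine: each sub-interaction conditions the next."""
    def __init__(self, steps: list[SubInteraction]):
        self.steps = steps
        self.index = 0  # index of the sub-interaction we are waiting on

    def on_frame(self, frame: dict) -> None:
        if self.index >= len(self.steps):
            return  # the compound interaction has completed
        step = self.steps[self.index]
        if step.trigger(frame):       # trigger-action: condition met ...
            step.response()           # ... so fire the response
            self.index += 1           # and advance to the next sub-interaction

# Hypothetical two-step CFI: pinch to grab, then release over a target to drop.
cfi = CompoundFreehandInteraction([
    SubInteraction("grab", lambda f: f["pinch"], lambda: print("grabbed")),
    SubInteraction("drop",
                   lambda f: not f["pinch"] and f["over_target"],
                   lambda: print("dropped")),
])
cfi.on_frame({"pinch": True, "over_target": False})   # prints "grabbed"
cfi.on_frame({"pinch": False, "over_target": True})   # prints "dropped"
```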

Authors
Anika Sayara
University of British Columbia, Vancouver, British Columbia, Canada
Emily Lynn Chen
University of British Columbia, Vancouver, British Columbia, Canada
Cuong Nguyen
Adobe Research, San Francisco, California, United States
Robert Xiao
University of British Columbia, Vancouver, British Columbia, Canada
Dongwook Yoon
University of British Columbia, Vancouver, British Columbia, Canada
Paper URL

https://doi.org/10.1145/3586183.3606736

Video
Transferable Microgestures Across Hand Posture and Location Constraints: Leveraging the Middle, Ring, and Pinky Fingers
Abstract

Microgestures can enable auxiliary input when the hands are occupied. Although prior work has evaluated the comfort of microgestures performed by the index finger and thumb, these gestures cannot be performed while the fingers are constrained by specific hand locations or postures. As the hand can be freely positioned with no primary posture, partially constrained while forming a pose, or highly constrained while grasping an object at a specific location, we leverage the middle, ring, and pinky fingers to provide additional opportunities for auxiliary input across varying levels of hand constraints. A design space and applications demonstrate how such microgestures can transfer across hand location and posture constraints. An online study evaluated their comfort and effort, and a lab study evaluated their use for task-specific microinteractions. The results revealed that many middle finger microgestures were comfortable, and microgestures performed while forming a pose were preferred over baseline techniques.

Authors
Nikhita Joshi
University of Waterloo, Waterloo, Ontario, Canada
Parastoo Abtahi
Princeton University, Princeton, New Jersey, United States
Raj Sodhi
Facebook, Menlo Park, California, United States
Nitzan Bartov
Meta, New York, New York, United States
Jackson Rushing
Meta, Toronto, Ontario, Canada
Christopher Collins
Meta, Toronto, Ontario, Canada
Daniel Vogel
University of Waterloo, Waterloo, Ontario, Canada
Michael Glueck
Meta, Toronto, Ontario, Canada
Paper URL

https://doi.org/10.1145/3586183.3606713

Video
VoxelHap: A Toolkit for Constructing Proxies Providing Tactile and Kinesthetic Haptic Feedback in Virtual Reality
Abstract

Experiencing virtual environments is often limited to abstract interactions with objects. Physical proxies allow users to feel virtual objects, but are often inaccessible. We present the VoxelHap toolkit, which enables users to construct highly functional proxy objects using Voxels and Plates. Voxels are blocks with special functionalities that form the core of each physical proxy. Plates increase a proxy's haptic resolution, such as its shape, texture, or weight. Beyond providing physical capabilities to realize haptic sensations, VoxelHap utilizes VR illusion techniques to expand its haptic resolution. We evaluated the capabilities of the VoxelHap toolkit through the construction of a range of fully functional proxies across a variety of use cases and applications. In two experiments with 24 participants, we investigated a subset of the constructed proxies, studying how they compare to a traditional VR controller: first, VoxelHap's combined haptic feedback, and second, the trade-offs of using ShapePlates. Our findings show that VoxelHap's proxies outperform traditional controllers and were favored by participants.

Authors
Martin Feick
DFKI, Saarland Informatics Campus, Saarbrücken, Germany
Cihan Biyikli
DFKI, Saarland Informatics Campus, Saarbrücken, Germany
Kiran Gani
Saarland Informatics Campus, Saarbrücken, Germany
Anton Wittig
Saarland Informatics Campus, Saarbrücken, Germany
Anthony Tang
Singapore Management University, Singapore, Singapore
Antonio Krüger
DFKI, Saarland Informatics Campus, Saarbrücken, Germany
Paper URL

https://doi.org/10.1145/3586183.3606722

Video
TactTongue: Prototyping ElectroTactile Stimulations on the Tongue
Abstract

The tongue is a remarkable human organ with a high concentration of taste receptors and an exceptional ability to sense touch. This work uses electro-tactile stimulation to explore the intricate interplay between tactile perception and taste rendering on the tongue. To facilitate this exploration, we utilized a flexible, high-resolution electro-tactile prototyping platform that can be administered in the mouth. We have created a design tool that abstracts users from the low-level stimulation parameters, enabling them to focus on higher-level design objectives. Through this platform, we present the results of three studies. Our first study evaluates the design tool's qualitative and formative aspects. The second study measures the qualitative attributes of the sensations produced by our device, including tactile sensations and taste. In the third study, we demonstrate the ability of our device to sense touch input through the tongue when placed on the hard palate region in the mouth. Finally, we present a range of application demonstrators that span diverse domains, including accessibility, medical surgeries, and extended reality. These demonstrators showcase the versatility and potential of our platform, highlighting its ability to enable researchers and practitioners to explore new ways of leveraging the tongue's unique capabilities. Overall, this work presents new opportunities to deploy tongue interfaces and has broad implications for designing interfaces that incorporate the tongue as a sensory organ.
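The design tool's key idea, hiding low-level stimulation parameters behind higher-level design objectives, can be pictured as a simple mapping layer. The following is an illustrative sketch, not TactTongue's actual API: the parameter names, electrode layout, and value ranges are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    electrodes: list[int]   # which electrodes of the tongue array to drive
    amplitude_ua: float     # pulse amplitude in microamperes (assumed unit)
    frequency_hz: float     # pulse repetition rate
    duration_ms: float      # how long to stimulate

# Hypothetical electrode layout for illustration only.
LAYOUT = {"tip": [0, 1], "center": [3, 4], "back": [6, 7]}

def design_sensation(location: str, intensity: float) -> Stimulus:
    """Map a high-level objective (where, how strong) to raw parameters."""
    return Stimulus(
        electrodes=LAYOUT[location],
        amplitude_ua=50.0 + 100.0 * intensity,  # assumed scaling for 0..1 input
        frequency_hz=50.0,
        duration_ms=200.0,
    )

print(design_sensation("tip", 0.5))  # Stimulus(electrodes=[0, 1], ...)
```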

Authors
Dinmukhammed Mukashev
University of Calgary, Calgary, Alberta, Canada
Nimesha Ranasinghe
University of Maine, Orono, Maine, United States
Aditya Shekhar Nittala
University of Calgary, Calgary, Alberta, Canada
Paper URL

https://doi.org/10.1145/3586183.3606829

Video
Taste Retargeting via Chemical Taste Modulators
Abstract

Prior research has explored modifying taste through electrical stimulation. While promising, such interfaces often only elicit taste changes while in contact with the user's tongue (e.g., cutlery with electrodes), making them incompatible with eating and swallowing real foods. Moreover, most interfaces cannot selectively alter basic tastes, but only the entire flavor profile (e.g., they cannot selectively alter bitterness). To tackle this, we propose taste retargeting, a method of altering taste perception by delivering chemical modulators to the mouth before eating. These modulators temporarily change the response of taste receptors to foods, selectively suppressing or altering basic tastes. Our first study identified six accessible taste modulators that suppress salty, umami, sweet, or bitter tastes and transform sour into sweet. Using these findings, we demonstrated an interactive application of this technique in virtual reality, which we validated in our second study. We found that taste retargeting reduced the flavor mismatch between a food prop and other virtual foods.

Authors
Jas Brooks
University of Chicago, Chicago, Illinois, United States
Noor Amin
University of Chicago, Chicago, Illinois, United States
Pedro Lopes
University of Chicago, Chicago, Illinois, United States
Paper URL

https://doi.org/10.1145/3586183.3606818

Video
Haptic Rendering of Neural Radiance Fields
Abstract

The neural radiance field (NeRF) is attracting increasing attention from researchers in various fields. While NeRF has produced visually plausible results and found potential applications in virtual reality, users are only allowed to rotate the camera to observe the scene represented as NeRF. In this paper, we study haptic interaction with NeRF models to enable the experience of touching objects reconstructed by NeRF. Existing haptic rendering algorithms do not work well for NeRF-represented models because NeRF reconstructions are often noisy. We propose a stochastic haptic rendering method to handle the collision response between the haptic proxy and the NeRF. We validate our method with complex NeRF models, and experimental results show the efficacy of our proposed algorithm.
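The method's motivation is that a single density query into a noisy NeRF is unreliable for collision detection. As a rough illustration of that idea (not the authors' published algorithm), one could average many jittered density samples around the haptic proxy before declaring contact; `query_density` below is a toy stand-in for a trained NeRF's density network, and all thresholds are assumed.

```python
import numpy as np

def query_density(points: np.ndarray) -> np.ndarray:
    """Toy stand-in for a trained NeRF's density network sigma(x)."""
    return np.exp(-np.linalg.norm(points, axis=-1))  # dense near the origin

def stochastic_contact(proxy_pos: np.ndarray, radius: float = 0.01,
                       n_samples: int = 64, threshold: float = 0.5) -> bool:
    """Average density over jittered samples on the proxy sphere's surface,
    so a single noisy density value cannot trigger a spurious collision."""
    rng = np.random.default_rng()
    offsets = rng.normal(size=(n_samples, 3))
    offsets *= radius / np.linalg.norm(offsets, axis=1, keepdims=True)
    mean_sigma = query_density(proxy_pos + offsets).mean()
    return bool(mean_sigma > threshold)  # contact if averaged density is high

print(stochastic_contact(np.array([0.0, 0.0, 0.1])))  # True near the object
```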

Authors
Heng Zhang
Southeast University, Nanjing, China
Lifeng Zhu
Southeast University, Nanjing, Jiangsu, China
Yichen Xiang
Southeast University, Nanjing, China
Jianwei Zheng
Southeast University, Nanjing, China
Aiguo Song
Southeast University, Nanjing, Jiangsu, China
Paper URL

https://doi.org/10.1145/3586183.3606811

Video