Interaction Techniques

Conference
CHI 2025
Cross, Dwell, or Pinch: Designing and Evaluating Around-Device Selection Methods for Unmodified Smartwatches
Abstract

Smartwatches offer powerful features, but their small touchscreens limit the expressiveness of the input that can be achieved. To address this issue, we present, and open-source, the first sonar-based around-device input on an unmodified consumer smartwatch. We achieve this using a fine-grained, one-dimensional sonar-based finger-tracking system. In addition, we use this system to investigate the fundamental issue of how to trigger selections during around-device smartwatch input through two studies. The first examines the methods of double-crossing, dwell, and finger tap in a binary task, while the second considers a subset of these designs in a multi-target task and in the presence and absence of haptic feedback. Results showed double-crossing was optimal for binary tasks, while dwell excelled in multi-target scenarios, and haptic feedback enhanced comfort but not performance. These findings offer design insights for future around-device smartwatch interfaces that can be directly deployed on today’s consumer hardware.

Authors
Jiwan Kim
KAIST, Daejeon, Korea, Republic of
Jiwan Son
KAIST, Daejeon, Korea, Republic of
Ian Oakley
KAIST, Daejeon, Korea, Republic of
DOI

10.1145/3706598.3714308

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714308

Video
FingerGlass: Enhancing Smart Glasses Interaction via Fingerprint Sensing
Abstract

Smart glasses hold immense potential, but existing input methods often hinder their seamless integration into everyday life. Touchpads integrated into smart glasses suffer from limited input space and precision; voice commands raise privacy concerns and are contextually constrained; and vision- or IMU-based gesture recognition faces challenges in computational cost or privacy. We present FingerGlass, an interaction technique for smart glasses that leverages side-mounted fingerprint sensors to capture fingerprint images. With a combined CNN and LSTM network, FingerGlass identifies finger identity and recognizes four types of gestures (nine in total): sliding, rolling, rotating, and tapping. These gestures, coupled with finger identification, are mapped to common smart glasses commands, enabling comprehensive and fluid text entry and application control. A user study reveals that FingerGlass represents a promising step towards fresh, discreet, ergonomic, and efficient input interaction with smart glasses, potentially contributing to their wider adoption and integration into daily life.

Authors
Zhanwei Xu
Tsinghua University, Beijing, China
Haoxiang Pei
Tsinghua University, Beijing, China
Jianjiang Feng
Tsinghua University, Beijing, China
Jie Zhou
Department of Automation, BNRist, Tsinghua University, Beijing, China
DOI

10.1145/3706598.3713929

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713929

Video
MotionBlocks: Modular Geometric Motion Remapping for More Accessible Upper Body Movement in Virtual Reality
Abstract

Movement-based spatial interaction in VR can present significant challenges for people with limited mobility, particularly due to the mismatch between the upper body motion a VR app requires and the user's capabilities. We describe MotionBlocks, an approach which enables 3D spatial input with smaller motions or simpler input devices using modular geometric motion remapping. A formative study identifies common accessibility issues within VR motion design, and informs a design language of VR motions that fall within simple geometric primitives. These 3D primitives enable collapsing spatial or non-spatial input into a normalized input vector, which is then expanded into a second 3D primitive representing larger, more complex 3D motions. An evaluation with people with mobility limitations found that using geometric primitives for highly customized upper body input remapping reduced physical workload, temporal workload, and perceived effort.

Authors
Johann Wentzel
University of Waterloo, Waterloo, Ontario, Canada
Alessandra Luz
University of Waterloo, Waterloo, Ontario, Canada
Martez E. Mott
Microsoft Research, Redmond, Washington, United States
Daniel Vogel
University of Waterloo, Waterloo, Ontario, Canada
DOI

10.1145/3706598.3713837

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713837

Video
Reel Feel: Rich Haptic XR Experiences Using an Active, Worn, Multi-String Device
Abstract

While many haptic systems have been demonstrated for use in virtual and augmented reality, they most often enable a single category of feedback (e.g., kinematic braking, object compliance, textures). Combining prior systems to achieve multi-dimensional effects is unwieldy, expensive, and often physically impossible. We believe this is holding back the ubiquity of rich haptics in both the consumer and industrial AR/VR/XR domains. In this work, we describe Reel Feel, a novel, shoulder-worn haptic system capable of rendering rigid geometry, object-bound haptic animations, impulsive forces, surface compliance, and fine-grained spatial effects, all in one unified, worn device. Because many prior systems are heavy gloves and exoskeletons, our design aimed to minimize the weight on the hands (<10 g), where a system's mass is most felt. Finally, we sought to keep the device practical: self-contained, low-cost, and low enough power to be feasible for consumer adoption with a high degree of mobility. In a user evaluation, our device was rated more highly than a conventional vibrotactile baseline on all qualitative measures (immersion, realism, etc.) and allowed participants to more accurately discern object compliance and fine-grained spatial effects.

Authors
Nathan DeVrio
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chris Harrison
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3706598.3713615

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713615

Video
Over the Mouse: Navigating across the GUI with Finger-Lifting Operation Mouse
Abstract

Modern GUIs often have a hierarchical structure, i.e., a z-axis in the GUI interaction space. However, conventional mice do not support effective navigation along the z-axis, leading to increased physical movement and cognitive load. To address this inefficiency, we present the OtMouse, a novel mouse that supports finger-lifting operations by detecting finger height through proximity sensors embedded beneath the mouse buttons, and the 'Over the Mouse' (OtM) interface, a set of interaction techniques along the z-axis of the GUI interaction space using the OtMouse. We first evaluated the performance of finger-lifting operations (n = 8) with the OtMouse for two- and three-level lifting discrimination tasks. We then conducted a user study (n = 16) comparing the usability of the OtM interface and a traditional mouse interface for three representative tasks: 'Context Switch,' 'Video Preview,' and 'Map Zooming.' The results showed that the OtM interface was both qualitatively and quantitatively superior to the traditional mouse interface in the Context Switch and Video Preview tasks. This research contributes to the ongoing effort to enhance mouse-based GUI navigation experiences.

Authors
YoungIn Kim
School of Computing, KAIST, Daejeon, Korea, Republic of
Yohan Yun
KAIST, Daejeon, Korea, Republic of
Taejun Kim
School of Computing, KAIST, Daejeon, Korea, Republic of
Geehyuk Lee
School of Computing, KAIST, Daejeon, Korea, Republic of
DOI

10.1145/3706598.3713340

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713340

Video
How your Physical Environment Affects Spatial Presence in Virtual Reality
Abstract

Virtual reality (VR) is often used in small physical environments, requiring users to remain aware of their environment to avoid injury or damage. However, this can reduce their spatial presence in VR. Previous work and theory lack an account of how the physical environment (PE) affects spatial presence. To address this gap, we investigated the effect on spatial presence of (1) the degree of spatial knowledge of the PE and (2) knowledge of and (3) collisions with obstacles in the PE. Estimates from Bayesian regression models suggest that limiting spatial knowledge of the PE increases spatial presence initially but amplifies the detrimental effect of obstacle collisions. Repeatedly avoiding obstacles further decreases spatial presence, but removing them from the user's path yields a partial recovery. Our work contributes empirical evidence to theories of spatial presence formation and highlights the need to consider the physical environment when designing for presence in VR.

Authors
Thomas van Gemert
University of Copenhagen, Copenhagen, Denmark
Jarrod Knibbe
The University of Queensland, St Lucia, QLD, Australia
Eduardo Velloso
University of Sydney, Sydney, New South Wales, Australia
DOI

10.1145/3706598.3714114

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714114

Video
InterFACE: Establishing a Facial Action Unit Input Vocabulary for Hands-Free Extended Reality Interactions, From VR Gaming to AR Web Browsing
Abstract

Extended Reality (XR) interactions often rely on spatial hand or controller inputs, necessitating dexterous wrist, hand, and finger movements such as pressing virtual buttons, pinching to select, and performing hand gestures. However, there are scenarios where such dependencies may render XR devices and apps inaccessible to users, from situational or temporary impairments such as encumbrance to physical motor impairments. In this paper, we contribute to a growing literature considering facial input as an alternative. In a user study (N=20), we systematically evaluate the usability of 53 Facial Action Units in VR, deriving a set of optimal (comfort, effort, performance) FAUs for interaction. We then use these facial inputs to drive and evaluate (N=10) two demonstrator apps, VR locomotion and AR web browsing, showcasing how close facial interaction can get to existing baselines and demonstrating that FAUs offer a viable, generalizable input modality for XR devices.

Authors
Graham Wilson
University of Glasgow, Glasgow, United Kingdom
Jamie McCready
University of Glasgow, Glasgow, United Kingdom
Euan Freeman
University of Glasgow, Glasgow, United Kingdom
Florian Mathis
University of St. Gallen, St. Gallen, Switzerland
Harvey Russell
University of Glasgow, Glasgow, United Kingdom
Mark McGill
University of Glasgow, Glasgow, Lanarkshire, United Kingdom
DOI

10.1145/3706598.3713694

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713694
