Hand Interaction

Conference Name
CHI 2024
EITPose: Wearable and Practical Electrical Impedance Tomography for Continuous Hand Pose Estimation
Abstract

Real-time hand pose estimation has a wide range of applications spanning gaming, robotics, and human-computer interaction. In this paper, we introduce EITPose, a wrist-worn, continuous 3D hand pose estimation approach that uses eight electrodes positioned around the forearm to model its interior impedance distribution during pose articulation. Unlike wrist-worn systems relying on cameras, EITPose has a slim profile (a 12 mm thick sensing strap) and is power-efficient (consuming only 0.3 W), making it an excellent candidate for integration into consumer electronic devices. In a user study involving 22 participants, EITPose achieves a within-session mean per joint positional error of 11.06 mm. Its camera-free design prioritizes user privacy, yet it maintains cross-session and cross-user accuracy levels comparable to camera-based wrist-worn systems, making EITPose a promising technology for practical hand pose estimation.
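
To make the pipeline concrete, here is a minimal Python sketch of one way an 8-electrode EIT frame could be regressed to 3D joint positions. The adjacent-drive measurement count (40 values per frame), the 21-joint skeleton, and the MLP regressor are illustrative assumptions, not the authors' actual model; the final lines show how a mean per joint positional error in millimeters would be computed against ground truth.

```python
# Hypothetical EIT-to-pose regression sketch; NOT the EITPose model.
import torch
import torch.nn as nn

N_ELECTRODES = 8
N_MEASUREMENTS = N_ELECTRODES * (N_ELECTRODES - 3)  # 40 for an adjacent-drive scheme (assumed)
N_JOINTS = 21  # common hand-skeleton joint count (assumed)

class EITPoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MEASUREMENTS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_JOINTS * 3),  # (x, y, z) per joint, in mm
        )

    def forward(self, z):  # z: (batch, 40) calibrated impedance frame
        return self.net(z).view(-1, N_JOINTS, 3)

model = EITPoseRegressor()
frame = torch.randn(1, N_MEASUREMENTS)   # stand-in for one sensor frame
joints = model(frame)                    # (1, 21, 3) predicted positions

gt = torch.randn(1, N_JOINTS, 3)         # stand-in ground-truth pose
mpjpe = (joints - gt).norm(dim=-1).mean()  # mean per joint positional error
```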

Authors
Alexander Kyu
Human-Computer Interaction Institute, Pittsburgh, Pennsylvania, United States
Hongyu Mao
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Junyi Zhu
MIT CSAIL, Cambridge, Massachusetts, United States
Mayank Goel
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Karan Ahuja
Northwestern University, Evanston, Illinois, United States
Paper URL

https://doi.org/10.1145/3613904.3642663

Video
EchoWrist: Continuous Hand Pose Tracking and Hand-Object Interaction Recognition Using Low-Power Active Acoustic Sensing On a Wristband
Abstract

Our hands serve as a fundamental means of interaction with the world around us. Therefore, understanding hand poses and interaction contexts is critical for human-computer interaction (HCI). We present EchoWrist, a low-power wristband that continuously estimates 3D hand poses and recognizes hand-object interactions using active acoustic sensing. EchoWrist is equipped with two speakers emitting inaudible sound waves toward the hand. These sound waves interact with the hand and its surroundings through reflections and diffractions, carrying rich information about the hand's shape and the objects it interacts with. The information captured by the two microphones goes through a deep learning inference system that recovers hand poses and identifies various everyday hand activities. Results from two 12-participant user studies show that EchoWrist is effective and efficient at tracking 3D hand poses and recognizing hand-object interactions. Operating at 57.9 mW, EchoWrist can continuously reconstruct 20 3D hand joints with a mean joint Euclidean distance error (MJEDE) of 4.81 mm and recognize 12 naturalistic hand-object interactions with 97.6% accuracy.
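
As a rough illustration of active acoustic sensing, the Python sketch below generates an inaudible sweep and computes an echo profile by cross-correlating a microphone frame with the transmitted signal; peaks in the profile correspond to reflection paths off the hand. In a full system, sequences of such profiles from both microphones would feed the deep learning stage. The sample rate, sweep band, and frame length are assumptions, not EchoWrist's actual parameters.

```python
# Hypothetical echo-profile computation; not EchoWrist's signal chain.
import numpy as np
from scipy.signal import chirp, correlate

FS = 50_000        # sample rate in Hz (assumed)
DURATION = 0.012   # 12 ms frame (assumed)

t = np.arange(0, DURATION, 1 / FS)
tx = chirp(t, f0=17_000, f1=21_000, t1=DURATION)  # inaudible sweep (assumed band)

def echo_profile(mic: np.ndarray) -> np.ndarray:
    """Cross-correlate a mic frame with the transmitted sweep; peak
    positions encode the delays of reflection paths."""
    return np.abs(correlate(mic, tx, mode="same"))

# Stand-in mic frame: a delayed, attenuated copy of the sweep plus noise.
delay = 120  # samples; arbitrary round-trip delay for illustration
mic = 0.2 * np.roll(tx, delay) + 0.01 * np.random.randn(t.size)
profile = echo_profile(mic)
print(int(np.argmax(profile)))  # index of the dominant echo path
```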

Authors
Chi-Jung Lee
Cornell University, Ithaca, New York, United States
Ruidong Zhang
Cornell University, Ithaca, New York, United States
Devansh Agarwal
Cornell University, Ithaca, New York, United States
Tianhong Catherine Yu
Cornell University, Ithaca, New York, United States
Vipin Gunda
Cornell University, Ithaca, New York, United States
Oliver Lopez
Cornell University, Ithaca, New York, United States
James Kim
Cornell University, Ithaca, New York, United States
Sicheng Yin
Cornell University, Ithaca, New York, United States
Boao Dong
Cornell University, Ithaca, New York, United States
Ke Li
Cornell University, Ithaca, New York, United States
Mose Sakashita
Cornell University, Ithaca, New York, United States
Francois Guimbretiere
Cornell University, Ithaca, New York, United States
Cheng Zhang
Cornell University, Ithaca, New York, United States
Paper URL

https://doi.org/10.1145/3613904.3642910

Video
Single-handed Folding Interactions with a Modified Clamshell Flip Phone
Abstract

We explore and evaluate single-handed folding interactions suitable for “modified clamshell flip phones” with a full-screen touch display that folds in half along the short dimension. Three categories of interactions are identified (only-fold, touch-enhanced fold, and fold-enhanced touch), in which gestures are created using fold direction, fold magnitude, and touch position. A prototype evaluation device is built to resemble clamshell flip phones, but with a modified hinge and spring system that enables folding in both directions. A study investigates performance and preference for 30 fold gestures to discover which are most promising. To demonstrate how folding interactions could be incorporated into flip phone interfaces, applications such as map browsing, text editing, and menu shortcuts are described.
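
One possible reading of the three categories, expressed as a Python sketch: each sensed state combines fold direction, fold magnitude, and touch position, and the category follows from which channels are active. The threshold, field names, and labels are illustrative assumptions, not values from the paper.

```python
# Illustrative fold-gesture categorization; not the paper's recognizer.
from dataclasses import dataclass
from enum import Enum

class FoldDirection(Enum):
    NONE = 0
    INWARD = 1   # screen halves fold toward each other
    OUTWARD = 2  # folding away, enabled by the modified hinge

@dataclass
class DeviceState:
    fold_direction: FoldDirection
    fold_magnitude: float                        # 0.0 flat .. 1.0 fully folded
    touch_position: tuple[float, float] | None   # normalized (x, y), or None

FOLD_THRESHOLD = 0.15  # assumed dead zone for incidental flex

def classify(state: DeviceState) -> str:
    big_fold = state.fold_magnitude > FOLD_THRESHOLD
    small_fold = 0.0 < state.fold_magnitude <= FOLD_THRESHOLD
    touching = state.touch_position is not None
    if big_fold and not touching:
        return "only-fold"            # carried by fold direction + magnitude
    if big_fold and touching:
        return "touch-enhanced fold"  # touch position refines the fold gesture
    if small_fold and touching:
        return "fold-enhanced touch"  # a slight fold modifies a touch gesture
    return "plain touch or idle"

state = DeviceState(FoldDirection.INWARD, 0.4, (0.8, 0.9))
print(classify(state))  # -> "touch-enhanced fold"
```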

Authors
Yen-Ting Yeh
University of Waterloo, Waterloo, Ontario, Canada
Antony Albert Raj Irudayaraj
University of Waterloo, Waterloo, Ontario, Canada
Daniel Vogel
University of Waterloo, Waterloo, Ontario, Canada
Paper URL

https://doi.org/10.1145/3613904.3642554

Video
Emotion Embodied: Unveiling the Expressive Potential of Single-Hand Gestures
Abstract

Hand gestures are widely used in daily life for expressing emotions, yet gesture input is not part of existing emotion tracking systems. To seek a practical and effortless way of using gestures to inform emotions, we explore the relationships between gestural features and commonly experienced emotions by focusing on single-hand gestures that are easy to perform and capture. First, we collected 756 gestures (in photo and video pairs) from 63 participants who expressed different emotions in a survey, and then interviewed 11 of them to understand their gesture-forming rationales. We found that the valence and arousal level of the expressed emotions significantly correlated with participants' finger-pointing direction and their gesture strength, and synthesized four channels through which participants externalized their expressions with gestures. Reflecting on the findings, we discuss how emotions can be characterized and contextualized with gestural cues and implications for designing multimodal emotion tracking systems and beyond.
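
To illustrate the style of analysis, the Python sketch below runs a rank correlation between affect ratings and gestural features on synthetic stand-in data. The variable coding (pointing direction as an angle, strength as a unit scale) is an assumption, and the printed numbers are meaningless beyond demonstrating the method.

```python
# Illustrative correlation analysis on synthetic data; not the study's data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 756  # matches the number of collected gestures

valence = rng.uniform(-1, 1, n)     # synthetic self-reports
arousal = rng.uniform(0, 1, n)
pointing_angle = 0.6 * valence + rng.normal(0, 0.3, n)    # injected link
gesture_strength = 0.5 * arousal + rng.normal(0, 0.3, n)  # injected link

rho_v, p_v = spearmanr(valence, pointing_angle)
rho_a, p_a = spearmanr(arousal, gesture_strength)
print(f"valence vs. pointing direction: rho={rho_v:.2f}, p={p_v:.3g}")
print(f"arousal vs. gesture strength:   rho={rho_a:.2f}, p={p_a:.3g}")
```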

Award
Honorable Mention
Authors
Yuhan Luo
City University of Hong Kong, Hong Kong, China
Junnan Yu
The Hong Kong Polytechnic University, Hong Kong, China
Minhui Liang
City University of Hong Kong, Hong Kong, China
Yichen Wan
The Hong Kong Polytechnic University, Hong Kong, China
Kening Zhu
City University of Hong Kong, Hong Kong, China
Shannon Sie Santosa
City University of Hong Kong, Kowloon Tong, Hong Kong
Paper URL

https://doi.org/10.1145/3613904.3642255

Video
Hand Gesture Recognition for Blind Users by Tracking 3D Gesture Trajectory
Abstract

Hand gestures provide an alternate interaction modality for blind users and can be supported using commodity smartwatches without requiring specialized sensors. The enabling technology is an accurate gesture recognition algorithm, but almost all algorithms are designed for sighted users. Our study shows that blind users' gestures differ considerably from those of sighted users, rendering current recognition algorithms unsuitable. Blind users' gestures have high inter-user variance, making it difficult to learn gesture patterns without large-scale training data. Instead, we design a gesture recognition algorithm that works on a 3D representation of the gesture trajectory, capturing motion in free space. Our insight is to extract a micro-movement in the gesture that is user-invariant and use this micro-movement for gesture classification. To this end, we develop an ensemble classifier that combines image classification with geometric properties of the gesture. Our evaluation demonstrates 92% classification accuracy, surpassing the next-best state of the art, which achieves 82%.
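
The ensemble idea can be sketched as late fusion of two predictors: a small CNN over the gesture trajectory rasterized to an image, and an MLP over simple geometric descriptors. This is an illustrative approximation; the paper's micro-movement extraction and actual feature set are not reproduced here, and every architectural choice below is an assumption.

```python
# Illustrative trajectory-ensemble classifier; not the paper's algorithm.
import numpy as np
import torch
import torch.nn as nn

N_CLASSES = 8  # assumed gesture vocabulary size

def rasterize(traj: np.ndarray, size: int = 32) -> torch.Tensor:
    """Project an (N, 3) trajectory onto the x-y plane as a binary image."""
    xy = traj[:, :2]
    xy = (xy - xy.min(0)) / (xy.max(0) - xy.min(0) + 1e-8)
    img = np.zeros((size, size), dtype=np.float32)
    idx = np.clip((xy * (size - 1)).astype(int), 0, size - 1)
    img[idx[:, 1], idx[:, 0]] = 1.0
    return torch.from_numpy(img)[None, None]  # (1, 1, size, size)

def geometric_features(traj: np.ndarray) -> torch.Tensor:
    """Shape descriptors intended to be user-invariant: path length,
    bounding-box aspect ratio, and net drift relative to path length."""
    seg = np.diff(traj, axis=0)
    path_len = np.linalg.norm(seg, axis=1).sum()
    extent = traj.max(0) - traj.min(0) + 1e-8
    drift = np.linalg.norm(traj[-1] - traj[0])
    feats = [path_len, extent[0] / extent[1], drift / (path_len + 1e-8)]
    return torch.tensor(feats, dtype=torch.float32)[None]

cnn = nn.Sequential(  # image branch
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
    nn.Flatten(), nn.Linear(8 * 16, N_CLASSES),
)
geo = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, N_CLASSES))

traj = np.cumsum(np.random.randn(100, 3), axis=0)  # stand-in trajectory
logits = cnn(rasterize(traj)) + geo(geometric_features(traj))  # late fusion
pred = logits.argmax(dim=1)  # predicted gesture class
```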

Authors
Prerna Khanna
Stony Brook University, Stony Brook, New York, United States
IV Ramakrishnan
Stony Brook University, Stony Brook, New York, United States
Shubham Jain
Stony Brook University, Stony Brook, New York, United States
Xiaojun Bi
Stony Brook University, Stony Brook, New York, United States
Aruna Balasubramanian
Stony Brook University, Stony Brook, New York, United States
Paper URL

https://doi.org/10.1145/3613904.3642602

Video