Sensing

Conference Name
CHI 2022
SonarID: Using Sonar to Identify Fingers on a Smartwatch
Abstract

The diminutive size of wrist wearables has prompted the design of many novel input techniques to increase expressivity. Finger identification, or assigning different functionality to different fingers, has been frequently proposed. However, while the value of the technique seems clear, its implementation remains challenging, often relying on external devices (e.g., worn magnets) or explicit instructions. Addressing these limitations, this paper explores a novel approach to natural and unencumbered finger identification on an unmodified smartwatch: sonar. To do this, we adapt an existing finger-tracking smartphone sonar implementation; rather than extracting finger motion, we process raw sonar fingerprints representing the complete sonar scene recorded during a touch. We capture data from 16 participants operating a smartwatch and use their sonar fingerprints to train a deep learning recognizer that identifies taps by the thumb, index, and middle fingers with an accuracy of up to 93.7%, sufficient to support meaningful application development.
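As a loose illustration of the fingerprint-plus-recognizer idea (a minimal sketch, not the authors' architecture: the 64x64 input shape, layer sizes, and PyTorch framing are all assumptions), a compact CNN mapping a per-touch sonar fingerprint to one of three finger classes could look like this:

```python
# Sketch only: a small CNN classifying per-touch sonar fingerprints
# (assumed here to be 1-channel 64x64 range-time patches) into
# thumb / index / middle taps. Shapes and layers are illustrative.
import torch
import torch.nn as nn

class SonarFingerprintNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SonarFingerprintNet()
logits = model(torch.randn(8, 1, 64, 64))  # a batch of 8 touches
finger = logits.argmax(dim=1)              # 0=thumb, 1=index, 2=middle
```

Training such a network on labeled taps with softmax cross-entropy is the standard recipe; the 93.7% figure comes from the paper's own recognizer and data, not from this sketch.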

Authors
Jiwan Kim
UNIST, Ulsan, Republic of Korea
Ian Oakley
UNIST, Ulsan, Republic of Korea
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501935

SilentSpeller: Towards mobile, hands-free, silent speech text entry using electropalatography
Abstract

Speech is inappropriate in many situations, limiting when voice control can be used. Most unvoiced speech text entry systems cannot be used while on the go due to movement artifacts. Using a dental retainer with capacitive touch sensors, SilentSpeller tracks tongue movement, enabling users to type by spelling words without voicing. SilentSpeller achieves an average 97% character accuracy in offline isolated word testing on a 1164-word dictionary. Walking has little effect on accuracy: average offline character accuracy was roughly equivalent on 107 phrases entered while walking (97.5%) or seated (96.5%). To demonstrate extensibility, the system was tested on 100 unseen words, yielding an average 94% accuracy. Live text entry speeds for seven participants averaged 37 words per minute at 87% accuracy. Comparing silent spelling to current practice suggests that SilentSpeller may be a viable alternative for silent mobile text entry.
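Because input is spelled letter by letter against a fixed 1164-word dictionary, one simple way to snap a noisy recognized letter sequence onto a legal word is nearest-neighbor matching by edit distance (an assumption for illustration; the paper's actual decoder is not reproduced here):

```python
# Sketch only: snap a noisy spelled letter sequence to the closest
# dictionary word by Levenshtein distance. This shows the dictionary
# idea, not SilentSpeller's real decoder.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming, O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def snap_to_dictionary(noisy: str, dictionary: list[str]) -> str:
    """Return the dictionary word with minimum edit distance."""
    return min(dictionary, key=lambda w: edit_distance(noisy, w))

print(snap_to_dictionary("silemt", ["silent", "speller", "mobile"]))
# -> "silent"
```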

Authors
Naoki Kimura
The University of Tokyo, Bunkyo, Tokyo, Japan
Tan Gemicioglu
Georgia Institute of Technology, Atlanta, Georgia, United States
Jonathan Womack
Georgia Institute of Technology, Atlanta, Georgia, United States
Yuhui Zhao
Georgia Institute of Technology, Atlanta, Georgia, United States
Richard Li
University of Washington, Seattle, Washington, United States
Abdelkareem Bedri
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Zixiong Su
The University of Tokyo, Tokyo, Japan
Alex Olwal
Google Inc., Mountain View, California, United States
Jun Rekimoto
The University of Tokyo, Tokyo, Japan
Thad Starner
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502015

ReflecTouch: Detecting Grasp Posture of Smartphone Using Corneal Reflection Images
Abstract

By sensing how a user is holding a smartphone, adaptive user interfaces become possible, such as those that automatically adjust the displayed content and the position of graphical user interface (GUI) components to match the current grip. We propose ReflecTouch, a novel method for detecting how a smartphone is being held by capturing images of the smartphone screen reflected on the cornea with the built-in front camera. In these images, the areas where the user places their fingers on the screen appear as shadows, which makes it possible to estimate the grasp posture. Since most smartphones have a front camera, this method can be used regardless of the device model, and no additional sensors or hardware are required. We conducted data collection experiments to verify the classification accuracy of the proposed method for six different grasp postures; the accuracy was 85%.
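A rough front-end sketch of this kind of pipeline might look as follows (assumptions throughout: OpenCV's stock Haar eye cascade as a stand-in eye detector, a 64x64 crop, and a placeholder classifier; none of this is the authors' implementation):

```python
# Sketch only: find an eye in a front-camera frame, crop the corneal
# region, and hand the crop to a six-way grasp-posture classifier.
# The Haar cascade and crop size are stand-in assumptions.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def corneal_crop(frame_bgr, size=(64, 64)):
    """Grayscale crop around the first detected eye, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                        minNeighbors=5)
    if len(eyes) == 0:
        return None
    x, y, w, h = eyes[0]
    return cv2.resize(gray[y:y + h, x:x + w], size)

def classify_grasp(crop):
    """Placeholder: a trained six-way model (the paper reports 85%
    accuracy) would map finger shadows in the reflection to a posture."""
    raise NotImplementedError("plug in a trained classifier here")
```

The informative signal lives inside the crop: fingers occlude parts of the screen, so they appear as dark regions in the corneal reflection of the display, which is what a trained classifier would learn to separate.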

Authors
Xiang Zhang
Keio University, Yokohama City, Japan
Kaori Ikematsu
Yahoo Japan Corporation, Tokyo, Japan
Kunihiro Kato
Tokyo University of Technology, Tokyo, Japan
Yuta Sugiura
Keio University, Yokohama City, Japan
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517440

FaceOri: Tracking Head Position and Orientation Using Ultrasonic Ranging on Earphones
Abstract

Face orientation can often indicate users' intended interaction target. In this paper, we propose FaceOri, a novel face tracking technique based on acoustic ranging using earphones. FaceOri leverages the speaker on a commodity device to emit an ultrasonic chirp, which is picked up by the set of microphones on the user's earphones and then processed to calculate the distance from each microphone to the device. These measurements are used to derive the user's face orientation and distance with respect to the device. We conduct a ground truth comparison and user study to evaluate FaceOri's performance. The results show that the system can determine whether the user is oriented toward the device with 93.5% accuracy within a 1.5 m range. Furthermore, FaceOri can continuously track the user's head orientation with a median absolute error of 10.9 mm in distance, 3.7° in yaw, and 5.8° in pitch. FaceOri allows for convenient hands-free control of devices and enables more intelligent, context-aware interaction.
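The geometry behind the orientation estimate can be sketched in a few lines (a minimal sketch under assumed values: 48 kHz sampling, a 0.16 m mic baseline, and a far-field approximation; FaceOri's actual signal processing is more involved):

```python
# Sketch only: distance per microphone from the chirp's time of flight,
# then yaw from the left/right distance difference. Sample rate,
# baseline, and the far-field approximation are assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C
FS = 48_000             # sample rate in Hz (assumed)
MIC_BASELINE = 0.16     # meters between the two earphone mics (assumed)

def mic_distance(recording: np.ndarray, chirp: np.ndarray) -> float:
    """Distance in meters from the cross-correlation peak (time of flight)."""
    corr = np.correlate(recording, chirp, mode="valid")
    delay_s = int(np.argmax(np.abs(corr))) / FS
    return delay_s * SPEED_OF_SOUND

def head_yaw_deg(d_left: float, d_right: float) -> float:
    """Far-field yaw estimate; positive when the left mic is farther."""
    s = np.clip((d_left - d_right) / MIC_BASELINE, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

print(head_yaw_deg(1.502, 1.498))  # ~1.4 degrees for a 4 mm difference
```

The worked example shows why fine ranging matters: at a 0.16 m baseline, a few millimeters of left/right distance difference already correspond to only a degree or two of yaw.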

Authors
Yuntao Wang
Tsinghua University, Beijing, China
Jiexin Ding
Tsinghua University, Beijing, China
Ishan Chatterjee
University of Washington, Seattle, Washington, United States
Farshid Salemi Parizi
University of Washington, Seattle, Washington, United States
Yuzhou Zhuang
Tsinghua University, Beijing, China
Yukang Yan
Tsinghua University, Beijing, China
Shwetak Patel
University of Washington, Seattle, Washington, United States
Yuanchun Shi
Tsinghua University, Beijing, China
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517698

More Errors vs. Longer Commands: The Effects of Repetition and Reduced Expressiveness on Input Interpretation Error, Learning, and User Preference
Abstract

Many interactive systems are susceptible to misinterpreting the user's input actions or gestures. Interpretation errors are common when systems gather a series of signals from the user and then attempt to infer the user's intention from those signals (e.g., gesture identification from a touchscreen, camera, or body-worn electrodes), and previous work has shown that interpretation error can cause significant problems for learning new input commands. Error-reduction strategies from telecommunications, such as repeating a command or increasing the length of the input while reducing its expressiveness, could improve these input mechanisms, but little is known about whether longer command sequences cause problems for users (e.g., increased effort or reduced learning). We tested performance, learning, and perceived effort in a crowdsourced study where participants learned and used input mechanisms with different error-reduction techniques. We found that error-reduction techniques are feasible, can outperform error-prone ordinary input, and do not negatively affect learning or perceived effort.
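The repetition strategy borrowed from telecommunications is easy to quantify: repeat each command k times and take a majority vote, and an independent per-attempt error rate p shrinks sharply. The arithmetic below is illustrative, not the study's data:

```python
# Sketch only: majority voting over k repeated command attempts.
# Assumes independent errors with per-attempt error rate p; the
# figures are illustrative, not the paper's results.
from math import comb

def majority_vote_error(p: float, k: int) -> float:
    """P(the majority of k independent attempts is wrong), k odd."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

for p in (0.10, 0.20):
    print(f"p={p:.2f}: 1 try -> {p:.3f}, "
          f"3 tries -> {majority_vote_error(p, 3):.3f}, "
          f"5 tries -> {majority_vote_error(p, 5):.3f}")
# p=0.10: 1 try -> 0.100, 3 tries -> 0.028, 5 tries -> 0.009
# p=0.20: 1 try -> 0.200, 3 tries -> 0.104, 5 tries -> 0.058
```

The trade-off the paper probes is the cost side of this: each repetition (or each extra low-expressiveness symbol) lengthens the input, so the open question is whether users tolerate the added effort, and the study finds that they do.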

Authors
Kevin C. Lam
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Carl Gutwin
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Madison Klarkowski
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Andy Cockburn
University of Canterbury, Christchurch, New Zealand
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502079
