Eye, tongue & muscle

Paper session

Conference Name
CHI 2020
Dynamic Motor Skill Synthesis with Human-Machine Mutual Actuation
Abstract

This paper presents an approach to coupling robotic capability with human ability in dynamic motor skills, which we call "Human-Machine Mutual Actuation" (HMMA). We focus specifically on throwing motions and propose a method for controlling the release timing computationally. Our system realizes the HMMA concept through a handheld robotic device that acts as a release controller. We conducted user studies to validate the feasibility of the concept and to identify the technical issues that remain to be tackled. We found that the system successfully adapts throws to the target while exploiting human ability. These empirical results suggest that robotic capability can be embedded into users' motions without diminishing their sense of control. The user studies also revealed several open issues for further research on HMMA.
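
The abstract does not spell out how the release timing is computed, but the core idea (monitoring the throwing motion and triggering release at a computed moment) can be made concrete with a minimal, purely hypothetical sketch. The drag-free ballistic model, the hand-state format, and every name below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: trigger the release of a thrown projectile when a
# simple ballistic prediction says it will land on the target. All names,
# units, and the drag-free model are illustrative assumptions.

import math

G = 9.81  # gravitational acceleration (m/s^2)

def predicted_landing_x(x, y, vx, vy):
    """Horizontal landing point of a projectile released at (x, y) with
    velocity (vx, vy), ignoring air drag; y = 0 is the ground plane."""
    # Solve y + vy*t - 0.5*G*t^2 = 0 for the positive flight time t.
    disc = vy * vy + 2.0 * G * y
    if disc < 0.0:
        return None
    t_flight = (vy + math.sqrt(disc)) / G
    return x + vx * t_flight

def should_release(hand_state, target_x, tolerance):
    """True at the moment the predicted landing point is within
    `tolerance` metres of the target."""
    landing = predicted_landing_x(*hand_state)
    return landing is not None and abs(landing - target_x) < tolerance

# Example: (x, y, vx, vy) samples streamed from a motion tracker
# during the forward swing of a throw.
swing = [(0.00, 1.50, 2.0, 1.0),
         (0.15, 1.65, 4.2, 3.2),
         (0.20, 1.70, 5.0, 4.0)]
for sample in swing:
    if should_release(sample, target_x=4.0, tolerance=0.5):
        print("release at hand state", sample)
        break
```

In an actual device, such a predicate would run inside a high-rate sensing loop and drive the release actuator while the human supplies the swing itself.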

Award
Honorable Mention
Keywords
Robotic device
Motor skill
Motion sensing
Human augmentation
Human-machine mutual actuation
Authors
Azumi Maekawa
The University of Tokyo, Tokyo, Japan
Seito Matsubara
The University of Tokyo, Tokyo, Japan
Sohei Wakisaka
The University of Tokyo, Tokyo, Japan
Daisuke Uriu
The University of Tokyo, Tokyo, Japan
Atsushi Hiyama
The University of Tokyo, Tokyo, Japan
Masahiko Inami
The University of Tokyo, Tokyo, Japan
DOI

10.1145/3313831.3376705

Paper URL

https://doi.org/10.1145/3313831.3376705

Video
Reading with the Tongue: Individual Differences Affect the Perception of Ambiguous Stimuli with the BrainPort
Abstract

There is increasing interest in non-visual interfaces for HCI that take advantage of the information-processing capabilities of other sensory modalities. The BrainPort is a vision-to-tactile sensory substitution device that conveys information through electro-stimulation of the tongue. As the tongue is a horizontal surface, it makes an interesting platform for studying the brain's representation of space. But which way is up on the tongue? We presented participants with perceptually ambiguous stimuli and measured how often different perspectives were adopted, and whether camera orientation and gender had an effect. Additionally, we examined whether personality (trait extraversion and openness) could predict the perspective taken. We found that self-centered perspectives were predominantly adopted, and that trait openness may predict perspective. This research demonstrates how individual differences can affect the usability of sensory substitution devices, and highlights the need for flexible and customisable interfaces.

Keywords
Sensory substitution
tactile interfaces
individual differences in computing
user preferences
Authors
Mike L. Richardson
University of Bath, Bath, United Kingdom
Tayfun Lloyd-Esenkaya
University of Bath, Bath, United Kingdom
Karin Petrini
University of Bath, Bath, United Kingdom
Michael J. Proulx
University of Bath, Bath, United Kingdom
DOI

10.1145/3313831.3376184

Paper URL

https://doi.org/10.1145/3313831.3376184

How We Type: Eye and Finger Movement Strategies in Mobile Typing
Abstract

Relatively little is known about eye and finger movement in typing with mobile devices. Most prior studies of mobile typing rely on log data, while data on finger and eye movements in typing come from studies with physical keyboards. This paper presents new findings from a transcription task with mobile touchscreen devices. Movement strategies were found to emerge in response to the sharing of visual attention: attention is needed both for guiding finger movements and for detecting typing errors. In contrast to typing on physical keyboards, visual attention is kept mostly on the virtual keyboard, and glances at the text display are associated with performance. When typing with two fingers, users make more errors but detect and correct them more quickly. This explains part of the known superiority of two-thumb typing over one-finger typing. We release an extensive dataset on everyday typing on smartphones.

Keywords
text input
mobile device
eye-hand coordination
eye movement
finger movement
Authors
Xinhui Jiang
Kochi University of Technology, Kami, Japan
Yang Li
Kochi University of Technology, Kami, Japan
Jussi P.P. Jokinen
Aalto University, Helsinki, Finland
Viet Ba Hirvola
Aalto University, Helsinki, Finland
Antti Oulasvirta
Aalto University; Finnish Center for Artificial Intelligence, Helsinki, Finland
Xiangshi Ren
Kochi University of Technology, Kami, Japan
DOI

10.1145/3313831.3376711

Paper URL

https://doi.org/10.1145/3313831.3376711

Video
The Low/High Index of Pupillary Activity
Abstract

A novel eye-tracked measure of pupil diameter oscillation is derived as an indicator of cognitive load. The new metric, termed the Low/High Index of Pupillary Activity (LHIPA), is able to discriminate cognitive load (vis-à-vis task difficulty) in several experiments where the Index of Pupillary Activity (IPA) fails to do so. The rationale for the LHIPA is tied to the functioning of the human autonomic nervous system, yielding a hybrid measure based on the ratio of low/high frequencies of pupil oscillation. The paper's contribution is twofold. First, full documentation is provided for the calculation of the LHIPA. As with the IPA, researchers can apply this metric to their own experiments where a measure of cognitive load is of interest. Second, the robustness of the LHIPA is shown in the analysis of three experiments: a restrictive fixed-gaze number-counting task, a less restrictive fixed-gaze n-back task, and an applied eye-typing task.
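
As a rough illustration of the idea behind the metric (a ratio of low- to high-frequency pupil oscillation obtained by wavelet decomposition of the pupil diameter signal), here is a hedged Python sketch. The wavelet choice, band selection, maxima counting, and normalization below are assumptions for demonstration; the paper itself documents the exact calculation.

```python
# Illustrative sketch, not the published formula: estimate a low/high
# frequency ratio of pupil oscillation via a discrete wavelet transform.

import numpy as np
import pywt

def modulus_maxima_count(coeffs):
    """Count local maxima of the absolute coefficient sequence."""
    a = np.abs(np.asarray(coeffs))
    return int(np.sum((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])))

def lhipa_like(pupil_diameter, sampling_hz, wavelet="sym16"):
    """Ratio of low- to high-frequency modulus-maxima rates of a
    pupil diameter signal (a sketch in the spirit of the LHIPA)."""
    signal = np.asarray(pupil_diameter, dtype=float)
    level = pywt.dwt_max_level(len(signal), pywt.Wavelet(wavelet))
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    lf = modulus_maxima_count(coeffs[1])    # coarsest detail band (low freq.)
    hf = modulus_maxima_count(coeffs[-1])   # finest detail band (high freq.)
    duration = len(signal) / sampling_hz
    return (lf / duration) / max(hf / duration, 1e-9)
```

Intuitively, effortful processing should shift pupil oscillation energy between the low- and high-frequency bands, which is what such a ratio is designed to pick up.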

Keywords
pupillometry
eye tracking
task difficulty
Authors
Andrew T. Duchowski
Clemson University, Clemson, SC, USA
Krzysztof Krejtz
SWPS University of Social Sciences and Humanities, Warsaw, Poland
Nina A. Gehrer
University of Tübingen, Tübingen, Germany
Tanya Bafna
Technical University of Denmark, Copenhagen, Denmark
Per Bækgaard
Technical University of Denmark, Kgs. Lyngby, Denmark
DOI

10.1145/3313831.3376394

Paper URL

https://doi.org/10.1145/3313831.3376394

Robustness of Eye Movement Biometrics Against Varying Stimuli and Varying Trajectory Length
Abstract

Recent results suggest that biometric identification based on human eye movement characteristics can be used for authentication. In this paper, we present three new methods and benchmark them against the state of the art. The best of our new methods improves the state-of-the-art performance by 5.2 percentage points. Furthermore, we investigate factors that affect the robustness of the recognition rate of different classifiers on gaze trajectories, such as the type of stimulus and the length of the tracked trajectory. We find that the state-of-the-art method only works well when the stimulus used for testing is the same as that used for training. By contrast, our novel method more than doubles the identification accuracy in these transfer cases. Finally, we find that 86.7% accuracy can be achieved with only 90 seconds of eye tracking data.
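
The abstract does not fix a particular feature set or classifier, but a generic eye-movement biometrics pipeline of this kind can be sketched as follows: split each gaze trajectory into windows, extract statistical features, and train a standard classifier to identify the person. The speed threshold, feature set, and classifier below are placeholders, not the methods benchmarked in the paper.

```python
# Placeholder sketch of an eye-movement biometrics pipeline: summary
# features per gaze-trajectory window plus a standard classifier.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trajectory_features(xy, sampling_hz=250.0):
    """Velocity-based summary features for one trajectory window
    (an N x 2 array of gaze coordinates)."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * sampling_hz
    slow = speed < 50.0  # crude fixation/saccade split (assumed units/s)
    return np.array([
        speed.mean(), speed.std(), speed.max(),
        slow.mean(),                   # fraction of fixation-like samples
        np.abs(np.diff(speed)).mean()  # acceleration-magnitude proxy
    ])

def fit_identifier(windows, person_ids):
    """Train a person identifier on labelled trajectory windows."""
    X = np.stack([trajectory_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, person_ids)
    return clf  # clf.predict(...) yields one person id per window
```

The paper's robustness questions map directly onto such a pipeline: train on windows from one stimulus type and test on another (stimulus transfer), or vary the window length (trajectory length).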

Keywords
eye tracking
gaze detection
eye movement biometrics
Authors
Christoph Schröder
University of Bremen, Bremen, Germany
Sahar Mahdie Klim Al Zaidawi
University of Bremen, Bremen, Germany
Martin H.U. Prinzler
University of Bremen, Bremen, Germany
Sebastian Maneth
University of Bremen, Bremen, Germany
Gabriel Zachmann
University of Bremen, Bremen, Germany
DOI

10.1145/3313831.3376534

Paper URL

https://doi.org/10.1145/3313831.3376534