Dynamic Motor Skill Synthesis with Human-Machine Mutual Actuation
Description

This paper presents an approach for coupling robotic capability with human ability in dynamic motor skills, called "Human-Machine Mutual Actuation (HMMA)." We focus specifically on throwing motions and propose a method to computationally control the release timing. Our system realizes the HMMA concept through a handheld robotic device that acts as a release controller. We conducted user studies to validate the feasibility of the concept and to clarify the technical issues that remain to be addressed. The system successfully performed target-directed throws while exploiting human ability. These experiments suggest that robotic capability can be embedded into users' motions without compromising their sense of control. The user studies also revealed several issues to be tackled in further HMMA research.
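The abstract does not spell out the release-timing computation, but the core idea of computationally timed release can be illustrated with a simple ballistic projection: release the projectile at the instant its predicted landing point coincides with the target. The sketch below is ours, not the authors' implementation; it assumes planar, drag-free flight and that the device supplies position and velocity estimates, and the tolerance value is hypothetical.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def projected_landing_x(x, y, vx, vy):
    """Horizontal landing point of a ballistic projectile released now
    at (x, y) with velocity (vx, vy); the ground is y = 0."""
    # Positive root of y + vy*t - 0.5*G*t^2 = 0 gives the flight time.
    t = (vy + math.sqrt(vy * vy + 2.0 * G * y)) / G
    return x + vx * t

def should_release(x, y, vx, vy, target_x, tol=0.05):
    """Trigger the release actuator once the predicted landing point
    falls within `tol` meters of the target."""
    return abs(projected_landing_x(x, y, vx, vy) - target_x) <= tol
```

In a real device, the state (x, y, vx, vy) would be re-estimated at every control tick during the swing, and the gripper actuated on the first tick for which should_release returns True.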

Reading with the Tongue: Individual Differences Affect the Perception of Ambiguous Stimuli with the BrainPort
Description

There is increasing interest in non-visual interfaces for HCI that take advantage of the information-processing capability of other sensory modalities. The BrainPort is a vision-to-tactile sensory substitution device that conveys information through electro-stimulation of the tongue. As the tongue is a horizontal surface, it makes an interesting platform for studying the brain's representation of space. But which way is up on the tongue? We presented participants with perceptually ambiguous stimuli and measured how often different perspectives were adopted, and whether camera orientation and gender had an effect. We also examined whether personality (trait extraversion and openness) could predict the perspective taken. We found that self-centered perspectives were predominantly adopted, and that trait openness may predict perspective. This research demonstrates how individual differences can affect the usability of sensory substitution devices, and highlights the need for flexible and customisable interfaces.
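One plausible form of the perspective analysis described above (ours, not the authors') is a logistic regression predicting whether a participant predominantly adopts a self-centered perspective from trait scores and the experimental conditions. The file and column names below are assumptions.

```python
# Hypothetical analysis sketch; 'perspectives.csv' and its columns are
# assumed, one row per participant.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("perspectives.csv")
# self_centered: 1 if a self-centered perspective predominated;
# openness, extraversion: trait scores; camera: orientation condition.
model = smf.logit(
    "self_centered ~ openness + extraversion + C(camera) + C(gender)",
    data=df,
).fit()
print(model.summary())
```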

How We Type: Eye and Finger Movement Strategies in Mobile Typing
Description

Relatively little is known about eye and finger movements in typing with mobile devices. Most prior studies of mobile typing rely on log data, while data on finger and eye movements in typing come from studies with physical keyboards. This paper presents new findings from a transcription task with mobile touchscreen devices. Movement strategies were found to emerge in response to the sharing of visual attention: attention is needed both for guiding finger movements and for detecting typing errors. In contrast to typing on physical keyboards, visual attention is kept mostly on the virtual keyboard, and glances at the text display are associated with performance. When typing with two fingers, users make more errors but detect and correct them more quickly. This explains part of the known superiority of two-thumb typing over one-finger typing. We release an extensive dataset on everyday typing on smartphones.
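Two of the measures behind these findings, typing speed and error-correction latency, can be derived from a keystroke log along the following lines. This is a minimal sketch; the (timestamp_ms, key) log format is an assumption, not the schema of the released dataset.

```python
def inter_key_intervals(log):
    """Mean inter-key interval in ms for a log of (timestamp_ms, key) pairs."""
    times = [t for t, _ in log]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps) if gaps else 0.0

def correction_latencies(log):
    """Intervals between a keystroke and an immediately following
    backspace -- a crude proxy for how quickly errors are noticed."""
    return [t - log[i - 1][0]
            for i, (t, key) in enumerate(log)
            if key == "BACKSPACE" and i > 0]
```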

The Low/High Index of Pupillary Activity
Description

A novel eye-tracked measure of pupil-diameter oscillation is derived as an indicator of cognitive load. The new metric, termed the Low/High Index of Pupillary Activity (LHIPA), discriminates cognitive load (vis-à-vis task difficulty) in several experiments where the Index of Pupillary Activity (IPA) fails to do so. The rationale for the LHIPA is tied to the functioning of the human autonomic nervous system, yielding a hybrid measure based on the ratio of low to high frequencies of pupil oscillation. The paper's contribution is twofold. First, full documentation is provided for the calculation of the LHIPA; as with the IPA, researchers can apply this metric to their own experiments wherever a measure of cognitive load is of interest. Second, the robustness of the LHIPA is shown in the analysis of three experiments: a restrictive fixed-gaze number-counting task, a less restrictive fixed-gaze n-back task, and an applied eye-typing task.
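The published LHIPA computation (documented in the paper) is wavelet-based; purely to convey the low/high frequency ratio idea, the sketch below substitutes a simple FFT band-power ratio. The cutoff frequency and function name are assumptions, not the authors' algorithm.

```python
import numpy as np

def low_high_ratio(pupil_mm, fs, split_hz=0.5):
    """Ratio of pupil-oscillation power below `split_hz` (Hz) to the
    power above it, from a pupil-diameter signal sampled at `fs` Hz."""
    x = np.asarray(pupil_mm, dtype=float)
    x = x - x.mean()                        # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    low = power[(freqs > 0) & (freqs <= split_hz)].sum()
    high = power[freqs > split_hz].sum()
    return low / high
```

Under the stated rationale, cognitive load shifts this ratio; the wavelet-based LHIPA builds on the same principle.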

Robustness of Eye Movement Biometrics Against Varying Stimuli and Varying Trajectory Length
Description

Recent results suggest that biometric identification based on human eye movement characteristics can be used for authentication. In this paper, we present three new methods and benchmark them against the state of the art. The best of our new methods improves state-of-the-art performance by 5.2 percentage points. Furthermore, we investigate factors that affect the robustness of the recognition rate of different classifiers on gaze trajectories, such as the type of stimulus and the trajectory length. We find that the state-of-the-art method works well only when the stimulus used for testing is the same as the one used for training. By contrast, our novel method more than doubles the identification accuracy in these transfer cases. We also find that 86.7% accuracy can be achieved with only 90 seconds of eye-tracking data.
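As an illustration of the general recipe behind gaze-based identification (summarize each trajectory with features, then train a per-person classifier), not of the paper's three methods, the following sketch uses simple velocity statistics and a random forest. The feature choices and parameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(xy, fs):
    """Velocity statistics for a gaze trajectory xy of shape (n, 2),
    sampled at fs Hz."""
    v = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fs
    return [v.mean(), v.std(), np.percentile(v, 95), v.max()]

def train_identifier(trajectories, person_ids, fs=1000):
    """Fit a classifier that maps a gaze trajectory to a person ID."""
    X = [features(t, fs) for t in trajectories]
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, person_ids)
    return clf
```

Stimulus-transfer robustness, the focus of the paper's evaluation, would be tested by training on trajectories from one stimulus type and evaluating on another.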
