Modalities

Conference name
CHI 2023
Fingerhints: Understanding Users' Perceptions of and Preferences for On-Finger Kinesthetic Notifications
Abstract

We present "fingerhints," on-finger kinesthetic feedback represented by hyper-extension movements of the index finger, bypassing user agency, for notification delivery. To this end, we designed a custom-made finger-augmentation device, which leverages mechanical force to deliver fingerhints as programmable hyper-extensions of the index finger. We evaluate fingerhints with 21 participants, and report good usability, low technology creepiness, and moderate to high social acceptability. In a second study with 11 new participants, we evaluate the wearable comfort of our fingerhints device against four commercial finger- and hand-augmentation devices. Finally, we present insights from the experience of one participant, who wore our device for eight hours during their daily life. We discuss the user experience of fingerhints in relation to our participants' personality traits, finger dexterity levels, and general attitudes toward notifications, and present implications for interactive systems leveraging on-finger kinesthetic feedback for on-body computing.

Authors
Adrian-Vasile Catană
Ștefan cel Mare University of Suceava, Suceava, Romania
Radu-Daniel Vatavu
Ștefan cel Mare University of Suceava, Suceava, Romania
Paper URL

https://doi.org/10.1145/3544548.3581022

Video
TicTacToes: Assessing Toe Movements as an Input Modality
Abstract

From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, hindering the vision of ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.

Authors
Florian Müller
LMU Munich, Munich, Germany
Daniel Schmitt
TU Darmstadt, Darmstadt, Germany
Andrii Matviienko
Technical University of Darmstadt, Darmstadt, Germany
Dominik Schön
TU Darmstadt, Darmstadt, Germany
Sebastian Günther
Technical University of Darmstadt, Darmstadt, Germany
Thomas Kosch
HU Berlin, Berlin, Germany
Martin Schmitz
Saarland University, Saarbrücken, Germany
Paper URL

https://doi.org/10.1145/3544548.3580954

Video
Phone Sleight of Hand: Finger-Based Dexterous Gestures for Physical Interaction with Mobile Phones
Abstract

We identify and evaluate single-handed “dexterous gestures” to physically manipulate a phone using the fine motor skills of fingers. Four manipulations are defined: shift, spin (yaw axis), rotate (roll axis) and flip (pitch axis), with a formative survey showing all except flip have been performed for various reasons. A controlled experiment examines the speed, behaviour, and preference of manipulations in the form of dexterous gestures, by considering two directions and two movement magnitudes. Results show rotate is rated as easiest and most comfortable, while flip is rated lowest. Using a heuristic recognizer for spin, rotate, and flip, a one-week usability experiment finds increased practice and familiarity improve the speed and comfort of dexterous gestures. Design guidelines are developed to consider comfort, ability, and confidence when mapping dexterous gestures to interactions, and demonstrations show how such gestures can be used in smartphone applications.
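The abstract mentions a heuristic recognizer for the spin (yaw), rotate (roll), and flip (pitch) manipulations. As a rough illustration of how such a heuristic could work, the sketch below integrates gyroscope angular velocities per axis and picks the dominant rotation. The axis-to-gesture mapping, the 90-degree threshold, and the function names are assumptions for illustration, not the paper's actual recognizer or parameters.

```python
# Hypothetical sketch of a heuristic gesture recognizer: integrate
# per-axis angular velocity over a window and report the dominant
# rotation if it exceeds a minimum angle. Threshold and axis mapping
# are illustrative assumptions.

def recognize_manipulation(gyro_samples, dt, min_angle=90.0):
    """gyro_samples: list of (yaw, roll, pitch) angular velocities in deg/s,
    sampled every dt seconds. Returns (gesture, direction) or None."""
    # Integrate angular velocity per axis to get total angular displacement.
    yaw = sum(s[0] for s in gyro_samples) * dt
    roll = sum(s[1] for s in gyro_samples) * dt
    pitch = sum(s[2] for s in gyro_samples) * dt
    # Map each axis to its manipulation and pick the largest rotation.
    displacements = {"spin": yaw, "rotate": roll, "flip": pitch}
    gesture, angle = max(displacements.items(), key=lambda kv: abs(kv[1]))
    # Ignore small movements that don't reach the minimum angle.
    if abs(angle) < min_angle:
        return None
    direction = "positive" if angle > 0 else "negative"
    return gesture, direction
```

For example, a one-second window of strong yaw-axis rotation would be reported as a spin, while incidental hand jitter stays below the threshold and yields no gesture.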

Authors
Yen-Ting Yeh
University of Waterloo, Waterloo, Ontario, Canada
Fabrice Matulic
Preferred Networks Inc., Tokyo, Japan
Daniel Vogel
University of Waterloo, Waterloo, Ontario, Canada
Paper URL

https://doi.org/10.1145/3544548.3581121

Video
Characteristics of Deep and Skim Reading on Smartphones vs. Desktop: A Comparative Study
Abstract

Deep reading fosters text comprehension, memory, and critical thinking. The growing prevalence of digital reading on mobile interfaces raises concerns that deep reading is being replaced by skimming and sifting through information, but this is currently unmeasured. Traditionally, reading quality is assessed using comprehension tests, which require readers to explicitly answer a set of carefully composed questions. To quantify and understand reading behaviour in natural settings and at scale, however, implicit measures of deep versus skim reading are needed across desktop and mobile devices, the most prominent digital reading platforms. In this paper, we present an approach to systematically induce deep and skim reading and subsequently train classifiers to discriminate these two reading styles based on eye movement patterns and interaction data. Based on a user study with 29 participants, we created models that detect deep reading on both devices with up to 0.82 AUC. We present the characteristics of deep reading and discuss how our models can be used to measure the effect of reading UI design and monitor long-term changes in reading behaviours.
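The abstract reports classifiers that reach up to 0.82 AUC. To make the AUC metric concrete, the sketch below scores readers by a single invented feature (mean fixation duration, assuming deep readers fixate longer) and computes AUC as the probability that a deep-reading sample outranks a skim-reading one. The feature choice and all sample values are illustrative assumptions; the paper's models use richer eye-movement and interaction features.

```python
# Illustrative only: ranking-based AUC for a one-feature "classifier"
# separating deep from skim reading. Feature and data are invented.

def auc_score(scores_pos, scores_neg):
    """Probability that a random positive (deep) score ranks above
    a random negative (skim) score; ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical mean fixation durations in milliseconds.
deep = [240, 260, 255, 230, 270]
skim = [180, 210, 190, 235, 200]
print(auc_score(deep, skim))
```

An AUC of 0.5 would mean the feature carries no signal; values approaching 1.0 mean the two reading styles are nearly separable by that feature alone.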

Authors
Xiuge Chen
The University of Melbourne, Melbourne, Victoria, Australia
Namrata Srivastava
Monash University, Melbourne, Victoria, Australia
Rajiv Jain
Adobe Research, College Park, Maryland, United States
Jennifer Healey
Adobe Research, San Jose, California, United States
Tilman Dingler
University of Melbourne, Melbourne, Victoria, Australia
Paper URL

https://doi.org/10.1145/3544548.3581174

Video
Dynamics of eye-hand coordination are flexibly preserved in eye-cursor coordination during an online, digital, object interaction task
Abstract

Do patterns of eye-hand coordination observed during real-world object interactions apply to digital, screen-based object interactions? We adapted a real-world object interaction task (physically transferring cups in sequence about a tabletop) into a two-dimensional screen-based task (dragging-and-dropping circles in sequence with a cursor). We collected gaze (with webcam eye-tracking) and cursor position data from 51 fully-remote, crowd-sourced participants who performed the task on their own computer. We applied real-world time-series data segmentation strategies to resolve the self-paced movement sequence into phases of object interaction and rigorously cleaned the webcam eye-tracking data. In this preliminary investigation, we found that: 1) real-world eye-hand coordination patterns persist and adapt in this digital context, and 2) remote, online, cursor-tracking and webcam eye-tracking are useful tools for capturing visuomotor behaviours during this ecologically-valid human-computer interaction task. We discuss how these findings might inform design principles and further investigations into natural behaviours that persist in digital environments.
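The abstract describes applying time-series segmentation to resolve the movement sequence into phases of object interaction. One common strategy of this kind is to threshold cursor speed and merge runs of identical labels into phases; the sketch below illustrates that idea. The threshold, data, and phase labels are assumptions for illustration, not the authors' segmentation pipeline.

```python
# Minimal sketch: segment a cursor trajectory into "moving" and
# "holding" phases by thresholding speed between consecutive samples.
# Threshold and labels are illustrative assumptions.

def segment_phases(positions, dt, speed_threshold=50.0):
    """positions: list of (x, y) cursor samples taken every dt seconds.
    Returns a list of (label, start_index, end_index) phases."""
    # Label each inter-sample interval by its speed.
    labels = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        labels.append("moving" if speed > speed_threshold else "holding")
    # Merge consecutive identical labels into contiguous phases.
    phases = []
    for i, label in enumerate(labels):
        if phases and phases[-1][0] == label:
            phases[-1] = (label, phases[-1][1], i + 1)
        else:
            phases.append((label, i, i + 1))
    return phases
```

In a drag-and-drop task like the one described, the alternation of holding and moving phases would correspond roughly to pick-up, transport, and drop-off of each object.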

Authors
Jennifer K. Bertrand
University of Alberta, Edmonton, Alberta, Canada
Craig Chapman
University of Alberta, Edmonton, Alberta, Canada
Paper URL

https://doi.org/10.1145/3544548.3580866

Video