We present "fingerhints," on-finger kinesthetic feedback rendered as hyper-extension movements of the index finger that bypass user agency, for notification delivery. To this end, we designed a custom-made finger-augmentation device, which leverages mechanical force to deliver fingerhints as programmable hyper-extensions of the index finger. We evaluate fingerhints with 21 participants and report good usability, low technology creepiness, and moderate to high social acceptability. In a second study with 11 new participants, we evaluate the wearable comfort of our fingerhints device against four commercial finger- and hand-augmentation devices. Finally, we present insights from the experience of one participant who wore our device for eight hours during their daily life. We discuss the user experience of fingerhints in relation to our participants' personality traits, finger dexterity levels, and general attitudes toward notifications, and present implications for interactive systems leveraging on-finger kinesthetic feedback for on-body computing.
https://doi.org/10.1145/3544548.3581022
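To make the notion of "programmable hyper-extensions" concrete, the sketch below shows one way a notification could be rendered as a short train of actuator pulses. The abstract does not describe the device's firmware or actuation interface, so drive_actuator() and every amplitude and timing parameter here are hypothetical assumptions, not the authors' implementation.

import time

def drive_actuator(extension_deg):
    """Placeholder for the device-specific actuation command (hypothetical)."""
    print(f"actuator -> {extension_deg:.0f} degrees of hyper-extension")

def deliver_fingerhint(amplitude_deg=15, hold_s=0.3, repeats=2):
    """Render one notification as a train of hyper-extension pulses:
    extend the index finger past neutral, hold briefly, return to rest, repeat."""
    for _ in range(repeats):
        drive_actuator(amplitude_deg)  # hyper-extend past neutral
        time.sleep(hold_s)
        drive_actuator(0)              # release back to rest
        time.sleep(hold_s)

deliver_fingerhint()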
From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, which clashes with the vision of ubiquitous interaction with technology. Voice commands, a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research has explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research considered only the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency, and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.
https://doi.org/10.1145/3544548.3580954
We identify and evaluate single-handed “dexterous gestures” to physically manipulate a phone using the fine motor skills of fingers. Four manipulations are defined: shift, spin (yaw axis), rotate (roll axis), and flip (pitch axis), with a formative survey showing all except flip have been performed for various reasons. A controlled experiment examines the speed, behaviour, and preference of manipulations in the form of dexterous gestures, considering two directions and two movement magnitudes. Results show rotate is rated as easiest and most comfortable, while flip is rated lowest. Using a heuristic recognizer for spin, rotate, and flip, a one-week usability experiment finds that increased practice and familiarity improve the speed and comfort of dexterous gestures. Design guidelines are developed to consider comfort, ability, and confidence when mapping dexterous gestures to interactions, and demonstrations show how such gestures can be used in smartphone applications.
https://doi.org/10.1145/3544548.3581121
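The abstract mentions a heuristic recognizer for spin, rotate, and flip but does not detail it. A minimal sketch of one plausible heuristic follows, assuming gyroscope samples arrive as (yaw, roll, pitch) angular velocities and using the axis mapping given in the abstract; the peak threshold is illustrative, not from the paper.

PEAK_THRESHOLD = 3.0  # rad/s; hypothetical detection threshold

def classify_gesture(samples):
    """Return the manipulation whose axis shows the largest angular
    velocity peak, or None if no axis crosses the threshold.
    samples: iterable of (yaw, roll, pitch) gyroscope readings in rad/s."""
    peaks = {
        "spin":   max(abs(s[0]) for s in samples),  # yaw axis
        "rotate": max(abs(s[1]) for s in samples),  # roll axis
        "flip":   max(abs(s[2]) for s in samples),  # pitch axis
    }
    axis, peak = max(peaks.items(), key=lambda kv: kv[1])
    return axis if peak >= PEAK_THRESHOLD else None

# Example: a burst of roll-axis motion is classified as "rotate".
window = [(0.1, 0.2, 0.0), (0.2, 4.1, 0.3), (0.0, 3.8, 0.1)]
print(classify_gesture(window))  # -> "rotate"

A per-axis peak comparison like this trades accuracy for simplicity; it cannot distinguish direction or magnitude, which the study varied as separate factors.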
Deep reading fosters text comprehension, memory, and critical thinking. The growing prevalence of digital reading on mobile interfaces raises concerns that deep reading is being replaced by skimming and sifting through information, but this is currently unmeasured. Traditionally, reading quality is assessed using comprehension tests, which require readers to explicitly answer a set of carefully composed questions. To quantify and understand reading behaviour in natural settings and at scale, however, implicit measures of deep versus skim reading are needed across desktop and mobile devices, the most prominent digital reading platforms. In this paper, we present an approach to systematically induce deep and skim reading and subsequently train classifiers to discriminate these two reading styles based on eye movement patterns and interaction data. Based on a user study with 29 participants, we created models that detect deep reading on both devices with up to 0.82 AUC. We present the characteristics of deep reading and discuss how our models can be used to measure the effect of reading UI design and to monitor long-term changes in reading behaviours.
https://doi.org/10.1145/3544548.3581174
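As an illustration of the classification setup the abstract describes (discriminating deep from skim reading using eye-movement patterns), here is a minimal sketch. The feature set, model choice, and synthetic data are assumptions for illustration, not the authors' actual pipeline; only the AUC evaluation metric is taken from the abstract.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 40  # synthetic reading sessions per class

# Hypothetical features per session: mean fixation duration (ms),
# mean saccade length (px), regression rate (share of backward saccades).
deep = np.column_stack([rng.normal(240, 20, n),   # longer fixations
                        rng.normal(85, 15, n),    # shorter saccades
                        rng.normal(0.18, 0.03, n)])  # more regressions
skim = np.column_stack([rng.normal(150, 20, n),   # brief fixations
                        rng.normal(210, 30, n),   # long sweeps
                        rng.normal(0.05, 0.02, n)])
X = np.vstack([deep, skim])
y = np.array([1] * n + [0] * n)  # 1 = deep reading, 0 = skim reading

clf = make_pipeline(StandardScaler(), LogisticRegression())
# Cross-validated AUC, the same metric the paper reports (up to 0.82).
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC: {scores.mean():.2f}")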
Do patterns of eye-hand coordination observed during real-world object interactions apply to digital, screen-based object interactions? We adapted a real-world object interaction task (physically transferring cups in sequence about a tabletop) into a two-dimensional screen-based task (dragging-and-dropping circles in sequence with a cursor). We collected gaze (with webcam eye-tracking) and cursor position data from 51 fully remote, crowd-sourced participants who performed the task on their own computers. We applied real-world time-series data segmentation strategies to resolve the self-paced movement sequence into phases of object interaction and rigorously cleaned the webcam eye-tracking data. In this preliminary investigation, we found that: 1) real-world eye-hand coordination patterns persist and adapt in this digital context, and 2) remote, online cursor-tracking and webcam eye-tracking are useful tools for capturing visuomotor behaviours during this ecologically valid human-computer interaction task. We discuss how these findings might inform design principles and further investigations into natural behaviours that persist in digital environments.
https://doi.org/10.1145/3544548.3580866
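The abstract refers to real-world time-series segmentation strategies without naming them; one common option is velocity thresholding, sketched below under the assumption of a regularly sampled cursor trace. The speed cutoff and phase labels are illustrative, not the paper's parameters.

import numpy as np

def segment_phases(xy, dt, v_thresh=50.0):
    """Label each cursor sample as moving (True) or holding (False).

    xy: (n, 2) array of cursor positions in px; dt: sample interval
    in seconds; v_thresh: hypothetical speed cutoff in px/s."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt
    moving = speed > v_thresh
    # Pad so labels align with all n input samples.
    return np.concatenate([moving, [False]])

# Example: a drag from (0, 0) to (300, 0) followed by a hold.
trace = np.array([[0, 0], [100, 0], [200, 0], [300, 0],
                  [300, 0], [300, 0]], dtype=float)
print(segment_phases(trace, dt=0.02))  # -> [True True True False False False]

Runs of True then correspond to object-transport phases and runs of False to dwell phases (e.g., grasping or releasing an object), which is how such a labelling could resolve a self-paced sequence into interaction phases.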