Temporal target selection requires users to wait and trigger a selection input within a bounded time window, with a selection cursor that may be delayed. This task abstracts a variety of game scenarios, such as timing the launch of a projectile at a moving object. In this work, we explore models that predict "when" users typically perform a selection (i.e., the user selection distribution) and their selection error rates in such tasks. We hypothesize that users react to the temporal factors of "distance", "width", and "delay" similarly to how they treat the corresponding variables in spatial target selection. The derived models are evaluated in a controlled experiment and an MTurk-based online study. Our research contributes new knowledge on user behavior in temporal target selection tasks and on its potential connection with its spatial counterpart. Our models and conclusions can benefit both users and designers of relevant interactive applications.
Fitts' law is a behavioral model used to design protocols for, and analyze data from, pointing experiments, which are commonly conducted in HCI to evaluate input performance.
We recently proposed an alternative method to characterize input performance, called the method of PVPs in 1D, based on 1) a dual-minimization protocol, and 2) an analysis of the variability of entire trajectories.
We extend the method to 2D; our contributions include new metrics, a new protocol, and a Python library. We also present the results of a controlled experiment in which the new method is validated using three devices (mouse, touchpad, controller): effect sizes in the 2D case replicate those previously found. We further compare Fitts' law with our novel evaluation: the method of PVPs provides more information than Fitts' law and can predict its parameters. We discuss how this new method may help address open problems of Fitts' law.
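For readers unfamiliar with the baseline being compared against, a minimal sketch of fitting the parameters of Fitts' law, MT = a + b·log2(D/W + 1), by ordinary least squares follows. This is a generic illustration of the standard model, not the paper's PVP method; the data values are synthetic.

```python
# Illustrative sketch: fitting Fitts' law, MT = a + b * log2(D/W + 1),
# by ordinary least squares. Not the PVP method described in the abstract.
import numpy as np

def fit_fitts(distances, widths, movement_times):
    """Return (a, b): intercept and slope of Fitts' law."""
    # Index of difficulty (bits), Shannon formulation.
    ids = np.log2(np.asarray(distances, float) / np.asarray(widths, float) + 1.0)
    b, a = np.polyfit(ids, np.asarray(movement_times, float), 1)
    return a, b

# Synthetic, noise-free data generated from a = 0.2 s, b = 0.15 s/bit.
D = [64, 128, 256, 512]
W = [16, 16, 32, 32]
MT = [0.2 + 0.15 * np.log2(d / w + 1) for d, w in zip(D, W)]
a, b = fit_fitts(D, W, MT)
```

Because the synthetic data are noise-free, the fit recovers the generating parameters exactly (up to floating-point tolerance).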
We examine gestures performed with a class of input devices with distinctive quality properties in the wearables landscape, which we call "index-Finger Augmentation Devices" (iFADs). We introduce a four-level taxonomy to characterize the diversity of iFAD gestures, evaluate iFAD gesture articulation on a dataset of 6,369 gestures collected from 20 participants, and compute recognition accuracy rates. Our findings show that iFAD gestures are fast (1.84s on average), easy to articulate (1.52 average rating on a difficulty scale from 1 to 5), and socially acceptable (81% willingness to use them in public places). We compare iFAD gestures with gestures performed using other devices (styli, touchscreens, game controllers) from several public datasets (39,263 gestures, 277 participants), and report that iFAD gestures are two times faster than whole-body gestures and as fast as stylus and finger strokes performed on touchscreens.
Textile surfaces, such as on sofas, cushions, and clothes, offer promising alternative locations to place controls for digital devices. Textiles are a natural, even abundant part of living spaces, and support unobtrusive input. While there is solid work on technical implementations of textile interfaces, there is little guidance regarding their design—especially their haptic cues, which are essential for eyes-free use. In particular, icons easily communicate information visually in a compact fashion, but it is unclear how to adapt them to the haptics-centric textile interface experience. Therefore, we investigated the recognizability of 84 haptic icons on fabrics. Each combines a shape, height profile (raised, recessed, or flat), and affected area (filled or outline). Our participants clearly preferred raised icons, and identified them with the highest accuracy and at competitive speeds. We also provide insights into icons that look very different, but are hard to distinguish via touch alone.
Optical see-through head-mounted displays (OHMDs) can provide just-in-time digital assistance to users while they are engaged in ongoing tasks. However, given users' limited attentional resources when multitasking, there is a need to concisely and accurately present information in OHMDs. Existing approaches for digital information presentation involve using either text or pictograms. While pictograms have enabled rapid recognition and easier use in warning messages and traffic signs, most studies using pictograms for digital notifications have exhibited unfavorable results. We thus conducted a series of four iterative studies to understand how we can support effective notification presentation on OHMDs during multitasking scenarios. We find that while icon-augmented notifications can outperform text-only notifications, their effectiveness depends on icon familiarity, encoding density, and environmental brightness. We reveal design implications when using icon-augmented notifications in OHMDs and present plausible reasons for the observed disparity in literature.
Error rates (ERs) in target-pointing tasks are typically modelled in two steps: predicting the click-point variability (sigma) from target sizes and then computing the probability that a click falls outside a target. This approach is indirect when the researcher's goal is accurate ER prediction, because the model coefficients are optimized to predict sigma accurately in the first step. We compared the prediction accuracy of this method with that of a more direct technique in which the coefficients used for sigma are determined so as to optimize the closeness between observed and predicted ERs. Our re-analysis of eight datasets from mouse- and touch-based pointing studies showed that the latter approach consistently outperforms the conventional one when the starting values for the parameter search are appropriate (which can be achieved by hyperparameter optimization), thus enabling interface configuration on the basis of accurately predicted ERs.
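The conventional two-step pipeline described above can be sketched as follows. The linear spread model sigma = c0 + c1·W and the coefficient values are illustrative assumptions, not the datasets' fitted values; step 2 assumes Gaussian click points centered on the target.

```python
# Sketch of the conventional two-step ER model: (1) predict click-point
# spread sigma from target width W (a linear form is assumed here for
# illustration), then (2) compute the probability that a Gaussian click
# lands outside a 1D target of width W.
import math

def predicted_error_rate(W, c0, c1):
    sigma = c0 + c1 * W  # step 1: predicted click-point spread (assumed linear)
    # step 2: P(|x| > W/2) for x ~ N(0, sigma^2)
    return 1.0 - math.erf(W / (2.0 * math.sqrt(2.0) * sigma))

# Illustrative coefficients; wider targets yield lower predicted ERs.
er_small = predicted_error_rate(W=10.0, c0=1.0, c1=0.1)
er_large = predicted_error_rate(W=40.0, c0=1.0, c1=0.1)
```

The direct technique the abstract advocates would instead search c0 and c1 to minimize the discrepancy between these predicted ERs and the observed ones, rather than fitting sigma itself.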