Modeling the endpoint uncertainty of moving target selection with crossing is essential to understanding factors such as the speed-accuracy trade-off and interaction efficiency in crossing-based user interfaces with dynamic content. However, few studies in HCI have examined this topic. This paper presents a Quaternary-Gaussian model that quantitatively measures endpoint uncertainty in crossing-based moving target selection. To validate this model, we conducted an experiment with discrete crossing tasks varying five factors: initial distance, size, speed, orientation, and moving direction. Results showed that our model fit the data of ? and ? accurately, with adjusted R² of 0.883 and 0.920, respectively. We also demonstrated the validity of our model in predicting error rates in crossing-based moving target selection, and we conclude with a set of implications for future designs.
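The paper's Quaternary-Gaussian formulation is not reproduced in this abstract; as a rough illustration of how a Gaussian endpoint model yields error-rate predictions, the sketch below assumes selection endpoints along the crossing axis follow a one-dimensional Gaussian with hypothetical fitted mean `mu` and standard deviation `sigma`. The error rate is then the probability mass falling outside the target boundary.

```python
import math

def crossing_error_rate(mu: float, sigma: float, target_half_width: float) -> float:
    """Probability that a normally distributed selection endpoint
    (mean mu, std sigma, measured from the target center) misses a
    target of half-width target_half_width. Illustrative sketch only;
    not the paper's Quaternary-Gaussian model."""
    def norm_cdf(x: float) -> float:
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    lo = (-target_half_width - mu) / sigma
    hi = (target_half_width - mu) / sigma
    hit_prob = norm_cdf(hi) - norm_cdf(lo)
    return 1.0 - hit_prob

# Hypothetical fitted parameters: endpoints biased 1.2 mm past center,
# spread 2.5 mm, target half-width 3 mm.
print(f"predicted error rate: {crossing_error_rate(1.2, 2.5, 3.0):.3f}")
```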
Hick's law is a key quantitative law in psychology that relates reaction time to the logarithm of the number of stimulus-response alternatives in a task. Its application to HCI is controversial: some believe the law does not apply to HCI tasks, while others regard it as a cornerstone of interface design. The law, however, is often misunderstood. We review the choice-reaction time literature and argue that: (1) Hick's law speaks against, not for, the popular principle that 'less is better'; (2) logarithmic growth of observed temporal data is not necessarily interpretable in terms of Hick's law; (3) the stimulus-response paradigm is rarely relevant to HCI tasks, where choice-reaction time can often be assumed to be constant; and (4) for user interface design, a detailed examination of the effects of psychological processes such as visual search and decision making on choice-reaction time is more fruitful than a mere reference to Hick's law.
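For reference, the standard Hick-Hyman formulation expresses mean choice-reaction time as RT = a + b·log2(n + 1) for n equally likely alternatives, where the +1 accounts for the uncertainty of whether to respond at all. The snippet below evaluates this formula with hypothetical coefficients; it illustrates the logarithmic growth discussed above, not any specific dataset.

```python
import math

def hick_reaction_time(n_alternatives: int, a: float = 0.2, b: float = 0.15) -> float:
    """Hick-Hyman law: mean choice-reaction time grows with the
    logarithm of the number of equally likely alternatives.
    a (intercept, s) and b (slope, s/bit) are hypothetical values."""
    return a + b * math.log2(n_alternatives + 1)

for n in (1, 3, 7, 15):
    print(f"{n:2d} alternatives -> {hick_reaction_time(n):.2f} s")
```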
Interpretable machine learning models trade off accuracy for simplicity to make explanations more readable and easier to comprehend. Drawing on cognitive psychology theories of graph comprehension, we formalize readability as visual cognitive chunks to measure and moderate the cognitive load of explanation visualizations. We present Cognitive-GAM (COGAM), which generates explanations with a desired cognitive load and accuracy by combining expressive nonlinear generalized additive models (GAMs) with simpler sparse linear models. We calibrated visual cognitive chunks against reading time in a user study, characterized the trade-off between cognitive load and accuracy on four datasets in simulation studies, and evaluated COGAM against baselines with users. We found that COGAM can decrease cognitive load without decreasing accuracy and/or increase accuracy without increasing cognitive load. Our framework and empirical measurement instruments for cognitive load will enable more rigorous assessment of the human interpretability of explainable AI.
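COGAM itself is not reimplemented here; as a loose illustration of the cognitive-load/accuracy tension it navigates, the sketch below fits a sparse linear model (Lasso over polynomial features) to synthetic nonlinear data and uses the count of nonzero terms as a crude stand-in for visual cognitive chunks. All data and parameters are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score

# Hypothetical 1-D data with a nonlinear ground truth.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

# Sparse linear model over polynomial features: Lasso zeroes out terms,
# so the number of surviving coefficients is a crude "chunk" proxy.
X_poly = np.vander(x, N=6, increasing=True)
model = Lasso(alpha=0.01, max_iter=50_000).fit(X_poly, y)
n_chunks = int(np.count_nonzero(model.coef_))
acc = r2_score(y, model.predict(X_poly))
print(f"nonzero terms (load proxy): {n_chunks}, R^2 (accuracy): {acc:.3f}")
```

Raising `alpha` prunes more terms (lower load, lower accuracy); lowering it does the opposite, which is the trade-off the abstract describes.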
Quantitative persona creation (QPC) has tremendous potential, as HCI researchers and practitioners can leverage user data from online analytics and digital media platforms to better understand their users and customers. However, there is no systematic overview of QPC methods and progress, and no standard methodology or known best practices. To address this gap, we review 49 QPC research articles from 2005 to 2019. Results indicate three stages of QPC research: Emergence, Diversification, and Sophistication. Sharing resources, such as datasets, code, and algorithms, is crucial to reaching the next stage (Maturity). For practitioners, we provide guiding questions for assessing QPC readiness in organizations.
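A common technique in this body of work is clustering behavioral data and treating each cluster centroid as the skeleton of a persona. The sketch below shows that idea on synthetic data; the feature columns and cluster count are assumptions for illustration, not a method taken from any specific article.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical behavioral matrix: rows are users, columns are analytics
# signals (e.g., sessions/week, avg. session minutes, categories viewed).
rng = np.random.default_rng(1)
behavior = rng.random((500, 3)) * [20, 60, 10]

# Cluster users; each centroid becomes the quantitative core of a persona.
km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(behavior)
for i, centroid in enumerate(km.cluster_centers_):
    size = int(np.sum(km.labels_ == i))
    print(f"persona {i}: n={size}, profile={np.round(centroid, 1)}")
```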
Animation, a common design element in user interfaces (UIs), can affect user engagement (UE) with mobile applications. To avoid impairing UE through improper animation design, designers rely on resource-intensive evaluation methods such as user studies or expert reviews. To alleviate this burden, we propose a data-driven approach that assists designers in examining UE issues with their animation designs. We first crowdsource UE assessments of mobile UI animations. Based on the collected data, we then build a novel deep learning model that captures both spatial and temporal features of animations to predict their UE levels. Evaluations show that our model achieves reasonable accuracy. We further leverage the animation features encoded by our model, together with a sample set of expert reviews, to derive potential UE issues of a particular animation. Finally, we develop a proof-of-concept tool and evaluate its potential usage in actual design practices with experts.
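The paper's architecture is not specified in this abstract; the sketch below is a generic CNN + LSTM skeleton of the kind often used to capture spatial features per frame and temporal features across frames. Layer sizes, the five-level UE output, and the input shape are all assumptions.

```python
import torch
import torch.nn as nn

class AnimationUEModel(nn.Module):
    """Generic spatiotemporal skeleton: a per-frame CNN encodes spatial
    features, an LSTM aggregates them over time, and a linear head
    predicts an engagement level. Illustrative; not the paper's model."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch*time, 32, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, 32)
        _, (h_n, _) = self.lstm(feats)       # final hidden state per clip
        return self.head(h_n[-1])            # logits over UE levels

logits = AnimationUEModel()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
```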