UI designers often correct false affordances and improve the discoverability of features when users have trouble determining whether elements are tappable. We contribute a novel system that models the perceived tappability of mobile UI elements with a vision-based deep neural network and provides design insights through dataset-level and instance-level explanations of model predictions. Our system retrieves similar mobile UI examples from our dataset using the latent space of our model. We also contribute a novel use of an interpretability algorithm, XRAI, to generate a heatmap of the UI elements that contribute to a given tappability prediction. Through several examples, we show how our system can help automate elements of UI usability analysis and provide insights for designers to iterate on their designs. In addition, we share findings from an exploratory evaluation with professional designers on how AI-based tools can aid UI design and evaluation for tappability issues.
https://dl.acm.org/doi/abs/10.1145/3491102.3517497
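The abstract does not detail the retrieval step, but a minimal sketch of design retrieval via the model's latent space might look like the following, assuming each UI screenshot has already been embedded (e.g., as penultimate-layer activations of the tappability network); all names and dimensions here are illustrative:

```python
import numpy as np

def retrieve_similar(query_vec, dataset_vecs, k=5):
    """Return indices of the k dataset examples whose latent vectors
    are closest (by cosine similarity) to the query's latent vector."""
    q = query_vec / np.linalg.norm(query_vec)
    d = dataset_vecs / np.linalg.norm(dataset_vecs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))[:k]

# Usage with made-up 128-d embeddings for 1000 dataset screenshots:
rng = np.random.default_rng(0)
dataset = rng.normal(size=(1000, 128))
query = rng.normal(size=128)
print(retrieve_similar(query, dataset))
```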
The Bayesian information gain (BIG) framework has garnered significant interest as an interaction method for predicting a user's intended target from the user's input. However, the BIG framework is constrained to fixed-goal cases, which makes it difficult to support tasks in which the goal changes, such as design exploration. During design exploration, the design direction is often undefined and may vary over time; the designer's mental model of the design direction is sequentially updated through the information-retrieval process. Tracking the point at which a user's goal changes is therefore crucial for supporting information exploration. We introduce the BIGexplore framework for changing-goal cases. BIGexplore detects transitions in a user's browsing behavior as well as the user's next target. Furthermore, a user study of BIGexplore confirms that its computational cost is significantly lower than that of the existing BIG framework and that it plausibly detects the point at which the user's goal changes.
https://dl.acm.org/doi/abs/10.1145/3491102.3517729
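As a rough illustration of the machinery involved (not BIGexplore's actual algorithm, which the abstract does not specify), a BIG-style interaction maintains a posterior over candidate goals and updates it with each observed input; a goal change can then be flagged when new observations become implausible under every currently likely goal. The evidence threshold and the uniform reset below are assumptions:

```python
import numpy as np

def big_update(prior, obs_likelihood):
    """One Bayesian update over discrete goal hypotheses.
    obs_likelihood[g] = P(observed input | goal g). Returns the
    posterior and the marginal probability (evidence) of the input."""
    evidence = float(obs_likelihood @ prior)
    posterior = obs_likelihood * prior / evidence
    return posterior, evidence

def track_goals(obs_likelihoods, n_goals, change_threshold=1e-3):
    """Run updates over a stream of inputs; reset the posterior to
    uniform whenever the evidence falls below the threshold, i.e.,
    when no current goal explains the input (a hypothetical
    change-point rule)."""
    posterior = np.full(n_goals, 1.0 / n_goals)
    change_points = []
    for t, like in enumerate(obs_likelihoods):
        posterior, evidence = big_update(posterior, like)
        if evidence < change_threshold:
            posterior = np.full(n_goals, 1.0 / n_goals)
            change_points.append(t)
    return posterior, change_points
```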
The simulation of user behavior with deep reinforcement learning agents has shown some recent success. However, the inverse problem, inferring the free parameters of the simulator from observed user behavior, remains challenging, because the simulated agent's action policy must be re-optimized whenever the model parameters change, which is computationally impractical. In this study, we introduce a network modulation technique that yields a generalized policy that immediately adapts to given model parameters. We further demonstrate that the proposed technique improves the efficiency of simulator-based inference by eliminating the need to re-derive an action policy for each new set of model parameters. We validated our approach on a state-of-the-art user simulator for point-and-click behavior and succeeded in inferring users' cognitive parameters and intrinsic reward settings with less than 1/1000 of the computation required by existing methods.
https://dl.acm.org/doi/abs/10.1145/3491102.3502023
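A common way to realize such network modulation is FiLM-style conditioning, where an embedding of the simulator's free parameters produces per-feature scales and shifts applied to the policy's hidden activations, so one trained network covers many parameter settings. The PyTorch sketch below shows the idea; the layer sizes and the FiLM choice are assumptions, not necessarily the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ModulatedPolicy(nn.Module):
    """Policy network whose hidden features are scaled and shifted
    by an embedding of the simulator's free parameters."""
    def __init__(self, obs_dim, act_dim, param_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.film = nn.Linear(param_dim, 2 * hidden)  # -> (gamma, beta)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs, params):
        h = self.body(obs)
        gamma, beta = self.film(params).chunk(2, dim=-1)
        return self.head(gamma * h + beta)  # modulated features

# Usage: new model parameters need no policy re-optimization.
policy = ModulatedPolicy(obs_dim=6, act_dim=2, param_dim=3)
action = policy(torch.randn(1, 6), torch.randn(1, 3))
```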
Reach redirection is an illusion-based virtual reality (VR) interaction technique in which a user's virtual hand is shifted during a reach in order to guide their real hand to a desired physical location. Prior work has not considered the underlying sensorimotor processes driving redirection. In this work, we propose adapting a sensorimotor model for goal-directed reach into a model for visually redirected reach, specifically by incorporating redirection as a sensory bias in the state estimate used by a minimum-jerk motion controller. We validate and then leverage this model to develop a model predictive control (MPC) approach to reach redirection, enabling real-time generation of spatial warping according to desired optimization criteria (e.g., redirection goals) and constraints (e.g., sensory thresholds). We illustrate this approach with two example criteria, redirection to a desired point and redirection along a desired path, and compare it against existing techniques in a user evaluation.
https://dl.acm.org/doi/abs/10.1145/3491102.3501907
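To make the ingredients concrete, the sketch below pairs the standard minimum-jerk position profile with a greedy, rate-limited warp update that stands in for the full receding-horizon MPC solve; the per-step bound plays the role of a sensory detection-threshold constraint. All constants are illustrative:

```python
import numpy as np

def min_jerk(x0, x1, n=100):
    """Standard minimum-jerk position profile from x0 to x1
    (time normalized to [0, 1])."""
    tau = np.linspace(0.0, 1.0, n)[:, None]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (x1 - x0) * s

def warp_step(virtual_target, physical_target, warp, max_dwarp=0.002):
    """One greedy warp update. The ideal total warp offsets the
    virtual target so that a reach toward it lands the real hand on
    the physical target; the change per frame is clipped to stay
    under an assumed detection threshold."""
    ideal = virtual_target - physical_target
    return warp + np.clip(ideal - warp, -max_dwarp, max_dwarp)

# Usage: drive the warp toward a 3 cm offset, at most 2 mm per frame.
w = np.zeros(3)
for _ in range(20):
    w = warp_step(np.array([0.0, 0.0, 0.03]), np.zeros(3), w)
print(w)  # approaches [0, 0, 0.03]
```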
When giving input with a button, users follow one of two strategies: (1) react to the output from the computer, or (2) act proactively in anticipation of the output. We propose a technique to quantify reactiveness and proactiveness, determining the degree and characteristics of each input strategy. The technique requires only screen recordings and input logs, with no additional instrumentation. The likelihood distribution of the time interval between button inputs and system outputs, which is uniquely determined for each input strategy, is modeled; the probability that each observed input/output pair originates from a specific strategy is then estimated along with the parameters of the corresponding likelihood distribution. In two empirical studies, we show how the technique can answer questions such as how to design animated transitions and how to predict a player's score in real-time games.
https://dl.acm.org/doi/abs/10.1145/3491102.3501913
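One way to instantiate the estimation step is an EM fit of a two-component mixture over the input-minus-output intervals: a reactive component with a positive reaction-time lag, and a proactive component centered near the output time (so intervals may be negative). The distributions below (gamma and normal) and the moment-matching updates are assumptions for illustration, not necessarily the paper's exact model:

```python
import numpy as np
from scipy import stats

def fit_strategy_mixture(dt, iters=50):
    """EM fit over intervals dt (input time minus output time, s).
    Returns the reactive mixing weight, both components' parameters,
    and each observation's probability of being reactive."""
    w = 0.5                               # P(reactive)
    g_shape, g_scale = 2.0, 0.1           # reactive: gamma over dt > 0
    mu, sigma = 0.0, 0.1                  # proactive: normal around 0
    for _ in range(iters):
        # E-step: responsibility of the reactive component.
        p_re = w * stats.gamma.pdf(dt, g_shape, scale=g_scale)
        p_pro = (1 - w) * stats.norm.pdf(dt, mu, sigma)
        r = p_re / (p_re + p_pro + 1e-12)
        # M-step: reweight and refit each component.
        w = r.mean()
        mu = np.average(dt, weights=1 - r)
        sigma = np.sqrt(np.average((dt - mu) ** 2, weights=1 - r)) + 1e-6
        m = np.average(dt, weights=r)     # gamma via moment matching
        v = np.average((dt - m) ** 2, weights=r) + 1e-9
        g_shape, g_scale = m * m / v, v / m
    return w, (g_shape, g_scale), (mu, sigma), r
```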