Predicting and Explaining Mobile UI Tappability with Vision Modeling and Saliency Analysis

Abstract

When users have trouble determining whether elements are tappable, UI designers often need to correct false affordances and improve the discoverability of features. We contribute a novel system that models the perceived tappability of mobile UI elements with a vision-based deep neural network and helps provide design insights with dataset-level and instance-level explanations of model predictions. Our system retrieves similar mobile UI design examples from our dataset using the latent space of our model. We also contribute a novel use of an interpretability algorithm, XRAI, to generate a heatmap of UI elements that contribute to a given tappability prediction. Through several examples, we show how our system can help automate elements of UI usability analysis and provide insights for designers to iterate on their designs. In addition, we share findings from an exploratory evaluation with professional designers to learn how AI-based tools can aid UI design and evaluation for tappability issues.
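The abstract describes explaining an individual tappability prediction with XRAI saliency heatmaps. The following is a minimal, hypothetical sketch of what that step can look like using the open-source Google PAIR saliency library (pip install saliency); the stand-in Keras classifier, its input size, and variable names such as tappability_model and screenshot are assumptions for illustration, not the authors' released code or model.

```python
# Hypothetical sketch: compute an XRAI heatmap for a tappability classifier.
import numpy as np
import tensorflow as tf
import saliency.core as saliency  # Google PAIR saliency library

# Stand-in tappability classifier: UI screenshot in, single tappable-vs-not logit out.
tappability_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),
])

def call_model_function(images, call_model_args=None, expected_keys=None):
    """Adapter the saliency library calls to get input gradients from the model."""
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        logits = call_model_args["model"](images)  # shape: (batch, 1)
        target = logits[:, 0]                      # tappability score
    grads = tape.gradient(target, images).numpy()
    return {saliency.INPUT_OUTPUT_GRADIENTS: grads}

# A UI screenshot resized to the model's input size and scaled to [0, 1].
screenshot = np.random.rand(224, 224, 3).astype(np.float32)

xrai = saliency.XRAI()
heatmap = xrai.GetMask(screenshot,
                       call_model_function,
                       call_model_args={"model": tappability_model},
                       batch_size=20)
# `heatmap` is a 224x224 array; higher values mark regions that most strongly
# support the tappability prediction and can be overlaid on the screenshot.
```

XRAI aggregates attributions over image segments rather than individual pixels, which is why its output reads as region-level heatmaps over UI elements rather than pixel-level noise.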

Authors
Eldon Schoop
University of California, Berkeley, Berkeley, California, United States
Xin Zhou
Google Research, Mountain View, California, United States
Gang Li
Google Research, Mountain View, California, United States
Zhourong Chen
Google Research, Mountain View, California, United States
Bjoern Hartmann
UC Berkeley, Berkeley, California, United States
Yang Li
Google Research, Mountain View, California, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517497

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Predictive Modelling and Simulating Users

5 presentations
Session time: 2022-05-03 01:15:00 – 2022-05-03 02:30:00