Towards Complete Icon Labeling in Mobile Applications

Abstract

Accurately recognizing icon types in mobile applications is integral to many tasks, including accessibility improvement, UI design search, and conversational agents. Existing research focuses on recognizing the most frequent icon types, but these technologies fail when encountering an unrecognized low-frequency icon. In this paper, we work towards complete coverage of icons in the wild. After annotating a large-scale icon dataset (327,879 icons) from iPhone apps, we found a highly uneven distribution: 98 common icon types covered 92.8% of icons, while 7.2% of icons were covered by more than 331 long-tail icon types. In order to label icons with widely varying occurrences in apps, our system uses an image classification model to recognize common icon types with an average of 3,000 examples each (96.3% accuracy) and applies a few-shot learning model to classify long-tail icon types with an average of 67 examples each (78.6% accuracy). Our system also detects contextual information that helps characterize icon semantics, including nearby text (95.3% accuracy) and modifier symbols added to the icon (87.4% accuracy). In a validation study with workers (n=23), we verified the usefulness of our generated icon labels. The icon types supported by our work cover 99.5% of collected icons, improving on the previously highest 78% coverage in icon classification work.
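The abstract describes a two-stage approach: a standard classifier for the 98 common icon types with thousands of examples each, and a few-shot model for the 331 long-tail types with only dozens of examples each. The sketch below is a minimal illustration of that routing idea under stated assumptions, not the authors' implementation: the TwoStageIconLabeler class, the class names, the confidence threshold, and the use of nearest-prototype matching for the few-shot stage are all hypothetical stand-ins for the models in the paper.

# Minimal sketch (assumed, not the authors' code): route each icon embedding
# to a common-type classifier first, and fall back to a few-shot
# nearest-prototype classifier for long-tail types when confidence is low.
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

class TwoStageIconLabeler:
    def __init__(self, common_weights, common_names, threshold=0.5):
        # common_weights: (num_common_types, dim) linear head standing in
        # for the image classification model over common icon types.
        self.common_weights = common_weights
        self.common_names = common_names
        self.threshold = threshold
        self.prototypes = {}  # long-tail type name -> (dim,) class prototype

    def add_long_tail_type(self, name, support_embeddings):
        # Few-shot stage: represent a long-tail type by the mean embedding
        # of its handful of support examples (a class prototype).
        self.prototypes[name] = np.mean(support_embeddings, axis=0)

    def label(self, embedding):
        # Stage 1: common-type classifier.
        probs = softmax(self.common_weights @ embedding)
        best = int(np.argmax(probs))
        if probs[best] >= self.threshold or not self.prototypes:
            return self.common_names[best], float(probs[best])
        # Stage 2: nearest prototype over long-tail types.
        names = list(self.prototypes)
        dists = [np.linalg.norm(embedding - self.prototypes[n]) for n in names]
        i = int(np.argmin(dists))
        return names[i], -dists[i]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 16
    labeler = TwoStageIconLabeler(rng.normal(size=(3, dim)),
                                  ["search", "close", "menu"])
    labeler.add_long_tail_type("fingerprint", rng.normal(size=(5, dim)))
    print(labeler.label(rng.normal(size=dim)))

In this toy setup, adding a new long-tail type only requires a few support embeddings, which mirrors why the few-shot stage can cover types with an average of 67 examples each.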

Authors
Jieshan Chen
Australian National University, Canberra, Australia
Amanda Swearngin
Apple, Seattle, Washington, United States
Jason Wu
Apple, Pittsburgh, Pennsylvania, United States
Titus Barik
Apple, Seattle, Washington, United States
Jeffrey Nichols
Apple Inc, San Diego, California, United States
Xiaoyi Zhang
Apple Inc, Seattle, Washington, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502073


Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Natural Language

286–287
5 presentations in this session
2022-05-04 23:15:00 – 2022-05-05 00:30:00