Screen2Vec: Semantic Embedding of GUI Screens and GUI Components

Abstract

Representing the semantics of GUI screens and components is crucial to data-driven computational methods for modeling user-GUI interactions and mining GUI designs. Existing GUI semantic representations are limited to encoding either the textual content, the visual design and layout patterns, or the app contexts. Many representation techniques also require significant manual data annotation effort. This paper presents Screen2Vec, a new self-supervised technique for generating embedding-vector representations of GUI screens and components that encode all of the above GUI features, using the context of user interaction traces and requiring no manual annotation. Screen2Vec is inspired by the word embedding method Word2Vec, but uses a new two-layer pipeline informed by the structure of GUIs and interaction traces, and incorporates screen- and app-specific metadata. Through several sample downstream tasks, we demonstrate Screen2Vec's key useful properties: representing between-screen similarity through nearest neighbors, composability, and the capability to represent user tasks.
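The abstract describes the two-layer pipeline only at a high level. The sketch below illustrates the general shape of such a pipeline; it is a minimal, hypothetical illustration, not the authors' released implementation. The module names, dimensions, GRU aggregator, and mean-pooled CBOW objective are all assumptions made for the example: layer one embeds each GUI component from its text content and class type, layer two aggregates component embeddings and concatenates layout- and app-specific context, and a Word2Vec-style context-prediction objective over interaction traces provides the self-supervision.

```python
# Minimal, hypothetical sketch of a Screen2Vec-style two-layer pipeline.
# Dimensions, module names, and the GRU aggregator are illustrative
# assumptions, not the paper's released code.
import torch
import torch.nn as nn

TEXT_DIM, TYPE_DIM, COMP_DIM = 768, 32, 768  # e.g., sentence-encoder-sized text vectors

class ComponentEncoder(nn.Module):
    """Layer 1: embed one GUI component from its text content and class type."""
    def __init__(self, num_class_types: int):
        super().__init__()
        self.type_emb = nn.Embedding(num_class_types, TYPE_DIM)
        self.proj = nn.Linear(TEXT_DIM + TYPE_DIM, COMP_DIM)

    def forward(self, text_vec, class_id):
        # text_vec: (batch, TEXT_DIM) pre-computed embedding of the component's text
        # class_id: (batch,) integer id of the component's GUI class type
        return self.proj(torch.cat([text_vec, self.type_emb(class_id)], dim=-1))

class ScreenEncoder(nn.Module):
    """Layer 2: aggregate component embeddings, then attach layout and app context."""
    def __init__(self, layout_dim: int = 64, app_dim: int = 768):
        super().__init__()
        self.rnn = nn.GRU(COMP_DIM, COMP_DIM, batch_first=True)

    def forward(self, comp_embs, layout_vec, app_vec):
        # comp_embs:  (batch, n_components, COMP_DIM)
        # layout_vec: (batch, layout_dim) screen-specific layout features
        # app_vec:    (batch, app_dim) embedding of app-specific metadata
        _, h = self.rnn(comp_embs)  # summarize the component sequence
        return torch.cat([h.squeeze(0), layout_vec, app_vec], dim=-1)

# Word2Vec-style (CBOW) self-supervision: average the embeddings of the
# surrounding screens in an interaction trace and score candidate screens,
# so no manual labels are needed.
def cbow_logits(context_screens, candidates):
    # context_screens: (batch, window, dim); candidates: (num_screens, dim)
    context = context_screens.mean(dim=1)  # (batch, dim)
    return context @ candidates.T          # (batch, num_screens)
```

Under this reading, the between-screen similarity property in the downstream tasks reduces to nearest-neighbor search over the resulting vectors (e.g., with `torch.nn.functional.cosine_similarity`), and composability to simple vector arithmetic on them.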

Award
Honorable Mention
Authors
Toby Jia-Jun Li
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Lindsay Popowski
Harvey Mudd College, Claremont, California, United States
Tom Mitchell
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Brad A. Myers
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3411764.3445049

Paper URL

https://doi.org/10.1145/3411764.3445049

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Computational Design

[A] Paper Room 02, 2021-05-12 17:00:00~2021-05-12 19:00:00
[B] Paper Room 02, 2021-05-13 01:00:00~2021-05-13 03:00:00
[C] Paper Room 02, 2021-05-13 09:00:00~2021-05-13 11:00:00
Paper Room 02: 15 presentations