Leveraging Multimodal LLM for Inspirational User Interface Search

Abstract

Inspirational search, the process of exploring designs to inform and inspire new creative work, is pivotal in mobile user interface (UI) design. However, exploring the vast space of UI references remains a challenge. Existing AI-based UI search methods often miss crucial semantics like target users or the mood of apps. Additionally, these models typically require metadata like view hierarchies, limiting their practical use. We used a multimodal large language model (MLLM) to extract and interpret semantics from mobile UI images. We identified key UI semantics through a formative study and developed a semantic-based UI search system. Through computational and human evaluations, we demonstrate that our approach significantly outperforms existing UI retrieval methods, offering UI designers a more enriched and contextually relevant search experience. We enhance the understanding of mobile UI design semantics and highlight MLLMs' potential in inspirational search, providing a rich dataset of UI semantics for future studies.
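The abstract describes the approach only at a high level: an MLLM interprets UI screenshots into semantic descriptions (e.g., target users, mood), which then drive retrieval. The sketch below is purely illustrative of that general idea and is not the authors' implementation; the model names, prompt wording, and embedding-based cosine similarity are assumptions made for the example.

```python
# Illustrative sketch only: extract UI semantics with an MLLM, then rank
# screenshots by semantic similarity to a natural-language query.
# Models, prompt, and similarity measure are assumptions, not the paper's system.
import base64
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def describe_ui(image_path: str) -> str:
    """Ask a vision-capable LLM to summarize a UI screenshot's semantics."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed MLLM; the paper does not specify this model here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this mobile UI: target users, mood, purpose, and layout."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


def embed(text: str) -> np.ndarray:
    """Embed a semantic description for similarity search."""
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(result.data[0].embedding)


def search(query: str, corpus: dict[str, np.ndarray], k: int = 5) -> list[str]:
    """Return the k screenshot paths whose descriptions are closest to the query."""
    q = embed(query)
    scores = {
        path: float(vec @ q / (np.linalg.norm(vec) * np.linalg.norm(q)))
        for path, vec in corpus.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In the paper's actual system, the relevant semantic dimensions were identified through a formative study with designers; in this sketch a single free-form description stands in for them.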

Authors
Seokhyeon Park
Seoul National University, Seoul, Korea, Republic of
Yumin Song
Seoul National University, Seoul, Korea, Republic of
Soohyun Lee
Seoul National University, Seoul, Korea, Republic of
Jaeyoung Kim
Seoul National University, Seoul, Korea, Republic of
Jinwook Seo
Seoul National University, Seoul, Korea, Republic of
DOI

10.1145/3706598.3714213

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714213


Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Innovations in Interaction Design

Annex Hall F206
7 presentations
2025-04-29, 20:10–21:40