GhostUI: Unveiling Hidden Interactions in Mobile UI

Abstract

Modern mobile applications rely on hidden interactions (gestures without visual cues, such as long presses and swipes) to provide functionality without cluttering their interfaces. While experienced users may discover these interactions through prior use or onboarding tutorials, their implicit nature makes them difficult for most users to uncover. Similarly, mobile agents, systems powered by vision-language models (VLMs) that automate tasks on mobile user interfaces, struggle to detect these hidden interactions or to determine the actions needed to complete tasks. To address this challenge, we present GhostUI, a new dataset designed to enable the detection of hidden interactions in mobile applications. GhostUI provides before-and-after screenshots, simplified view hierarchies, gesture metadata, and task descriptions, allowing VLMs to better recognize concealed gestures and anticipate post-interaction states. Quantitative evaluations show that models fine-tuned on GhostUI outperform baseline VLMs, particularly in predicting hidden interactions and inferring post-interaction screens, underscoring GhostUI's potential as a foundation for advancing mobile task automation.
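
To make the dataset's structure concrete, the sketch below shows how one GhostUI sample might be represented in Python. The class and field names are hypothetical illustrations derived from the components listed in the abstract (before-and-after screenshots, simplified view hierarchies, gesture metadata, and task descriptions); they are not the dataset's actual schema.

    from dataclasses import dataclass

    # Illustrative sketch of a single GhostUI record. All names and types
    # here are hypothetical; the released dataset's schema may differ.
    @dataclass
    class GhostUISample:
        before_screenshot: str   # path to the screenshot before the gesture
        after_screenshot: str    # path to the screenshot after the gesture
        view_hierarchy: dict     # simplified view hierarchy of the "before" screen
        gesture: dict            # gesture metadata, e.g. {"type": "long_press", "x": 120, "y": 480}
        task_description: str    # natural-language description of the task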

Authors
Minkyu Kweon
Seoul National University, Seoul, Republic of Korea
Seokhyeon Park
Seoul National University, Seoul, Republic of Korea
Soohyun Lee
Seoul National University, Seoul, Republic of Korea
You Been Lee
Seoul National University, Seoul, Republic of Korea
Jeongmin Rhee
Seoul National University, Seoul, Republic of Korea
Jinwook Seo
Seoul National University, Seoul, Republic of Korea

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Human Steering and Interaction with AI

P1 - Room 111
7 presentations
2026-04-16, 20:15:00 to 21:45:00