OmniActions: Predicting Digital Actions in Response to Real-World Multimodal Sensory Inputs with LLMs

Abstract

The progression toward "Pervasive Augmented Reality" envisions continuous, easy access to multimodal information. However, in many everyday scenarios users are occupied physically, cognitively, or socially, which increases the friction of acting on the multimodal information they encounter in the world. To reduce this friction, future interactive interfaces should intelligently provide quick access to digital actions based on the user's context. To explore the range of possible digital actions, we conducted a diary study in which participants captured and shared the media they intended to act on (e.g., images or audio), along with their desired actions and other contextual information. Using this data, we generated a holistic design space of digital follow-up actions that could be performed in response to different types of multimodal sensory inputs. We then designed OmniActions, a pipeline powered by large language models (LLMs) that processes multimodal sensory inputs and predicts follow-up actions on the target information, grounded in the derived design space. Using the empirical data collected in the diary study, we quantitatively evaluated three variations of LLM techniques (intent classification, in-context learning, and fine-tuning) and identified the most effective technique for our task. Finally, as an instantiation of the pipeline, we developed an interactive prototype and report preliminary user feedback on how people perceive and react to the action predictions and their errors.
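The abstract does not include implementation details, but a rough sketch can make the in-context learning variant it mentions concrete. The snippet below builds a few-shot prompt that maps a summarized sensory input to a follow-up action label. The action labels, the few-shot examples, and the OpenAI-style chat API backend are all assumptions for illustration, not the paper's actual design space or pipeline.

```python
# Minimal sketch of an in-context learning (few-shot) action predictor,
# in the spirit of the variant described in the abstract. Everything
# below (labels, examples, model choice) is a hypothetical illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical follow-up actions standing in for the paper's design space.
ACTIONS = [
    "save for later", "share with a contact", "set a reminder",
    "search for more information", "translate text", "navigate to a place",
]

# Hypothetical few-shot examples: (summarized multimodal context, action).
FEW_SHOT = [
    ("Image of a concert poster; user is walking to work", "set a reminder"),
    ("Audio of a song playing in a cafe; user's hands are full", "save for later"),
]

def predict_action(context_summary: str) -> str:
    """Predict a follow-up digital action for a summarized sensory input."""
    examples = "\n".join(f"Input: {c}\nAction: {a}" for c, a in FEW_SHOT)
    prompt = (
        "Given a description of what a user perceives and their situation, "
        f"pick the most likely follow-up action from: {', '.join(ACTIONS)}.\n\n"
        f"{examples}\n\nInput: {context_summary}\nAction:"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic label prediction
    )
    return resp.choices[0].message.content.strip()

print(predict_action("Image of a restaurant menu in French; user is traveling"))
```

In practice, the paper's pipeline also derives the context summary itself from raw sensory inputs (images, audio) before any such prediction step; that stage is omitted here.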

Authors
Jiahao Nick Li
UCLA, Los Angeles, California, United States
Yan Xu
Meta, Redmond, Washington, United States
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Stephanie Santosa
Facebook Reality Labs, Toronto, Ontario, Canada
Michelle Li
Reality Labs Research, Redmond, Washington, United States
Paper URL

https://doi.org/10.1145/3613904.3642068

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: AI and UI Design

Room: 316A
5 presentations
2024-05-14 23:00 – 2024-05-15 00:20