Memoro: Using Large Language Models to Realize a Concise Interface for Real-Time Memory Augmentation

Abstract

People have to remember an ever-expanding volume of information. Wearables that capture and retrieve information for memory augmentation can help, but they can be disruptive and cumbersome in real-world tasks such as social settings. To address this, we developed Memoro, a wearable audio-based memory assistant with a concise user interface. Memoro uses a large language model (LLM) to infer the user’s memory needs in a conversational context, semantically search memories, and present minimal suggestions. The assistant has two interaction modes: Query Mode for voicing queries and Queryless Mode for on-demand predictive assistance without an explicit query. Our study, in which participants (N=20) engaged in a real-time conversation, demonstrated that using Memoro reduced device interaction time and increased recall confidence while preserving conversational quality. We report quantitative results and discuss users’ preferences and experiences. This work contributes towards utilizing LLMs to design wearable memory augmentation systems that are minimally disruptive.
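
The paper itself describes the actual system; as a rough, hedged illustration of the semantic memory search step summarized above, the sketch below matches the current conversational context against stored memory snippets by embedding similarity. It is not the authors' implementation: the toy bag-of-words embedding, the example memories, and the retrieve() helper are hypothetical stand-ins for the LLM-based components described in the paper.

```python
# Minimal sketch (NOT Memoro's implementation): semantic search over stored
# memory snippets. A real system would embed text with an LLM and infer the
# query from the conversation; a toy bag-of-words embedding keeps this runnable.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts (stand-in for LLM embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical stored memories captured from earlier conversations.
memories = [
    "Alice mentioned her startup is called Lumen Labs",
    "The project deadline was moved to next Friday",
    "Bob recommended the book Thinking, Fast and Slow",
]

def retrieve(context: str, k: int = 1) -> list[str]:
    """Return the k stored memories most similar to the conversational context."""
    q = embed(context)
    return sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]

# Query Mode: the spoken query is the context; Queryless Mode would instead
# pass the recent transcript and let the model infer the implicit memory need.
print(retrieve("which startup did alice say she founded"))
```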

Award
Honorable Mention
Authors
Wazeer Deen Zulfikar
MIT Media Lab, Cambridge, Massachusetts, United States
Samantha Chan
MIT Media Lab, Cambridge, Massachusetts, United States
Pattie Maes
MIT Media Lab, Cambridge, Massachusetts, United States
Paper URL

doi.org/10.1145/3613904.3642450

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Health and AI C

313C
5 presentations
2024-05-15 20:00:00
2024-05-15 21:20:00