Tap&Say: Touch Location-Informed Large Language Model for Multimodal Text Correction on Smartphones

Abstract

While voice input offers a convenient alternative to traditional text editing on mobile devices, practical implementations face two key challenges: 1) reliably distinguishing between editing commands and content dictation, and 2) effortlessly pinpointing the intended edit location. We propose Tap&Say, a novel multimodal system that combines touch interactions with Large Language Models (LLMs) for accurate text correction. By tapping near an error, users signal both their edit intent and its location, addressing the two challenges; the user then speaks the correction text. Tap&Say uses the touch location, the voice input, and the existing text to generate contextually relevant correction suggestions. To this end, we propose a novel touch location-informed attention layer that integrates the tap location into the LLM's attention mechanism. We fine-tuned the touch location-informed LLM on synthetic touch locations and correction commands, achieving significantly higher correction accuracy than the state-of-the-art method VT. A 16-person user study showed that Tap&Say outperforms VT, with 16.4% shorter task completion time and 47.5% fewer keyboard clicks, and is preferred by users.
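The abstract describes, but does not specify, how the tap location enters the LLM's attention computation. As a rough illustration, the PyTorch sketch below implements one plausible reading: an additive attention bias that decays with a token's distance from the tapped position. The class name, the tap_scale parameter, and the assumption that the 2D tap coordinate has already been mapped to a nearest-token index are all hypothetical, not details taken from the paper.

```python
import torch
import torch.nn as nn


class TouchLocationInformedAttention(nn.Module):
    """Illustrative sketch of a touch location-informed attention layer.

    The paper only states that the tap location is integrated into the
    LLM's attention mechanism; the additive, distance-based bias below
    is one plausible formulation, not the authors' exact one. For
    simplicity, the 2D tap coordinate is assumed to be already mapped
    to the index of the nearest token (tap_index).
    """

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Learnable scale for how strongly tap proximity biases attention.
        self.tap_scale = nn.Parameter(torch.tensor(1.0))

    def forward(self, x: torch.Tensor, tap_index: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); tap_index: (batch,) token index
        # nearest to the user's tap.
        batch, seq_len, _ = x.shape
        positions = torch.arange(seq_len, device=x.device).unsqueeze(0)
        # Bias decays linearly with distance from the tapped token, so
        # every query attends preferentially to keys near the tap.
        dist = (positions - tap_index.unsqueeze(1)).abs().float()
        bias = -self.tap_scale * dist  # (batch, seq_len)
        # Broadcast the per-key bias over all queries, then over heads:
        # nn.MultiheadAttention accepts a float attn_mask of shape
        # (batch * n_heads, seq_len, seq_len) that is added to the logits.
        mask = bias.unsqueeze(1).expand(batch, seq_len, seq_len)
        mask = mask.repeat_interleave(self.attn.num_heads, dim=0)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out


# Minimal usage with made-up shapes:
layer = TouchLocationInformedAttention(d_model=512, n_heads=8)
x = torch.randn(2, 16, 512)   # two sequences of 16 token embeddings
tap = torch.tensor([3, 10])   # tapped token index for each sequence
out = layer(x, tap)           # -> (2, 16, 512)
```

A distance-based additive bias is a natural fit here because it leaves the pretrained attention computation intact and only nudges probability mass toward the tapped region, which matches the abstract's claim that the model merely "utilizes" the tap location rather than hard-constraining the edit span.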

Authors
Maozheng Zhao
Stony Brook University, Stony Brook, New York, United States
Michael Xuelin Huang
Google, Mountain View, California, United States
Nathan G. Huang
Westlake High School, Austin, Texas, United States
Shanqing Cai
Google, Mountain View, California, United States
Henry Huang
Harvard University, Cambridge, Massachusetts, United States
Michael G. Huang
University of Texas at Austin, Austin, Texas, United States
Shumin Zhai
Google, Mountain View, California, United States
IV Ramakrishnan
Stony Brook University, Stony Brook, New York, United States
Xiaojun Bi
Stony Brook University, Stony Brook, New York, United States
DOI

10.1145/3706598.3713376

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713376

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Language Matters

G318+G319
6 presentations
2025-05-01 18:00–19:30