Exploring Mobile Touch Interaction with Large Language Models

Abstract

Interacting with Large Language Models (LLMs) for text editing on mobile devices currently requires users to break out of their writing environment and switch to a conversational AI interface. In this paper, we propose to control the LLM via touch gestures performed directly on the text. We first chart a design space that covers fundamental touch input and text transformations. In this space, we then concretely explore two control mappings: spread-to-generate and pinch-to-shorten, with visual feedback loops. We evaluate this concept in a user study (N=14) that compares three feedback designs: no visualisation, text length indicator, and length + word indicator. The results demonstrate that touch-based control of LLMs is both feasible and user-friendly, with the length + word indicator proving most effective for managing text generation. This work lays the foundation for further research into gesture-based interaction with LLMs on touch devices.

Authors
Tim Zindulka
University of Bayreuth, Bayreuth, Germany
Jannek Maximilian Sekowski
University of Bayreuth, Bayreuth, Germany
Florian Lehmann
University of Bayreuth, Bayreuth, Germany
Daniel Buschek
University of Bayreuth, Bayreuth, Germany
DOI

10.1145/3706598.3713554

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713554

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Mobile Input

Room: G416+G417
7 presentations
2025-05-01 18:00:00
2025-05-01 19:30:00