Note-taking is critical during speeches and discussions, both for later summarization and organization and for reminding users of their questions and opinions in real time during question-and-answer sessions or for timely contributions in discussions. Manually typing notes on a smartphone, however, can be distracting and increase cognitive load. While LLMs can automatically generate summaries and highlights, AI-generated content may not match users’ intentions without user input. We therefore propose GazeNoter, an AI-copiloted AR system that allows users to swiftly select among diverse LLM-generated suggestions via gaze on an AR headset for real-time note-taking. GazeNoter leverages the AR headset as a medium for users to swiftly adjust the LLM output to match their intentions, forming a user-in-the-loop AI system that supports both within-context and beyond-context notes. We conducted two studies to verify the usability of GazeNoter: attending speeches in a static sitting condition, and walking meetings and discussions in a mobile walking condition.
https://dl.acm.org/doi/10.1145/3706598.3714294
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)