Human-AI Interaction & GenAI

Conference Name
CHI 2026
Exploring and Probing the Algorithmic Gaze on Bodies and Well-being
Abstract

Machine Learning (ML) models are increasingly applied to wearable self-tracking technologies to offer daily classifications and recommendations for well-being. This shift introduces design challenges, particularly regarding the opacity of training processes and model outputs. We contribute to this space with a conceptual framing of the algorithmic gaze on body and well-being, which we use to critically investigate long-term engagement with a wearable self-tracking technology. Through an autoethnographic study with the Oura Ring, we identified three themes highlighting tensions between the wearer and the ML models, namely: conflicting narratives of daily activities, fine-tuning of the human, and blurry boundaries between multiple bodies using such devices simultaneously. Building on these themes, we used fabulation as a method to craft narratives that probe the tensions arising from the algorithmic gaze, from which we offer alternative design openings for ML in wearable self-tracking devices.

Authors
Louie Søs Meyer
IT University of Copenhagen, Copenhagen, Denmark
Vasiliki Tsaknaki
IT University of Copenhagen, Copenhagen, Denmark
Texterial: A Text-as-Material Interaction Paradigm for LLM-Mediated Writing
Abstract

What if text could be sculpted and refined like clay, or cultivated and pruned like a plant? Texterial reimagines text as a material that users can grow, sculpt, and transform. Current generative-AI models enable rich text operations, yet rigid, linear interfaces often mask such capabilities. We explore how the text-as-material metaphor can reveal AI-enabled operations, reshape the writing process, and foster compelling user experiences. A formative study shows that users readily reason with text-as-material, informing a conceptual framework that explains how material metaphors shift mental models and bridge gulfs of envisioning, execution, and evaluation in LLM-mediated writing. We present the design and evaluation of two technical probes: Text as Clay, where users refine text through gestural sculpting, and Text as Plants, where ideas grow serendipitously over time. This work expands the design space of writing tools by treating text as a living, malleable medium.

Award
Honorable Mention
Authors
Jocelyn J. Shen
Microsoft Research, Redmond, Washington, United States
Nicolai Marquardt
Microsoft Research, Redmond, Washington, United States
Hugo Romat
Microsoft, Seattle, Washington, United States
Ken Hinckley
Microsoft Research (Emeritus), Redmond, Washington, United States
Nathalie Riche
Microsoft Research, Redmond, Washington, United States
Fanny Chevalier
Microsoft Research, Redmond, Washington, United States
AnnotateGPT: Designing Human–AI Collaboration in Pen-Based Document Annotation
Abstract

Providing high-quality feedback on writing is cognitively demanding, requiring reviewers to identify issues, suggest fixes, and ensure consistency. We introduce AnnotateGPT, a system that uses pen-based annotations as an input modality for AI agents to assist with essay feedback. AnnotateGPT enhances feedback by interpreting handwritten annotations and extending them throughout the document. One AI agent classifies the purpose of each annotation, which is confirmed or corrected by the user. A second AI agent uses the confirmed purpose to generate contextually relevant feedback for other parts of the essay. In a study with 12 novice teachers annotating essays, we compared AnnotateGPT with a baseline pen-based tool without AI support. Our findings demonstrate how reviewers used annotations to regulate AI feedback generation, refine AI suggestions, and incorporate AI-generated feedback into their review process. We highlight design implications for AI-augmented feedback systems, including balanced human-AI collaboration and the use of pen annotations as a subtle interaction modality.

Authors
Benedict Leung
Ontario Tech University, Oshawa, Ontario, Canada
Mariana Shimabukuro
Ontario Tech University, Oshawa, Ontario, Canada
Christopher Collins
Ontario Tech University, Oshawa, Ontario, Canada
Video
PointAloud: An Interaction Suite for AI-Supported Pointer-Centric Think-Aloud Computing
Abstract

Think-Aloud Computing, a method for capturing users’ verbalized thoughts during software tasks, elicits rich contextual insights into users’ evolving intentions, struggles, and decision-making processes in real time. However, existing approaches face practical challenges: users often lack awareness of what the system captures, are not effectively encouraged to speak, and miss or are interrupted by system feedback. Additionally, thinking aloud should feel worthwhile to users because of the contextual AI assistance it enables. To better support and harness Think-Aloud Computing, we introduce PointAloud, a suite of novel AI-driven pointer-centric interactions for in-the-moment verbalization encouragement, low-distraction system feedback, and contextually rich work process documentation alongside proactive AI assistance. Our user study with 12 participants provides insights into the value of pointer-centric think-aloud computing for work process documentation and human-AI co-creation. We conclude by discussing the broader implications of our findings and design considerations for pointer-centric, AI-supported Think-Aloud Computing workflows.

Authors
Frederic Gmeiner
Autodesk Research, Toronto, Ontario, Canada
John R. Thompson
Autodesk Research, Atlanta, Georgia, United States
George Fitzmaurice
Autodesk Research, Toronto, Ontario, Canada
Justin Matejka
Autodesk Research, Toronto, Ontario, Canada
Gesturing Toward Abstraction: Multimodal Convention Formation in Collaborative Physical Tasks
Abstract

A quintessential feature of human intelligence is the ability to create ad hoc conventions over time to achieve shared goals efficiently. We investigate how communication strategies evolve through repeated collaboration as people coordinate on shared procedural abstractions. To this end, we conducted an online unimodal study (n = 98) using natural language to probe abstraction hierarchies. In a follow-up lab study (n = 40), we examined how multimodal communication (speech and gestures) changed during physical collaboration. Pairs used augmented reality to isolate their partner’s hand and voice; one participant viewed a 3D virtual tower and sent instructions to the other, who built the physical tower. Participants became faster and more accurate by establishing linguistic and gestural abstractions and using cross-modal redundancy to emphasize key changes from previous interactions. Based on these findings, we extend probabilistic models of convention formation to multimodal settings, capturing shifts in modality preferences. Our findings and model provide building blocks for designing convention-aware intelligent agents situated in the physical world.

Authors
Kiyosu Maeda
Princeton University, Princeton, New Jersey, United States
William P. McCarthy
University of California, San Diego, La Jolla, California, United States
Ching-Yi Tsai
Princeton University, Princeton, New Jersey, United States
Jeffrey Mu
Brown University, Providence, Rhode Island, United States
Haoliang Wang
MIT, Cambridge, Massachusetts, United States
Robert Hawkins
Stanford University, Stanford, California, United States
Judith E. Fan
Stanford University, Stanford, California, United States
Parastoo Abtahi
Princeton University, Princeton, New Jersey, United States
Notational Animating: An Interactive Approach to Creating and Editing Animation Keyframes
Abstract

We introduce the concept of notational animating, an interaction paradigm for animation authoring in which users sketch high-level notations over static drawings to indicate intended motions, which are then interpreted by automatic methods (e.g., GenAI models) to generate animation keyframes. Sketched notations have long served as cognitive instruments for animators, capturing forces, poses, dynamics, paths, and other animation features. However, our analysis of 135 real-world sketches shows that such notations are often contextual, ambiguous, and combinational. To facilitate interpretation, we first formalize these notations into a structured animation representation (i.e., source, path, and target). We then built an animation authoring system that translates high-level notations into the formalized intended animation, provides dynamic UI widgets for fine-grained parameter control, and establishes a closed feedback loop to resolve ambiguity. Finally, through a preliminary study with animators, we assess the usability of notational animating, reflect on its affordances, and identify its contexts of use.

Award
Honorable Mention
Authors
Xinyu Shi
University of Waterloo, Waterloo, Ontario, Canada
Li-Yi Wei
Adobe Research, San Jose, California, United States
Nanxuan Zhao
Adobe Research, San Jose, California, United States
Jian Zhao
University of Waterloo, Waterloo, Ontario, Canada
Rubaiat Habib Kazi
Adobe Research, Seattle, Washington, United States
Towards AI as Colleagues: Multi-Agent System Improves Structured Ideation Processes
Abstract

Most AI systems today are designed to manage tasks and execute predefined steps. This makes them effective for process coordination but limited in their ability to engage in joint problem-solving with humans or contribute new ideas. We introduce MultiColleagues, a multi-agent conversational system that shows how AI agents can act as colleagues by conversing with each other, sharing new ideas, and actively involving users in collaborative ideation processes. In a within-subjects study with 20 participants, we compared MultiColleagues to a single-agent baseline. Results show that MultiColleagues fostered stronger perceived social presence, and participants rated their outcomes as higher in quality and novelty, with more elaboration during ideation. These findings demonstrate the potential of AI agents to move beyond process partners toward colleagues that share intent, strengthen group dynamics, and collaborate with humans to advance ideas.

Authors
Kexin Quan
University of Illinois Urbana-Champaign, Champaign, Illinois, United States
Dina Albassam
University of Illinois Urbana-Champaign, Champaign, Illinois, United States
Mengke Wu
University of Illinois Urbana-Champaign, Champaign, Illinois, United States
Zijian Ding
University of Maryland, College Park, Maryland, United States
Jessie Chin
University of Illinois Urbana-Champaign, Champaign, Illinois, United States