Feedback by Design: Understanding and Overcoming User Feedback Barriers in Conversational Agents

Abstract

High-quality feedback is essential for effective human–AI interaction. It bridges knowledge gaps, corrects digressions, and shapes system behavior, both during interaction and throughout model development. Yet despite its importance, human feedback to AI is often infrequent and of low quality. This gap motivates a critical examination of human feedback during interactions with AI systems. To understand and overcome the challenges preventing users from giving high-quality feedback, we conducted two studies examining feedback dynamics between humans and conversational agents (CAs). Our formative study, through the lens of Grice's maxims, identified four Feedback Barriers (Common Ground, Verifiability, Communication, and Informativeness) that prevent users from providing high-quality feedback. Building on these findings, we derive three design desiderata and show that systems incorporating scaffolds aligned with these desiderata enabled users to provide higher-quality feedback. Finally, we issue a call to action to the broader AI community for advances in Large Language Model capabilities to overcome Feedback Barriers.

Authors
Nikhil Sharma
Johns Hopkins University, Baltimore, Maryland, United States
Zheng Zhang
Adobe Inc., San Jose, California, United States
Daniel Lee
Adobe Inc., San Jose, California, United States
Namita Krishnan
Adobe Inc., San Jose, California, United States
Guang-Jie Ren
Adobe Inc., San Jose, California, United States
Ziang Xiao
Johns Hopkins University, Baltimore, Maryland, United States
Yunyao Li
Adobe Inc., San Jose, California, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Conversational AI, Agency and Control

P1 - Room 118
7 presentations
2026-04-15, 18:00–19:30