Interactive Debugging and Steering of Multi-Agent AI Systems

Abstract

Fully autonomous teams of LLM-powered AI agents are emerging that collaborate to perform complex tasks for users. What challenges do developers face when trying to build and debug these AI agent teams? In formative interviews with five AI agent developers, we identify core challenges: difficulty reviewing long agent conversations to localize errors, lack of support in current tools for interactive debugging, and the need for tool support to iterate on agent configuration. Based on these needs, we developed an interactive multi-agent debugging tool, AGDebugger, with a UI for browsing and sending messages, the ability to edit and reset prior agent messages, and an overview visualization for navigating complex message histories. In a two-part user study with 14 participants, we identify common user strategies for steering agents and highlight the importance of interactive message resets for debugging. Our studies deepen understanding of interfaces for debugging increasingly important agentic workflows.
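To make the "edit and reset" interaction concrete, below is a minimal, hypothetical Python sketch of a message history that a developer could inspect, correct, and truncate before letting the agent team resume. The class and method names (Message, ConversationHistory, edit, reset_to) are illustrative assumptions for this page only, not the actual AGDebugger API.

# Hypothetical sketch of the edit-and-reset debugging interaction described
# in the abstract. Names are illustrative, NOT the actual AGDebugger API.
from dataclasses import dataclass, field


@dataclass
class Message:
    sender: str   # which agent produced this message
    content: str  # message text the developer can inspect or edit


@dataclass
class ConversationHistory:
    messages: list[Message] = field(default_factory=list)

    def edit(self, index: int, new_content: str) -> None:
        # Overwrite the content of a prior message before replaying.
        self.messages[index].content = new_content

    def reset_to(self, index: int) -> None:
        # Discard everything after `index`, so the team resumes from there.
        self.messages = self.messages[: index + 1]


# Usage: a developer spots a faulty step, corrects it, resets the
# conversation to that point, and lets the agents continue from there.
history = ConversationHistory([
    Message("user", "Summarize the latest sales report."),
    Message("planner", "Assign the task to the coder agent."),
    Message("coder", "Reading file salse_report.csv ..."),  # typo caused a failure
])
history.edit(2, "Reading file sales_report.csv ...")
history.reset_to(2)
# ... the agent team would then resume from the corrected state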

Authors
Will Epperson
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Gagan Bansal
Microsoft Research, Redmond, Washington, United States
Victor C. Dibia
Microsoft Research, Redmond, Washington, United States
Adam Fourney
Microsoft Research, Redmond, Washington, United States
Jack Gerrits
Microsoft Research, Redmond, Washington, United States
Erkang (Eric) Zhu
Microsoft Research, Redmond, Washington, United States
Saleema Amershi
Microsoft Research AI, Redmond, Washington, United States
DOI

10.1145/3706598.3713581

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713581

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Coding and Development

G418+G419
7 presentations
2025-04-29 23:10:00 – 2025-04-30 00:40:00