What Lies Beneath? Exploring the Impact of Underlying AI Model Updates in AI-Infused Systems

Abstract

AI models are constantly evolving, with new versions released frequently. Human-AI interaction guidelines encourage notifying users about changes in model capabilities, ideally supported by thorough benchmarking. However, as AI systems integrate into domain-specific workflows, exhaustive benchmarking can become impractical, often resulting in silent or minimally communicated updates. This raises critical questions: Can users notice these updates? What cues do they rely on to distinguish between models? How do such changes affect their behavior and task performance? We address these questions through two studies in the context of facial recognition for historical photo identification: an online experiment examining users’ ability to detect model updates, followed by a diary study exploring perceptions in a real-world deployment. Our findings highlight challenges in noticing AI model updates, their impact on downstream user behavior and performance, and how they lead users to develop divergent folk theories. Drawing on these insights, we discuss strategies for effectively communicating model updates in AI-infused systems.

Authors
Vikram Mohanty
Bosch Research North America, Sunnyvale, California, United States
Jude Lim
Independent Researcher, Arlington, Virginia, United States
Kurt Luther
Virginia Tech, Arlington, Virginia, United States
DOI

10.1145/3706598.3713751

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713751

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Human-Agent Interaction

Annex Hall F204
7 presentations
2025-04-29, 20:10–21:40