fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks

Abstract

To design with AI models, user experience (UX) designers must assess the fit between the model and user needs. Based on user research, they need to contextualize the model's behavior and potential failures within their product-specific data instances and user scenarios. However, our formative interviews with ten UX professionals revealed that such proactive discovery of model limitations is challenging and time-intensive. Furthermore, designers often lack technical knowledge of AI and accessible exploration tools, which hampers their understanding of model capabilities and limitations. In this work, we introduce a failure-driven design approach to AI: a workflow that encourages designers to explore model behavior and failure patterns early in the design process. Our implementation, fAIlureNotes, a designer-centered failure exploration and analysis tool, supports designers in evaluating models and identifying failures across diverse user groups and scenarios. Our evaluation with UX practitioners shows that fAIlureNotes outperforms today's interactive model cards in assessing context-specific model performance.

Authors
Steven Moore
Technical University of Munich (TUM), Munich, Germany
Q. Vera Liao
Microsoft Research, Montreal, Quebec, Canada
Hariharan Subramonyam
Stanford University, Stanford, California, United States
Paper URL

https://doi.org/10.1145/3544548.3581242

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: AI Trust, Transparency and Fairness

Room Y05+Y06
6 presentations
2023-04-25, 20:10 – 21:35