Zeno: An Interactive Framework for Behavioral Evaluation of Machine Learning

Abstract

Machine learning models with high accuracy on test data can still produce systematic failures, such as harmful biases and safety issues, when deployed in the real world. To detect and mitigate such failures, practitioners run behavioral evaluation of their models, checking model outputs for specific types of inputs. Behavioral evaluation is important but challenging, requiring that practitioners discover real-world patterns and validate systematic failures. We conducted 18 semi-structured interviews with ML practitioners to better understand the challenges of behavioral evaluation and found that it is a collaborative, use-case-first process that is not adequately supported by existing task- and domain-specific tools. Using these findings, we designed Zeno, a general-purpose framework for visualizing and testing AI systems across diverse use cases. In four case studies with participants using Zeno on real-world models, we found that practitioners were able to reproduce previous manual analyses and discover new systematic failures.

Authors
Ángel Alexander Cabrera
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Erica Fu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Donald R. Bertucci
Oregon State University, Corvallis, Oregon, United States
Kenneth Holstein
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Ameet Talwalkar
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jason I. Hong
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Adam Perer
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3544548.3581268

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Interactive Learning Support Systems

Hall G1
6 presentations
2023-04-26 23:30:00 to 2023-04-27 00:55:00