Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling

Abstract

Audits are critical mechanisms for identifying the risks and limitations of deployed artificial intelligence (AI) systems. However, the effective execution of AI audits remains incredibly difficult, and practitioners often need to make use of various tools to support their efforts. Drawing on interviews with 35 AI audit practitioners and a landscape analysis of 435 tools, we compare the current ecosystem of AI audit tooling to practitioner needs. While many tools are designed to help set standards and evaluate AI systems, they often fall short in supporting accountability. We outline challenges practitioners faced in their efforts to use AI audit tools and highlight areas for future tool development beyond evaluation—from harms discovery to advocacy. We conclude that the available resources do not currently support the full scope of AI audit practitioners' needs and recommend that the field move beyond tools for just evaluation and towards more comprehensive infrastructure for AI accountability.

Authors
Victor Ojewale
Brown University, Providence, Rhode Island, United States
Ryan Steed
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Briana Vecchione
Data & Society Research Institute, New York, New York, United States
Abeba Birhane
Trinity College Dublin, Dublin, Ireland
Inioluwa Deborah Raji
University of California, Berkeley, Berkeley, California, United States
DOI

10.1145/3706598.3713301

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713301

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Perception of Systems

Room: G401
5 presentations
2025-04-29 18:00–19:30