Abstraction Alignment: Comparing Model-Learned and Human-Encoded Conceptual Relationships

Abstract

While interpretability methods identify a model’s learned concepts, they overlook the relationships between concepts that make up its abstractions and inform its ability to generalize to new data. To assess whether models have learned human-aligned abstractions, we introduce abstraction alignment, a methodology to compare model behavior against formal human knowledge. Abstraction alignment externalizes domain-specific human knowledge as an abstraction graph, a set of pertinent concepts spanning levels of abstraction. Using the abstraction graph as a ground truth, abstraction alignment measures the alignment of a model’s behavior by determining how much of its uncertainty is accounted for by the human abstractions. By aggregating abstraction alignment across entire datasets, users can test alignment hypotheses, such as which human concepts the model has learned and where misalignments recur. In evaluations with experts, abstraction alignment differentiates seemingly similar errors, improves the verbosity of existing model-quality metrics, and uncovers improvements to current human abstractions.
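To make the core idea concrete, here is a minimal illustrative sketch (not the paper's implementation) of measuring how much of a classifier's uncertainty is accounted for by a human abstraction: leaf-class probabilities are summed up to their parent concepts in a hypothetical two-level abstraction graph, and the drop in entropy at the concept level indicates uncertainty that the human abstraction explains. The class names and the single-level `parent` mapping are invented for this toy example.

```python
import math

def aggregate(probs, parent):
    """Sum leaf-class probabilities up to their parent concepts
    in a (toy) one-level abstraction graph."""
    agg = {}
    for cls, p in probs.items():
        agg[parent[cls]] = agg.get(parent[cls], 0.0) + p
    return agg

def entropy(probs):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

# Toy example: the model confuses "oak" with "maple" (both trees),
# so most of its uncertainty disappears at the "tree" concept level.
probs = {"oak": 0.55, "maple": 0.40, "car": 0.05}
parent = {"oak": "tree", "maple": "tree", "car": "vehicle"}

leaf_H = entropy(probs)                      # uncertainty over leaf classes
concept_H = entropy(aggregate(probs, parent))  # uncertainty over concepts
explained = 1 - concept_H / leaf_H           # fraction explained by abstraction
```

A model whose errors stay within a human concept (oak vs. maple) yields a high `explained` fraction, while a model confusing trees with vehicles would retain most of its entropy at the concept level.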

Authors
Angie Boggust
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Hyemin Bang
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Hendrik Strobelt
IBM Research AI, Cambridge, Massachusetts, United States
Arvind Satyanarayan
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
DOI

10.1145/3706598.3713406

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713406

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Explainable AI

Room: G303
7 presentations
2025-04-29 01:20:00 – 2025-04-29 02:50:00