How Accurate Does It Feel? - Human Perception of Different Types of Classification Mistakes

Abstract

Supervised machine learning relies on large datasets, often with ground-truth labels annotated by humans. While some data points are easy to classify, others are hard to classify, which reduces inter-annotator agreement. This introduces noise for the classifier and might affect how users perceive the classifier's performance. In our research, we investigated whether the classification difficulty of a data point influences how strongly a prediction mistake reduces the "perceived accuracy". In an experimental online study, 225 participants interacted with three fictional classifiers of equal accuracy (73%). The classifiers made prediction mistakes on three different types of data points (easy, difficult, impossible). After the interaction, participants judged the classifier's accuracy. We found that not all prediction mistakes reduced the perceived accuracy equally. Furthermore, the perceived accuracy differed significantly from the calculated accuracy. We conclude that accuracy and related measures seem unsuitable for representing how users perceive the performance of classifiers.
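The "calculated accuracy" the abstract contrasts with perceived accuracy is the standard fraction of correct predictions, which by definition weighs every mistake equally regardless of how difficult the data point was. A minimal sketch (with made-up toy labels, not the study's data):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Toy example: 30 predictions, 22 correct -> ~0.73, matching the
# accuracy level used for all three classifiers in the study.
y_true = [1] * 30
y_pred = [1] * 22 + [0] * 8
print(round(accuracy(y_true, y_pred), 2))  # 0.73
```

The study's point is that this measure is blind to *which* items the mistakes fall on (easy, difficult, or impossible), whereas human judgments of the same classifier are not.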

Authors
Andrea Papenmeier
GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany
Dagmar Kern
GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany
Daniel Hienert
GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany
Yvonne Kammerer
Hochschule der Medien, Stuttgart, Germany
Christin Seifert
University of Duisburg-Essen, Essen, Germany
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501915

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Mistakes, Explainability

383-385
5 presentations
2022-05-03 18:00:00
2022-05-03 19:15:00