Investigating the Capabilities and Limitations of Machine Learning for Identifying Bias in English Language Data with Information and Heritage Professionals

Abstract

Despite numerous efforts to mitigate their biases, ML systems continue to harm already-marginalized people. While predominant ML approaches assume that bias can be removed and fair models can be created, we show that these goals are not always possible or desirable. We reframe the problem of ML bias by creating models to identify biased language, drawing attention to a dataset’s biases rather than trying to remove them. Then, through a workshop, we evaluated the models for a specific use case: the workflows of information and heritage professionals. Our findings demonstrate the limitations of ML for identifying bias due to its contextual nature, the way in which approaches to mitigating it can simultaneously privilege and oppress different communities, and its inevitability. We demonstrate the need to expand ML approaches to bias and fairness, providing a mixed-methods approach to investigating the feasibility of removing bias or achieving fairness in a given ML use case.

Award
Honorable Mention
Authors
Lucy Havens
University of Edinburgh, Edinburgh, United Kingdom
Benjamin Bach
Inria, Bordeaux, France
Melissa Terras
University of Edinburgh, Edinburgh, United Kingdom
Beatrice Alex
University of Edinburgh, Edinburgh, United Kingdom
DOI

10.1145/3706598.3713217

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713217

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Societal Perspectives

Annex Hall F203
7 presentations
2025-04-29 23:10:00 – 2025-04-30 00:40:00