Silva: Interactively Assessing Machine Learning Fairness Using Causality

Abstract

Machine learning models risk encoding unfairness on the part of their developers or data sources. However, assessing fairness is challenging: analysts might misidentify sources of bias, fail to notice them, or misapply metrics. In this paper we introduce Silva, a system for interactively exploring potential sources of unfairness in datasets and machine learning models. Silva directs user attention to relationships between attributes through a global causal view, provides interactive recommendations, presents intermediate results, and visualizes metrics. We describe the implementation of Silva, identify salient design and technical challenges, and evaluate the tool against an existing fairness optimization tool.

Keywords
machine learning fairness
bias
interactive system
Authors
Jing Nathan Yan
Cornell University, Ithaca, NY, USA
Ziwei Gu
Cornell University, Ithaca, NY, USA
Hubert Lin
Cornell University, Ithaca, NY, USA
Jeffrey M. Rzeszotarski
Cornell University, Ithaca, NY, USA
DOI

10.1145/3313831.3376447

Paper URL

https://doi.org/10.1145/3313831.3376447


Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Interactive ML & recommender systems

Paper session
Room: 312 NI'IHAU
5 presentations
2020-04-28 01:00:00 – 02:15:00