With the surge in literature focusing on the assessment and mitigation of unfair outcomes in algorithms, several open source "fairness toolkits" have recently emerged to make such methods widely accessible. However, the differences in approach and capability across existing fairness toolkits, and their fitness for purpose in commercial contexts, remain little studied. To this end, this paper identifies gaps between the capabilities of existing open source fairness toolkits and the needs of industry practitioners. Specifically, we undertake a comparative assessment of the strengths and weaknesses of six prominent open source fairness toolkits, and investigate the current landscape and gaps in fairness tooling through an exploratory focus group, a semi-structured interview, and an anonymous survey of data science/machine learning (ML) practitioners. We identify several gaps between the toolkits' capabilities and practitioners' needs, highlighting areas requiring attention and future directions towards tooling that better supports "fairness in practice."
Published at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2021, https://chi2021.acm.org/).
https://doi.org/10.1145/3411764.3445261