Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence

Abstract

Biases in Artificial Intelligence (AI) systems or their results are an important issue that demands AI explainability. Despite the prevalence of AI applications, the general public is not necessarily equipped to understand how black-box algorithms work or how to deal with their biases. To inform designs for explainable AI (XAI), we conducted in-depth interviews with major stakeholders, both end-users (n = 24) and engineers (n = 15), to investigate how they made sense of AI applications and the associated biases in high- and low-stakes situations. We discuss users' perceptions of and attributions about AI biases, as well as their desired levels and types of explainability. We found that personal relevance and boundaries, together with the level of stakes, are two major dimensions for developing user trust, especially in biased situations, and for informing XAI designs.

Authors
Chien Wen (Tina) Yuan
National Taiwan Normal University, Taipei City, Taiwan
Nanyi Bi
National Taiwan University, Taipei, Taiwan
Ya-Fang Lin
Penn State University, State college, Pennsylvania, United States
Yuan Hsien Tseng
National Taiwan Normal University, Taipei City, Taiwan
Paper URL

https://doi.org/10.1145/3544548.3580945

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Explainable, Responsible, Manageable AI

Hall D
6 presentations
2023-04-26, 18:00–19:30