Why the Fine, AI? The Effect of Explanation Level on Citizens' Fairness Perception of AI-based Discretion in Public Administrations

Abstract

The integration of Artificial Intelligence into decision-making processes within public administration extends to AI systems that exercise administrative discretion. This raises fairness concerns among citizens and may ultimately lead to the abandonment of such systems. It remains unclear which elements of an explanation affect citizens' fairness perceptions and their adoption of the technology. In a video-vignette online survey (N=847), we investigated the impact of explanation levels on citizens' perceptions of informational fairness, distributive fairness, and their level of system adoption. We increased the detail of explanations in three stages: no explanation, explanations of the decision factors, and explanations of the factors and their importance. We found that more detailed explanations improved perceptions of informational and distributive fairness but did not affect citizens' willingness to reuse the system. Interestingly, citizens with higher AI literacy expressed greater willingness to adopt the system, regardless of explanation level. Qualitative findings revealed that greater human involvement and appeal mechanisms could positively influence citizens' perceptions. Our findings highlight the importance of citizen-centered design of AI-based decision-making in public administration.

Authors
Saja Aljuneidi
OFFIS - Institute for Information Technology, Oldenburg, Germany
Wilko Heuten
OFFIS - Institute for Information Technology, Oldenburg, Germany
Larbi Abdenebaoui
OFFIS - Institute for Information Technology, Oldenburg, Germany
Maria K. Wolters
OFFIS - Institute for Information Technology, Oldenburg, Germany
Susanne Boll
University of Oldenburg, Oldenburg, Germany
Paper URL

doi.org/10.1145/3613904.3642535

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Explainable AI

Room: 313B
5 presentations
2024-05-16, 20:00–21:20