This paper draws attention to new complexities of deploying AI systems in sensitive contexts such as welfare allocation. AI is increasingly used in public administration with the promise of improving decision-making, but to succeed, it needs access to all the criteria that inform decisions, formal and informal. In this paper, we empirically explore the informal classifications that caseworkers in a Danish job centre use to make unemployed welfare seekers ‘fit’ into the formal categories. Our findings show that the classifications caseworkers use are documentable, and hence in principle traceable to AI. To the caseworkers, however, these classifications are at odds with the stable explanations assumed by any recording system, as they involve negotiated and situated judgements of people’s character. Thus, for moral reasons, caseworkers find them ill-suited for formal representation and would never write them down. As a result, AI is denuded of the real-world (and real-work) nature of decision-making. This matters to CSCW because the question is not only whether AI can ‘do’ decision-making, as previous research suggests; we show that problems may also arise from people’s unwillingness to provide the data these systems need. The purpose of this paper is to present the empirical results of this research, followed by a discussion of implications for AI-supported practice and future research.
https://doi.org/10.1145/3449176
The 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing