Fictional Failures and Real-World Lessons: Ethical Speculation Through Design Fiction on Emotional Support Conversational AI

Abstract

Conversational artificial intelligence (CAI), which replicates human-to-human interaction as human-to-machine, is increasingly developed to address insufficient access to healthcare. In this paper, we use design fiction methods to speculate on the ethical consequences of CAI that offers emotional support to complement or replace mental healthcare. Through a near-future news article about a fictional, failed CAI, we explore safety and privacy concerns associated with mismatches between what an emotional support CAI is advertised to do, what it technically can do, and how it is likely to be used. We pose the following questions to researchers, regulators, and developers: How might we jointly and effectively address the anticipatable safety and privacy risks that emotional support CAI poses, including by formalizing ethical speculation processes? What streamlined and practically feasible measures can efficiently account for the most dangerous harms? How might differing stakeholder expectations about the CAI be bridged? Finally, in what scenarios is the decision not to design a CAI tool the most ethical or safest option? Content advisement: Contains discussion of disordered eating behaviors and intimate partner violence.

Authors
Faye Kollig
University of Colorado Boulder, Boulder, Colorado, United States
Jessica Pater
Parkview Health, Fort Wayne, Indiana, United States
Fayika Farhat Nova
Parkview Health, Fort Wayne, Indiana, United States
Casey Fiesler
University of Colorado Boulder, Boulder, Colorado, United States
DOI

10.1145/3706598.3713322

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713322

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Critics on AI

Room: G316+G317
7 presentations
2025-04-29 20:10:00 – 21:40:00