Conversational artificial intelligence (CAI), which replicates human-to-human interaction in human-to-machine exchanges, is increasingly developed to address insufficient access to healthcare. In this paper, we use design fiction methods to speculate on the ethical consequences of CAI that offers emotional support to complement or replace mental healthcare. Through a near-future news article about a fictional, failed CAI, we explore the safety and privacy concerns that arise from mismatches between what an emotional support CAI is advertised to do, what it technically can do, and how it is likely to be used. We pose the following questions to researchers, regulators, and developers: How might we jointly and effectively address the anticipatable safety and privacy risks that emotional support CAI poses, including by formalizing ethical speculation processes? What streamlined and practically feasible measures can efficiently account for the most dangerous harms? How might differing stakeholder expectations about such CAI be bridged? Finally, in what scenarios is the decision not to design a CAI tool the most ethical or safest option?

Content advisement: Contains discussion of disordered eating behaviors and intimate partner violence.
https://dl.acm.org/doi/10.1145/3706598.3713322
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)