The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships

Abstract

As conversational AI systems increasingly engage with people socially and emotionally, they bring notable risks and harms, particularly in human-AI relationships. However, these harms remain underexplored due to the private and sensitive nature of such interactions. This study investigates the harmful behaviors and roles of AI companions through an analysis of 35,390 conversation excerpts between 10,149 users and the AI companion Replika. We develop a taxonomy of AI companion harms encompassing six categories of harmful algorithmic behaviors: relational transgression, harassment, verbal abuse, self-harm, mis/disinformation, and privacy violations. These harmful behaviors stem from four distinct roles that AI plays: perpetrator, instigator, facilitator, and enabler. Our findings highlight relational harm as a critical yet understudied type of AI harm and emphasize the importance of examining AI's roles in harmful interactions to address root causes. We provide actionable insights for designing ethical and responsible AI companions that prioritize user safety and well-being.

Authors
Renwen Zhang
National University of Singapore, Singapore, Singapore
Han Li
National University of Singapore, Singapore, Singapore
Han Meng
National University of Singapore, Singapore, Singapore
Jinyuan Zhan
National University of Singapore, Singapore, Singapore
Hongyuan Gan
Hong Kong Baptist University, Hong Kong, China
Yi-Chieh Lee
National University of Singapore, Singapore, Singapore
DOI

10.1145/3706598.3713429

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713429

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: AI Ethics and Concerns

G314+G315
7 presentations
2025-04-30 01:20:00 – 2025-04-30 02:50:00