Characterizing User-Reported Risks across LLM Chatbots

Abstract

As Large Language Models (LLMs) become increasingly integral to daily life, users are engaging with multiple LLM chatbots for various needs; however, prior research on LLM risks often remains lab-based or focuses on a single LLM such as ChatGPT or a single risk such as privacy. To gain a multi-risk, cross-chatbot understanding of user experiences, we analyze Reddit discussions around seven major LLM chatbots using the NIST AI Risk Management Framework. We find that user-reported risks are unevenly distributed and chatbot-specific: ChatGPT is associated with safety and fairness concerns, Gemini with privacy, and Claude with security and resilience. Less frequent risks, such as explainability and privacy, appear as user trade-offs, whereas prevalent risks like fairness are experienced as direct harms. Our findings underscore the need to operationalize chatbot-specific risk mitigation, moving beyond system-centered approaches to human-centered interventions that align with users' lived experiences.

Authors
Lingyao Li
University of South Florida, Tampa, Florida, United States
Renkai Ma
University of Cincinnati, Cincinnati, Ohio, United States
Zhaoqian Xue
University of Pennsylvania, Philadelphia, Pennsylvania, United States
Junjie Xiong
Missouri University of Science and Technology, Rolla, Missouri, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Ethics, Inclusion & Algorithmic Impact

P1 - Room 116
7 presentations
2026-04-16, 20:15–21:45