As Large Language Models (LLMs) become increasingly integral to daily life, users engage with multiple LLM chatbots for diverse needs; however, prior research on LLM risks has largely remained lab-based or focused on a single LLM, such as ChatGPT, or a single risk, such as privacy. To gain a multi-risk, cross-chatbot understanding of user experiences, we analyze Reddit discussions around seven major LLM chatbots using the NIST AI Risk Management Framework. We find that user-reported risks are unevenly distributed and chatbot-specific: ChatGPT is associated with safety and fairness concerns, Gemini with privacy, and Claude with security and resilience. Less frequent risks, such as explainability and privacy, are experienced as trade-offs users accept, whereas prevalent risks such as fairness are experienced as direct harms. Our findings underscore the need to operationalize chatbot-specific risk mitigation, moving beyond system-centered approaches to human-centered interventions that align with users' lived experiences.
ACM CHI Conference on Human Factors in Computing Systems