Evaluating the Experience of LGBTQ+ People Using Large Language Model Based Chatbots for Mental Health Support

Abstract

LGBTQ+ individuals are increasingly turning to chatbots powered by large language models (LLMs) to meet their mental health needs. However, little research has explored whether these chatbots can adequately and safely provide tailored support for this demographic. We interviewed 18 LGBTQ+ and 13 non-LGBTQ+ participants about their experiences with LLM-based chatbots for mental health needs. LGBTQ+ participants relied on these chatbots for mental health support, likely due to a lack of support in real life. Notably, while LLMs offer prompt support, they frequently fall short in grasping the nuances of LGBTQ+-specific challenges. Although fine-tuning LLMs to address LGBTQ+ needs can be a step in the right direction, it is not a panacea; the deeper issue is entrenched in societal discrimination. Consequently, we call on future researchers and designers to look beyond mere technical refinements and advocate for holistic strategies that confront and counteract the societal biases burdening the LGBTQ+ community.

Authors
Zilin Ma
Harvard University, Cambridge, Massachusetts, United States
Yiyang Mei
Emory University, Atlanta, Georgia, United States
Yinru Long
Vanderbilt University, Nashville, Tennessee, United States
Zhaoyuan Su
University of California Irvine, Irvine, California, United States
Krzysztof Z. Gajos
Harvard University, Allston, Massachusetts, United States
Paper URL

doi.org/10.1145/3613904.3642482

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Social Activism B

Room: 313A
4 presentations
2024-05-13 20:00:00 – 21:20:00