"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents

Abstract

The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigm shifts to protect the privacy of LLM-based CA users.

Authors
Zhiping Zhang
Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts, United States
Michelle Jia
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hao-Ping (Hank) Lee
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Bingsheng Yao
Rensselaer Polytechnic Institute, Troy, New York, United States
Sauvik Das
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Ada Lerner
Northeastern University, Boston, Massachusetts, United States
Dakuo Wang
Northeastern University, Boston, Massachusetts, United States
Tianshi Li
Northeastern University, Boston, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3613904.3642385


Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Conversational Agents

Room: 316C
5 presentations
2024-05-14 23:00:00 – 2024-05-15 00:20:00