Privacy Perceptions of Custom GPTs by Users and Creators

Abstract

GPTs are customized apps built on OpenAI's large language models. Any individual or organization can use and create GPTs without needing programming skills. However, the rapid proliferation of over three million GPTs has raised significant privacy concerns. To explore the privacy perspectives of users and creators, we interviewed 23 GPT users with varying levels of creation experience. Our findings reveal blurred lines between user and creator roles and their understanding of GPT data flows. Participants raised concerns about data handling during collection, processing, and dissemination, alongside the lack of privacy regulations. Creators also worried about the loss of their proprietary knowledge. In response, participants adopted practices like self-censoring input, evaluating GPT actions, and minimizing usage traces. Focusing on the dual role of user-creators, we find that expertise and responsibility shape privacy perceptions. Based on these insights, we propose practical recommendations to improve data transparency and platform regulations.

Authors
Rongjun Ma
Aalto University, Espoo, Finland
Caterina Maidhof
Universitat Politècnica de València (UPV), València, Spain
Juan Carlos Carrillo
Universidad Politécnica de Valencia, Valencia, Spain
Janne Lindqvist
Aalto University, Espoo, Finland
Jose Such
King's College London, London, United Kingdom
DOI

10.1145/3706598.3713540

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713540


Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Data Privacy and Ethics

Room G304
7 presentations
2025-04-30 20:10:00 – 21:40:00