GPTs are customized LLM apps built on OpenAI's large language models. Any individual or organization can use and create GPTs without programming skills. However, the rapid proliferation of over three million GPTs has raised significant privacy concerns. To explore the privacy perspectives of users and creators, we interviewed 23 GPT users with varying levels of creation experience. Our findings reveal blurred boundaries between user and creator roles, as well as gaps in their understanding of GPT data flows. Participants raised concerns about data handling during collection, processing, and dissemination, alongside the lack of privacy regulations. Creators also worried about losing their proprietary knowledge. In response, participants adopted practices such as self-censoring input, evaluating GPT actions, and minimizing usage traces. Focusing on the dual role of user-creators, we find that expertise and responsibility shape privacy perceptions. Based on these insights, we propose practical recommendations to improve data transparency and platform regulations.
https://dl.acm.org/doi/10.1145/3706598.3713540
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)