Users often struggle to make rational privacy decisions within the notice and choice framework, primarily because privacy policies are difficult to understand and process. Recent studies suggest that large language model (LLM)-based chatbots can improve privacy policy comprehension and privacy awareness. However, the specific expectations and needs that users bring to LLM-based chatbots for rational privacy decision-making, and whether those needs are met, remain underexplored. Employing a technology probe and focus groups, we investigate the roles users expect such chatbots to play and the needs that arise during interaction. We further interview three experts to corroborate our findings. Our study reveals a typology of three user-expected roles for LLM-based chatbots (Interpreter, Guardian, and Evaluator), along with the satisfied and unmet needs within these roles. Finally, we outline implications that clarify where LLM-based chatbots are viable in privacy decision-making and where they are not, while highlighting structural limitations of the notice and choice framework.