Supporting Informed Self-Disclosure: Design Recommendations for Presenting AI-Estimates of Privacy Risks to Users

Abstract

People candidly discuss sensitive topics online under the perceived safety of anonymity; yet, for many, this perceived safety is tenuous, as miscalibrated risk perceptions can lead to over-disclosure. Recent advances in Natural Language Processing (NLP) afford an unprecedented opportunity to present users with quantified disclosure-based re-identification risk, i.e., "population risk estimates" (PREs). How can PREs be presented to users in a way that promotes informed decision-making, mitigating risk without encouraging unnecessary self-censorship? Using design fictions and comic-boarding, we storyboarded five design concepts for presenting PREs to users and evaluated them through an online survey with N = 44 Reddit users. We found participants had detailed conceptions of how PREs may impact risk awareness and motivation, but envisioned needing additional context and support to effectively interpret and act on risks. We distill our findings into four key design recommendations for how best to present users with quantified privacy risks to support informed disclosure decision-making.

Authors
Isadora Krsek
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Meryl Ye
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Wei Xu
Georgia Institute of Technology, Atlanta, Georgia, United States
Alan Ritter
Georgia Institute of Technology, Atlanta, Georgia, United States
Laura Dabbish
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Sauvik Das
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Privacy Risks and Perceptions

P1 - Room 123
7 presentations
2026-04-15, 18:00–19:30