Privacy Policies

Conference Name
CHI 2026
Privacy and Trust vs. Utility: Adoption of Commercial vs. Institutional AI Assistants Among University Users
Abstract

Generative AI assistants are being rapidly adopted in universities, supporting students in coursework and faculty in academic tasks. To address privacy concerns, some institutions have introduced institutional AI assistants, typically wrappers around commercial models (e.g., ChatGPT) with added governance and data protections. However, university-affiliated users appear to rely more on commercial tools (e.g., ChatGPT, Gemini). We conducted a survey (n=260) at one U.S. university to examine preferences, usage scenarios, and perceptions of trust, privacy, and experience with institutional and commercial AI. Participants trusted institutional tools more and considered them more privacy-protective; nevertheless, commercial tools were often favored for writing, programming, and learning because of their features and utility. Findings reveal a trade-off between privacy and trust versus utility, highlighting complementary adoption patterns and design opportunities for both institutional and commercial AI in higher education.

Authors
Yuting Yang
University of Michigan, Ann Arbor, Michigan, United States
Zixin Wang
University of Michigan, Ann Arbor, Michigan, United States
Rongjun Ma
Aalto University, Espoo, Finland
Florian Schaub
University of Michigan, Ann Arbor, Michigan, United States
A Scoping Review and Guidelines on Privacy Policy's Visualization from an HCI Perspective
Abstract

Privacy policies are a cornerstone of informed consent, yet a persistent gap exists between their legal intent and practical efficacy. Despite decades of research proposing various visualizations, user comprehension remains low, and designs rarely see widespread adoption. To understand this landscape and chart a path forward, we synthesized 65 top-tier papers using a framework adapted from user-centered design lifecycles. Our analysis yielded four findings about the field's evolution: (1) a trade-off between information load and decision efficacy, showing a shift from augmenting disclosures to managing cognitive load; (2) a co-evolutionary dynamic between design and automation, revealing that designs such as context awareness drove automation needs, while LLM breakthroughs now enable the semantic interpretation required to realize those designs; (3) a tension between generality and specificity, highlighting the divergence between standardized solutions and the growing necessity for specialized interaction in IoT and immersive environments; and (4) the balancing of stakeholder interests, where visualization efficacy is constrained by the interplay of regulatory mandates, developer capabilities, and provider incentives.

Authors
Shuning Zhang
Tsinghua University, Beijing, China
Eve He
Independent Researcher, Madison, Wisconsin, United States
Sixing Tao
University of Washington, Seattle, Washington, United States
Yuting Yang
University of Michigan, Ann Arbor, Michigan, United States
Ying Ma
The University of Melbourne, Melbourne, Australia
Ailei Wang
Tsinghua University, Beijing, China
Xin Yi
Tsinghua University, Beijing, China
Hewu Li
Tsinghua University, Beijing, China
Scrollytelling as an Alternative Format for Privacy Policies
Abstract

Privacy policies are long, complex, and rarely read, which limits their effectiveness in informed consent. We investigate scrollytelling, a scroll-driven narrative approach, as a privacy policy presentation format. We built a prototype that interleaves the full policy text with animated visuals to create a dynamic reading experience. In an online study (N=454), we compared our tool against text, two nutrition-label variants, and a standalone interactive visualization. Scrollytelling improved user experience over text, yielding higher engagement, lower cognitive load, greater willingness to adopt the format, and increased perceived clarity. It also matched other formats on comprehension accuracy and confidence, with only one nutrition-label variant performing slightly better. Changes in perceived understanding, transparency, and trust were small and statistically inconclusive. These findings suggest that scrollytelling can preserve comprehension while enhancing the experience of policy reading. We discuss design implications for accessible policy communication and identify directions for increasing transparency and user trust.

Authors
Gonzalo Gabriel Méndez
Universidad Politécnica de Valencia, Valencia, Spain
Jose Such
INGENIO (CSIC-UPV), Valencia, Spain
BAIT: Visual-illusion-inspired Privacy Preservation for Mobile Data Visualization
Abstract

With the prevalence of mobile data visualizations, there have been growing concerns about their privacy risks, especially shoulder-surfing attacks. Inspired by prior research on visual illusions, we propose BAIT, a novel approach that automatically generates privacy-preserving visualizations by stacking a decoy visualization over a given visualization. By adjusting different visual channels of the decoy (e.g., shape, position, tilt, size, color, and spatial frequency), BAIT allows visualization owners in close proximity to clearly discern the original visualization while misleading shoulder surfers at a distance with the decoy. We explicitly model human perception effects at different viewing distances to optimize the decoy visualization design. Privacy-preserving examples and two in-depth user studies demonstrate the effectiveness of BAIT in both a controlled lab setting and real-world scenarios.

Authors
Sizhe Cheng
Nanyang Technological University, Singapore, Singapore
Songheng Zhang
Singapore Management University, Singapore, Singapore
Dong Ma
Singapore Management University, Singapore, Singapore
Yong WANG
Nanyang Technological University, Singapore, Singapore
Helping Johnny Make Sense of Privacy Policies with LLMs
Abstract

Understanding and engaging with privacy policies is crucial for online privacy, yet these documents remain notoriously complex and difficult to navigate. We present PRISMe, an interactive browser extension that combines LLM-based policy assessment with a dashboard and customizable chat interface, enabling users to skim quick overviews or explore policy details in depth while browsing. We conduct a user study (N=22) with participants of diverse privacy knowledge to investigate how users interpret the tool's explanations and how it shapes their engagement with privacy policies, identifying distinct interaction patterns. Participants valued the clear overviews and conversational depth but flagged some issues, particularly adversarial robustness and hallucination risks. We therefore investigate how a retrieval-augmented generation (RAG) approach can alleviate these issues by re-running the chat queries from the study. Our findings surface design challenges as well as technical trade-offs, contributing actionable insights for developing future user-centered, trustworthy privacy policy analysis tools.

Authors
Vincent Freiberger
Leipzig University, Leipzig, Germany
Arthur Fleig
Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, Leipzig University, Leipzig, Germany
Erik Buchmann
Leipzig University, Leipzig, Germany
"Privacy across the boundary": Examining Perceived Privacy Risk Across Data Transmission and Sharing Ranges of Smart Home Personal Assistants
Abstract

As Smart Home Personal Assistants (SPAs) evolve into social agents, understanding user privacy necessitates interpersonal communication frameworks such as Privacy Boundary Theory (PBT). To ground our investigation, our three-phase preliminary study (1) identified transmission and sharing ranges as key boundary-related risk factors, (2) categorized relevant SPA functions and data types, and (3) analyzed commercial practices, revealing widespread data sharing and non-transparent safeguards. A subsequent mixed-methods study (N=412 survey; N=40 interviews among the survey participants) assessed users' perceived privacy risks across data types, transmission ranges, and sharing ranges. Results demonstrate a significant, non-linear escalation in perceived risk when data crosses two critical boundaries: the 'public network' (transmission) and 'third parties' (sharing). This boundary effect holds across data types and demographics. Furthermore, risk perception is modulated by data attributes and contextual privacy calculus. Conversely, anonymization shows limited efficacy, especially for third-party sharing, a finding attributed to user distrust. These findings empirically ground PBT in the SPA context and inform the design of boundary-aware privacy protection.

Authors
Shuning Zhang
Tsinghua University, Beijing, China
Shixuan Li
Tsinghua University, Beijing, China
Haobin Xing
Tsinghua University, Beijing, China
Jiarui Liu
Tsinghua University, Beijing, China
Yan Kong
CS, Beijing, China
Xin Yi
Tsinghua University, Beijing, China
Kanye Ye WANG
University of Macau, Macao, China
Hewu Li
Tsinghua University, Beijing, China
Tinker, Tailor, Trust: How Developers Create Privacy Policies With and Without AI
Abstract

For mobile developers to comply with privacy regulations, they must create privacy policies that accurately describe their apps' data practices. This requires a complete understanding of their apps' behaviors, including those of embedded third-party SDKs. Despite the complexity of this process, little is known about how privacy policies are created and validated. To investigate, we interviewed 20 developers from around the world about their processes, also observing them use a large language model (LLM) to prepare privacy policies for their apps. We found that developers struggle with collecting information about third-party SDKs, even when they use LLMs, and feel uncertain about the legal validity of LLM outputs. Many developers do not seek legal assistance and believe that, as long as app stores accept their privacy policies, they are protected. Our findings suggest that reliance on LLMs and developers' desire to externalize validation may result in increasingly unreliable privacy policies.

Authors
Shiva Mayahi
New Jersey Institute of Technology, Newark, New Jersey, United States
Noura Alomar
King Saud University, Riyadh, Saudi Arabia
Nathan Malkin
New Jersey Institute of Technology, Newark, New Jersey, United States