Designing for Privacy

Conference Name
CHI 2024
Encoding Privacy: Sociotechnical Dynamics of Data Protection Compliance Work
Abstract

How do developers shape data protection regulations when they are passed from the policy arena to technical teams for compliance? This study explores data protection compliance work (DPCW) as a sociotechnical process mediated by developers’ attitudes and experiences. We draw on 14 semi-structured interviews with individuals responsible for GDPR and/or CCPA compliance to examine how developers approach DPCW and the resulting implications for user privacy. We highlight three key ways in which developers can shape compliance: by creatively interpreting ambiguous regulatory requirements; by exploiting expectations of technical expertise and low accountability; and by reducing DPCW to a one-time project. We conclude by discussing the implications for both researchers and practitioners and by recommending how to conceptualize and conduct DPCW otherwise. This article adds specificity to understanding why and how developers' attitudes and experiences affect data protection regulations in the field.

Authors
Rohan Grover
University of Southern California, Los Angeles, California, United States
Paper URL

doi.org/10.1145/3613904.3642872

Video
Redesigning Privacy with User Feedback: The Case of Zoom Attendee Attention Tracking
Abstract

Software engineers' unawareness of user feedback in earlier stages of design contributes to privacy issues in many products. Although extensive research exists on gathering and analyzing user feedback, there is limited understanding about how developers can integrate user feedback to improve product designs to better meet users' privacy expectations. We use Zoom's deprecated attendee attention tracking feature to explore issues with integrating user privacy feedback into software development, presenting public online critiques about this deprecated feature to 18 software engineers in semi-structured interviews and observing how they redesign this feature. Our results suggest that while integrating user feedback for privacy is potentially beneficial, it's also fraught with challenges of polarized design suggestions, confirmation bias, and limited scope of perceived responsibility.

Authors
Tony W. Li
University of California, San Diego, La Jolla, California, United States
Arshia Arya
University of California, San Diego, La Jolla, California, United States
Haojian Jin
University of California, San Diego, La Jolla, California, United States
Paper URL

doi.org/10.1145/3613904.3642594

Video
Designing Accessible Obfuscation Support for Blind Individuals’ Visual Privacy Management
Abstract

Blind individuals commonly share photos in everyday life. Despite substantial interest from the blind community in being able to independently obfuscate private information in photos, existing tools are designed without their inputs. In this study, we prototyped a preliminary screen reader-accessible obfuscation interface to probe for feedback and design insights. We implemented a version of the prototype through off-the-shelf AI models (e.g., SAM, BLIP2, ChatGPT) and a Wizard-of-Oz version that provides human-authored guidance. Through a user study with 12 blind participants who obfuscated diverse private photos using the prototype, we uncovered how they understood and approached visual private content manipulation, how they reacted to frictions such as inaccuracy with existing AI models and cognitive load, and how they envisioned such tools to be better designed to support their needs (e.g., guidelines for describing visual obfuscation effects, co-creative interaction design that respects blind users’ agency).

Authors
Lotus Zhang
University of Washington, Seattle, Washington, United States
Abigale Stangl
University of Washington, Seattle, Washington, United States
Tanusree Sharma
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Yu-Yun Tseng
University of Colorado, Boulder, Colorado, United States
Inan Xu
University of California, Santa Cruz, Santa Cruz, California, United States
Danna Gurari
University of Colorado Boulder, Boulder, Colorado, United States
Yang Wang
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Leah Findlater
University of Washington, Seattle, Washington, United States
Paper URL

doi.org/10.1145/3613904.3642713

Video
An Empathy-Based Sandbox Approach to Bridge the Privacy Gap among Attitudes, Goals, Knowledge, and Behaviors
Abstract

Managing privacy to achieve privacy goals is challenging, as evidenced by the privacy attitude-behavior gap. Mitigating this discrepancy requires solutions that account for both system opaqueness and users' hesitation to test different privacy settings due to fears of unintended data exposure. We introduce an empathy-based approach that allows users to experience how privacy attributes may alter system outcomes in a risk-free sandbox environment from the perspective of artificially generated personas. To generate realistic personas, we introduce a novel pipeline that augments the outputs of large language models (e.g., GPT-4) using few-shot learning, contextualization, and chain of thought. Our empirical studies demonstrated the adequate quality of generated personas and highlighted the changes in privacy-related applications (e.g., online advertising) caused by different personas. Furthermore, users demonstrated cognitive and emotional empathy towards the personas when interacting with our sandbox. We offer design implications for downstream applications that improve user privacy literacy.

Authors
Chaoran Chen
University of Notre Dame, Notre Dame, Indiana, United States
Weijun Li
Zhejiang University, Hangzhou, China
Wenxin Song
The Chinese University of Hong Kong, Shenzhen, Shenzhen, Guangdong, China
Yanfang Ye
University of Notre Dame, Notre Dame, Indiana, United States
Yaxing Yao
Virginia Tech, Blacksburg, Virginia, United States
Toby Jia-Jun Li
University of Notre Dame, Notre Dame, Indiana, United States
Paper URL

doi.org/10.1145/3613904.3642363

Video