Encoding Privacy: Sociotechnical Dynamics of Data Protection Compliance Work
Description

How do developers shape data protection regulations when those regulations pass from the policy arena to technical teams for compliance? This study explores data protection compliance work (DPCW) as a sociotechnical process mediated by developers' attitudes and experiences. We draw on 14 semi-structured interviews with individuals responsible for GDPR and/or CCPA compliance to examine how developers approach DPCW and the resulting implications for user privacy. We highlight three key ways in which developers can shape compliance: by creatively interpreting ambiguous regulatory requirements; by exploiting expectations of technical expertise and low accountability; and by reducing DPCW to a one-time project. We conclude by discussing the implications for both researchers and practitioners and by recommending alternative ways to conceptualize and conduct DPCW. This article adds specificity to our understanding of why and how developers' attitudes and experiences shape data protection regulations in the field.

Redesigning Privacy with User Feedback: The Case of Zoom Attendee Attention Tracking
Description

Software engineers' unawareness of user feedback in the earlier stages of design contributes to privacy issues in many products. Although extensive research exists on gathering and analyzing user feedback, there is limited understanding of how developers can integrate that feedback to improve product designs so that they better meet users' privacy expectations. We use Zoom's deprecated attendee attention tracking feature to explore the challenges of integrating user privacy feedback into software development: we presented public online critiques of this feature to 18 software engineers in semi-structured interviews and observed how they redesigned it. Our results suggest that while integrating user feedback for privacy is potentially beneficial, it is also fraught with challenges, including polarized design suggestions, confirmation bias, and a limited scope of perceived responsibility.

Designing Accessible Obfuscation Support for Blind Individuals’ Visual Privacy Management
Description

Blind individuals commonly share photos in everyday life. Despite substantial interest from the blind community in independently obfuscating private information in photos, existing tools are designed without their input. In this study, we prototyped a preliminary screen-reader-accessible obfuscation interface to probe for feedback and design insights. We implemented one version of the prototype using off-the-shelf AI models (e.g., SAM, BLIP-2, ChatGPT) and a Wizard-of-Oz version that provides human-authored guidance. Through a user study with 12 blind participants who obfuscated diverse private photos using the prototype, we uncovered how they understood and approached manipulating private visual content, how they reacted to frictions such as the inaccuracy of existing AI models and cognitive load, and how they envisioned such tools being better designed to support their needs (e.g., guidelines for describing visual obfuscation effects, and co-creative interaction design that respects blind users' agency).
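To make the AI-model version of the prototype concrete, the sketch below shows one plausible way to chain off-the-shelf segmentation, captioning, and language models for obfuscation; it is not the authors' implementation. `segment_regions`, `caption_region`, and `is_private` are hypothetical wrappers around models such as SAM, BLIP-2, and ChatGPT, while the blurring step uses Pillow's real `GaussianBlur` filter.

```python
# Minimal sketch (not the paper's implementation) of an obfuscation pipeline:
# segment -> caption -> classify privacy -> blur. The three model wrappers are
# hypothetical placeholders for SAM, BLIP-2, and a chat LLM; only the Pillow
# blurring calls are concrete.
from typing import List, Tuple

from PIL import Image, ImageFilter

Box = Tuple[int, int, int, int]  # (left, upper, right, lower) in pixels


def segment_regions(image: Image.Image) -> List[Box]:
    """Hypothetical wrapper around a segmentation model such as SAM."""
    raise NotImplementedError


def caption_region(image: Image.Image, box: Box) -> str:
    """Hypothetical wrapper around a captioning model such as BLIP-2."""
    raise NotImplementedError


def is_private(caption: str) -> bool:
    """Hypothetical chat-LLM call asking whether the caption describes
    private content (e.g., documents, screens, faces)."""
    raise NotImplementedError


def obfuscate_private_regions(path: str, radius: int = 25) -> Image.Image:
    """Blur every region the model pipeline flags as private."""
    image = Image.open(path).convert("RGB")
    for box in segment_regions(image):
        if is_private(caption_region(image, box)):
            blurred = image.crop(box).filter(ImageFilter.GaussianBlur(radius))
            image.paste(blurred, box)
    return image
```

A screen-reader front end would sit on top of this loop, announcing each detected region's caption and describing the applied blur so that blind users can verify the effect, in line with the participants' request for descriptions of visual obfuscation effects.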

An Empathy-Based Sandbox Approach to Bridge the Privacy Gap among Attitudes, Goals, Knowledge, and Behaviors
Description

Managing privacy to reach privacy goals is challenging, as evidenced by the privacy attitude-behavior gap. Mitigating this discrepancy requires solutions that account for both system opacity and users' hesitation to test different privacy settings for fear of unintended data exposure. We introduce an empathy-based approach that allows users to experience, from the perspective of artificially generated personas, how privacy attributes may alter system outcomes in a risk-free sandbox environment. To generate realistic personas, we introduce a novel pipeline that augments the outputs of large language models (e.g., GPT-4) with few-shot learning, contextualization, and chain-of-thought prompting. Our empirical studies demonstrated that the generated personas are of adequate quality and highlighted how different personas change privacy-related applications (e.g., online advertising). Furthermore, users demonstrated cognitive and emotional empathy towards the personas when interacting with our sandbox. We offer design implications for downstream applications that aim to improve user privacy literacy.
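As a rough illustration of how such a persona-generation pipeline could be wired together, the sketch below assembles a few-shot, context-rich, chain-of-thought prompt for an LLM. It is not the paper's pipeline: the field names and example persona are invented placeholders, and the LLM call itself is left as a comment rather than tied to any specific API.

```python
# Minimal sketch (not the paper's pipeline) of persona generation combining
# few-shot examples, contextualization, and chain-of-thought prompting.
import json
from typing import List


def build_persona_prompt(context: str, examples: List[dict]) -> str:
    """Assemble a few-shot, chain-of-thought prompt for generating one persona."""
    shots = "\n\n".join(
        f"Example persona:\n{json.dumps(example, indent=2)}" for example in examples
    )
    return (
        "You generate realistic user personas for a privacy sandbox.\n"
        f"Application context: {context}\n\n"
        f"{shots}\n\n"
        "Think step by step about demographics, privacy attitudes, and typical "
        "behaviors, then output only a JSON persona with the same fields as the "
        "examples."
    )


if __name__ == "__main__":
    # Placeholder few-shot example; not data from the study.
    examples = [{
        "age": 34,
        "occupation": "nurse",
        "privacy_attitude": "pragmatist",
        "ad_personalization": "disabled",
    }]
    prompt = build_persona_prompt("online advertising settings", examples)
    print(prompt)  # send this prompt to an LLM (e.g., GPT-4) to obtain a persona
```

The returned persona would then drive the sandbox, letting users observe, for example, how online advertising outcomes shift under that persona's privacy settings.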
