Impact of Privacy Protection Methods of Lifelogs on Remembered Memories

Abstract

Lifelogging is traditionally used for memory augmentation. However, recent research shows that users' trust in the completeness and accuracy of lifelogs might skew their memories. Privacy-protection alterations such as body blurring and content deletion are commonly applied to photos to avoid capturing sensitive information. However, their impact on how users remember memories remains unclear. To this end, we conduct a white-hat memory attack and report on an iterative experiment (N=21) comparing the impact of viewing 1) unaltered lifelogs, 2) blurred lifelogs, and 3) a subset of the lifelogs after deleting private ones, on confidently remembering memories. Findings indicate that all privacy methods affect memory quality similarly and that users tend to change their answers more in recognition scenarios than in recall scenarios. Results also show that users have high confidence in their remembered content across all privacy methods. Our work raises awareness about the mindful design of technological interventions.

Authors
Passant ElAgroudy
German Research Centre for Artificial Intelligence (DFKI), Kaiserslautern, Germany
Mohamed Khamis
University of Glasgow, Glasgow, United Kingdom
Florian Mathis
University of Glasgow, Glasgow, United Kingdom
Diana Irmscher
LMU Munich, Munich, Germany
Ekta Sood
University of Stuttgart, Stuttgart, Germany
Andreas Bulling
University of Stuttgart, Stuttgart, Germany
Albrecht Schmidt
LMU Munich, Munich, Germany
Paper URL

https://doi.org/10.1145/3544548.3581565

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Metrics and Methods

Room Y01+Y02
6 presentations
2023-04-27 01:35:00 – 03:00:00