Extant literature has proposed an important role for trust in moderating people's willingness to disclose personal information, but there is scant HCI literature that deeply explores the relationship between privacy and trust in apparent privacy paradox situations. Attending to this gap, this paper reports a qualitative study examining how people account for continuing to use services that conflict with their stated privacy preferences, and how trust features in these accounts. Our findings undermine the notion that individuals engage in strategic thinking about privacy, raising important questions regarding the explanatory power of the well-known privacy calculus model and its proposed relationship between privacy and trust. Finding evidence of \textit{hopeful} trust in participants' accounts, we argue that trust allows people to morally account for their `paradoxical' information disclosure behavior. We propose that effecting greater alignment between people's privacy attitudes and privacy behavior---or `un-paradoxing privacy'---will require greater regulatory assurances of privacy.