As our most advanced technologies, such as AI, become both infrastructural and opaque, experts must educate and engage the broader public. To that end, we developed an Augmented Reality (AR) museum installation about facial recognition and data collection that served both as a medium of public education and as a platform for collecting several kinds of data—though, notably, not facial or other biometric data—from more than 100,000 museum visitors. We explain our design process through four animating tensions: comfort/discomfort, simplicity/complexity, neutrality/critique, and the individual/communal. Using thematic analysis of interviews and surveys, we draw insights into how people exposed to problematic technologies in a ‘safe space’ such as a museum make sense of these issues: with levity and resignation but also reverence, often specifically rooted in local cultures. We conclude with implications of the guiding principle derived from this work: “using problematic technology to teach about problematic technology.”
https://dl.acm.org/doi/10.1145/3706598.3713710
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)