What is Safety? Corporate Discourse, Power, and the Politics of Generative AI Safety

Abstract

This work examines how leading generative artificial intelligence companies construct and communicate the concept of "safety" through public-facing documents. Drawing on critical discourse analysis, we analyze a corpus of corporate safety-related statements to explicate how authority, responsibility, and legitimacy are discursively established. These discursive strategies consolidate legitimacy for corporate actors, normalize safety as an experimental and anticipatory practice, and advance a seemingly participatory agenda toward safe technologies. We argue that uncritical uptake of these discourses risks reproducing corporate priorities and constraining alternative approaches to governance and design. The contribution of this work is twofold: first, to situate safety as a sociotechnical discourse that warrants critical examination; second, to caution human-computer interaction scholars against legitimizing corporate framings and to instead foreground accountability, equity, and justice. By interrogating safety discourses as artifacts of power, this paper advances a critical agenda for human-computer interaction scholarship on artificial intelligence.

Authors
Ankolika De
Pennsylvania State University, State College, Pennsylvania, United States
Gabriel Lima
Max Planck Institute for Security and Privacy, Bochum, Germany
Yixin Zou
Max Planck Institute for Security and Privacy, Bochum, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Technology, Safety and Justice

P1 - Room 112
7 presentations
2026-04-16, 18:00–19:30