How Tech Workers Contend with Hazards of Humanlikeness in Generative AI

Abstract

Generative AI's humanlike qualities are driving its rapid adoption in professional domains. However, this anthropomorphic appeal raises concerns from HCI and responsible AI scholars about potential hazards and harms, such as overtrust in system outputs. To investigate how technology workers navigate these humanlike qualities and anticipate emergent harms, we conducted focus groups with 30 professionals across six job functions (ML engineering, product policy, UX research and design, product management, technology writing, and communications). Our findings reveal an unsettled knowledge environment surrounding humanlike generative AI, where workers' varying perspectives illuminate a range of potential risks for individuals, knowledge work fields, and society. We argue that workers require comprehensive support, including clearer conceptions of "humanlikeness," to effectively mitigate these risks. To aid in mitigation strategies, we provide a conceptual map articulating the identified hazards and their connection to conflated notions of "humanlikeness."

Authors
Mark Diaz
Google Research, New York, New York, United States
Renee Shelby
Google Research, San Francisco, California, United States
Eric Corbett
Google Research, New York, New York, United States
Andrew Smart
Google, San Francisco, California, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: The Dark Sides of AI

Area 1 + 2 + 3: theatre
7 presentations
2026-04-17, 20:15–21:45