Generative AI and Perceptual Harms: Who’s Suspected of using LLMs?

Abstract

Large language models (LLMs) are increasingly integrated into a variety of writing tasks. While these tools can help people by generating ideas or producing higher-quality work, like many other AI tools they risk causing a variety of harms, potentially burdening historically marginalized groups disproportionately. In this work, we introduce and evaluate perceptual harms: harms caused to users when others perceive or suspect them of using AI. We examined perceptual harms in three online experiments, each of which entailed participants evaluating write-ups from mock freelance writers. We asked participants to state whether they suspected the freelancers of using AI, to rate the quality of their writing, and to evaluate whether they should be hired. We found some support for perceptual harms against certain demographic groups. At the same time, perceptions of AI use negatively impacted writing evaluations and hiring outcomes across the board.

Authors
Kowe Kadoma
Cornell University, New York, New York, United States
Danaé Metaxa
University of Pennsylvania, Philadelphia, Pennsylvania, United States
Mor Naaman
Cornell Tech, New York, New York, United States
DOI

10.1145/3706598.3713897

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713897

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Privacy and Safety

Room: G316+G317
7 presentations
2025-04-29 23:10:00 – 2025-04-30 00:40:00