Large language models (LLMs) are increasingly integrated into a variety of writing tasks. While these tools can help people generate ideas or produce higher-quality work, like many other AI tools they risk causing a range of harms, potentially burdening historically marginalized groups disproportionately. In this work, we introduce and evaluate perceptual harms: the harms caused to users when others perceive or suspect them of using AI. We examined perceptual harms in three online experiments, each of which asked participants to evaluate write-ups from mock freelance writers. Participants stated whether they suspected the freelancers of using AI, rated the quality of their writing, and judged whether the freelancers should be hired. We found some support for perceptual harms against certain demographic groups; at the same time, perceptions of AI use negatively impacted writing evaluations and hiring outcomes across the board.
https://dl.acm.org/doi/10.1145/3706598.3713897
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)