Co-Writing with Opinionated Language Models Affects Users' Views

Abstract

If large language models like GPT-3 preferentially produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write -- and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.

Award
Honorable Mention
Authors
Maurice Jakesch
Cornell University, Ithaca, New York, United States
Advait Bhat
Microsoft Research India, Bangalore, India
Daniel Buschek
University of Bayreuth, Bayreuth, Germany
Lior Zalmanson
Tel Aviv University, Tel Aviv, Tel Aviv District, Israel
Mor Naaman
Cornell Tech, New York, New York, United States
Paper URL

https://doi.org/10.1145/3544548.3581196

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Conversation, Communication & Collaborative AI

Hall E
6 presentations
2023-04-27, 18:00–19:30