Generative Echo Chamber? Effect of LLM-Powered Search Systems on Diverse Information Seeking

Abstract

Conversational search systems powered by large language models (LLMs) are already used by hundreds of millions of people and are believed to offer many benefits over conventional search. However, while decades of research and public discourse have interrogated the risk that search systems increase selective exposure and create echo chambers (limiting exposure to diverse opinions and leading to opinion polarization), little is known about whether LLM-powered conversational search carries a similar risk. We conducted two experiments to investigate: 1) whether and how LLM-powered conversational search increases selective exposure compared to conventional search; and 2) whether and how LLMs with opinion biases that either reinforce or challenge the user's view change this effect. Overall, we found that participants engaged in more biased information querying with LLM-powered conversational search, and that an opinionated LLM reinforcing their views exacerbated this bias. These results have critical implications for the development of LLMs and conversational search systems, and for the policies governing these technologies.

Award
Best Paper
Authors
Nikhil Sharma
Johns Hopkins University, Baltimore, Maryland, United States
Q. Vera Liao
Microsoft Research, Montreal, Quebec, Canada
Ziang Xiao
Johns Hopkins University, Baltimore, Maryland, United States
Paper URL

doi.org/10.1145/3613904.3642459

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Working with Data A

Room 318B
5 presentations
2024-05-13 20:00–21:20