Exploring the Use of Personalized AI for Identifying Misinformation on Social Media

Abstract

This work explores how human assessments and AI predictions can be combined to identify misinformation on social media. To do so, we design a personalized AI that iteratively takes a single user's assessments of content as training data and predicts how the same user would assess other content. We conduct a user study in which participants interact with a personalized AI that learns their assessments of a feed of tweets, shows its predictions of whether the user would find other tweets accurate or inaccurate, and evolves according to user feedback. We study how users perceive such an AI and whether its predictions influence users' judgments. We find that this influence exists and grows over time, but is reduced when users provide reasoning for their assessments. We draw on our empirical observations to identify design implications and directions for future work.
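
The abstract does not specify the underlying model, so the following is only an illustrative sketch of the kind of iterative, per-user learning loop it describes: a lightweight text classifier that is updated each time the user labels a tweet and is then used to predict that user's assessment of unseen tweets. The function names, label set, and choice of model (scikit-learn's SGDClassifier over hashed text features) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: the paper does not disclose its model or features.
# Assumes a per-user incremental text classifier over raw tweet text.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)   # stateless, so no separate fitting step is needed
model = SGDClassifier(loss="log_loss")             # logistic-regression-style incremental learner
CLASSES = ["accurate", "inaccurate"]               # hypothetical label set

def record_user_assessment(tweet_text: str, label: str) -> None:
    """Fold one user assessment (a single training example) into the personalized model."""
    X = vectorizer.transform([tweet_text])
    model.partial_fit(X, [label], classes=CLASSES)

def predict_user_assessment(tweet_text: str) -> str:
    """Predict how this particular user would assess an unseen tweet."""
    return model.predict(vectorizer.transform([tweet_text]))[0]
```

In a loop of this shape, each assessment the user provides immediately refines the model, so its predictions (and any influence they exert on the user) can evolve over the course of the study.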

Authors
Farnaz Jahanbakhsh
Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States
Yannis Katsis
IBM Research, Almaden, San Jose, California, United States
Dakuo Wang
Northeastern University, Boston, Massachusetts, United States
Lucian Popa
IBM Research, Almaden, San Jose, California, United States
Michael Muller
IBM Research, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3544548.3581219

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Communication and Social Good

Hall G2
6 presentations
2023-04-26 23:30:00 – 2023-04-27 00:55:00