This work explores how human assessments and AI predictions can be combined to identify misinformation on social media. To do so, we design a personalized AI that iteratively takes a single user's assessments of content as training data and predicts how that same user would assess other content. We conduct a user study in which participants interact with a personalized AI that learns their assessments of a feed of tweets, shows its predictions of whether they would find other tweets accurate or inaccurate, and evolves according to their feedback. We study how users perceive such an AI and whether its predictions influence their judgment. We find that this influence exists and grows over time, but that it is reduced when users provide reasoning for their assessments. We draw on our empirical observations to identify design implications and directions for future work.
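The loop described above (train on a user's own accuracy labels, predict their assessments of unseen tweets, update on new feedback) can be illustrated with a minimal sketch. The paper's abstract does not specify a model, so everything here is an assumption: an incrementally trained logistic-regression text classifier with hypothetical names (`update_model`, `predict_assessments`).

```python
# Minimal sketch of the iterative personalization loop, assuming an
# incrementally trained text classifier. All identifiers are hypothetical;
# the abstract does not specify the underlying model.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier(loss="log_loss")  # logistic regression via SGD
CLASSES = [0, 1]  # 0 = user judges tweet inaccurate, 1 = accurate

def update_model(tweets, user_labels):
    """Incrementally retrain on this user's latest assessments."""
    X = vectorizer.transform(tweets)
    model.partial_fit(X, user_labels, classes=CLASSES)

def predict_assessments(tweets):
    """Predict how this same user would assess unseen tweets."""
    X = vectorizer.transform(tweets)
    return model.predict(X)

# One round of the study loop: collect the user's accuracy judgments
# on a batch of tweets, update the model, then surface predictions
# for the next tweets in the feed.
seen = ["Tweet text A ...", "Tweet text B ..."]
labels = [1, 0]  # the user's own (in)accuracy judgments
update_model(seen, labels)
print(predict_assessments(["Tweet text C ..."]))
```

Because the model is retrained after every batch of feedback, its predictions track one user's judgments rather than a population-level notion of accuracy, which is the sense in which the AI is "personalized."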
DOI: https://doi.org/10.1145/3544548.3581219
ACM CHI Conference on Human Factors in Computing Systems (CHI 2023): https://chi2023.acm.org/