Vis Ex Machina: An Analysis of Trust in Human versus Algorithmically Generated Visualization Recommendations

Abstract

More visualization systems are simplifying the data analysis process by automatically suggesting relevant visualizations. However, little work has been done to understand whether users trust these automated recommendations. In this paper, we present the results of a crowd-sourced study exploring preferences and perceived quality of recommendations that have been positioned as either human-curated or algorithmically generated. We observe that while participants initially prefer human recommenders, their actions suggest an indifference to the recommendation source when evaluating visualization recommendations. The relevance of presented information (e.g., the presence of certain data fields) was the most critical factor, followed by a belief in the recommender's ability to create accurate visualizations. Our findings suggest a general indifference towards the provenance of recommendations, and point to idiosyncratic definitions of visualization quality and trustworthiness that may not be captured by simple measures. We suggest that recommendation systems should be tailored to the information-foraging strategies of specific users.

Authors
Rachael Zehrung
University of Maryland, College Park, Maryland, United States
Astha Singhal
University of Maryland, College Park, Maryland, United States
Michael Correll
Tableau Software, Seattle, Washington, United States
Leilani Battle
University of Maryland, College Park, Maryland, United States
DOI

10.1145/3411764.3445195

Paper URL

https://doi.org/10.1145/3411764.3445195


Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Understanding Visualizations

[A] Paper Room 09, 2021-05-12 17:00:00~2021-05-12 19:00:00 / [B] Paper Room 09, 2021-05-13 01:00:00~2021-05-13 03:00:00 / [C] Paper Room 09, 2021-05-13 09:00:00~2021-05-13 11:00:00