Visualization Literacy & Trust

Conference Name
CHI 2023
Is this AI trained on Credible Data? The Effects of Labeling Quality and Performance Bias on User Trust
Abstract

To promote data transparency, frameworks such as CrowdWorkSheets encourage documentation of annotation practices on the interfaces of AI systems, but we do not know how they affect user experience. Will the quality of labeling affect perceived credibility of training data? Does the source of annotation matter? Will a credible dataset persuade users to trust a system even if it shows racial biases in its predictions? To find out, we conducted a user study (N = 430) with a prototype of a classification system, using a 2 (labeling quality: high vs. low) × 4 (source: others-as-source vs. self-as-source cue vs. self-as-source voluntary action vs. self-as-source forced action) × 3 (AI performance: none vs. biased vs. unbiased) experiment. We found that high-quality labeling leads to higher perceived training data credibility, which in turn enhances users’ trust in AI, but not when the system shows bias. Practical implications for explainable and ethical AI interfaces are discussed.
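For readers unfamiliar with factorial designs, the short sketch below simply enumerates the 24 cells implied by the 2 × 4 × 3 design described above; the factor levels are taken from the abstract, but the variable names and the code itself are only an illustration, not the authors' study materials.

from itertools import product

# Factor levels as listed in the abstract; names are hypothetical.
labeling_quality = ["high", "low"]
source = [
    "others-as-source",
    "self-as-source cue",
    "self-as-source voluntary action",
    "self-as-source forced action",
]
ai_performance = ["none", "biased", "unbiased"]

# Every combination of levels is one experimental condition: 2 * 4 * 3 = 24 cells.
conditions = list(product(labeling_quality, source, ai_performance))
assert len(conditions) == 24

for quality, src, performance in conditions:
    print(f"labeling={quality} | source={src} | performance={performance}")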

Authors
Cheng Chen
Elon University, Elon, North Carolina, United States
S. Shyam Sundar
The Pennsylvania State University, University Park, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3544548.3580805

Video
Bones of Contention: Social Acceptance of Digital Cemetery Technologies
Abstract

Digital technologies play an increasingly prominent role in memorialisation, but their use at cemeteries has been criticised for lack of sensitivity. Designers of digital cemetery technologies (cemtech) face a high risk of causing offence or failing to attract users due to social norms of memorial sites. To map this area of design and its pitfalls, we first developed a typology of cemtech through a review of examples from around the world. We then evaluated social acceptance of various types of cemtech through a survey of 1,053 Australian residents. Younger people were more accepting of cemtech than older people. Acceptance was highest for cemtech with three characteristics: familiarity, intimacy of user group and peacefulness. Through a reflexive thematic analysis, we identified four attitudinal dichotomies that explain divergent reactions to cemtech: Expands/Impedes, Public/Private, Lively/Restful and Pragmatic/Affective. We conclude with a discussion of how this work can assist designers of public memorialisation technologies.

Authors
Fraser Allison
The University of Melbourne, Parkville, Victoria, Australia
Bjorn Nansen
The University of Melbourne, Melbourne, Victoria, Australia
Martin Gibbs
The University of Melbourne, Melbourne, Victoria, Australia
Michael Arnold
The University of Melbourne, Melbourne, Victoria, Australia
Paper URL

https://doi.org/10.1145/3544548.3581520

Video
When Recommender Systems Snoop into Social Media, Users Trust them Less for Health Advice
Abstract

Recommender systems (RS) have become increasingly vital for guiding health actions. While traditional systems filter content based on demographics, personal activity history, or the preferences of other users, newer systems personalize recommendations using social media information, drawing on the users’ own social media activities, those of their friends, or both. However, we do not know whether these approaches differ in their persuasiveness. To find out, we conducted a user study of a fitness plan recommender system (N = 341), in which participants were randomly assigned to one of six personalization approaches, with half of them given a choice to switch to a different approach. Data revealed that social media-based personalization threatens users’ identity and increases privacy concerns. Users prefer personalized health recommendations based on their own preferences. Choice enhances trust by providing users with a greater sense of agency and lowering their privacy concerns. These findings provide design implications for RS, especially in the preventive health domain.

Authors
Yuan Sun
The Pennsylvania State University, State College, Pennsylvania, United States
Magdalayna Drivas
University of Southern California, Los Angeles, California, United States
Mengqi Liao
The Pennsylvania State University, State College, Pennsylvania, United States
S. Shyam Sundar
The Pennsylvania State University, University Park, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3544548.3581123

Video
Misleading Beyond Visual Tricks: How People Actually Lie with Charts
Abstract

Data visualizations can empower an audience to make informed decisions. At the same time, deceptive representations of data can lead to inaccurate interpretations while still providing an illusion of data-driven insights. Existing research on misleading visualizations primarily focuses on examples of charts and techniques previously reported to be deceptive. These approaches do not necessarily describe how charts mislead the general population in practice. We instead present an analysis of data visualizations found in a real-world discourse of a significant global event—Twitter posts with visualizations related to the COVID-19 pandemic. Our work shows that, contrary to conventional wisdom, violations of visualization design guidelines are not the dominant way people mislead with charts. Specifically, they do not disproportionately lead to reasoning errors in posters' arguments. Through a series of examples, we present common reasoning errors and discuss how even faithfully plotted data visualizations can be used to support misinformation.

Authors
Maxim Lisnic
University of Utah, Salt Lake City, Utah, United States
Cole Polychronis
University of Utah, Salt Lake City, Utah, United States
Alexander Lex
University of Utah, Salt Lake City, Utah, United States
Marina Kogan
University of Utah, Salt Lake City, Utah, United States
Paper URL

https://doi.org/10.1145/3544548.3580910

Video
Who Do We Mean When We Talk About Visualization Novices?
Abstract

As more people rely on visualization to inform their personal and collective decisions, researchers have focused on a broader range of audiences, including "novices." But successfully applying, interrogating, or advancing visualization research for novices demands a clear understanding of what "novice" means in theory and practice. Misinterpreting who a "novice" is could lead to misapplying guidelines and overgeneralizing results. In this paper, we investigated how visualization researchers define novices and how they evaluate visualizations intended for novices. We analyzed 79 visualization papers that used "novice," "non-expert," "laypeople," or "general public" in their titles or abstracts. We found ambiguity within papers and disagreement between papers regarding what defines a novice. Furthermore, we found a mismatch between the broad language describing novices and the narrow population representing them in evaluations (i.e., young people, students, and US residents). We suggest directions for inclusively supporting novices in both theory and practice.

Award
Best Paper
Authors
Alyxander Burns
Mount Holyoke College, South Hadley, Massachusetts, United States
Christiana Lee
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Ria Chawla
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Evan Peck
Bucknell University, Lewisburg, Pennsylvania, United States
Narges Mahyar
University of Massachusetts Amherst, Amherst, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3544548.3581524

Video
CALVI: Critical Thinking Assessment for Literacy in Visualizations
Abstract

Visualization misinformation is a prevalent problem, and combating it requires understanding people’s ability to read, interpret, and reason about erroneous or potentially misleading visualizations. This ability currently lacks a reliable measurement: existing visualization literacy tests focus on well-formed visualizations. We systematically develop an assessment for this ability by: (1) developing a precise definition of misleaders (decisions made in the construction of visualizations that can lead to conclusions not supported by the data), (2) constructing initial test items using a design space of misleaders and chart types, (3) trying out the provisional test on 497 participants, and (4) analyzing the test tryout results and refining the items using Item Response Theory, qualitative analysis, a wrong-due-to-misleader score, and the content validity index. Our final bank of 45 items shows high reliability, and we provide item bank usage recommendations for future tests and different use cases. Related materials are available at: https://osf.io/pv67z/.
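As one concrete illustration of the refinement measures named above, the sketch below computes an item-level content validity index (I-CVI), i.e., the proportion of expert raters who judge an item relevant on a 4-point relevance scale. The ratings shown are hypothetical and the snippet is not the authors' analysis code; their actual materials are at the OSF link above.

def item_cvi(expert_ratings):
    # I-CVI: share of experts rating the item 3 or 4 ("relevant" / "highly
    # relevant") on a 4-point scale; low values flag items for revision.
    relevant = sum(1 for rating in expert_ratings if rating >= 3)
    return relevant / len(expert_ratings)

print(item_cvi([4, 4, 3, 2, 4]))  # 0.8 for five hypothetical expert ratings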

Award
Honorable Mention
Authors
Lily W. Ge
Northwestern University, Evanston, Illinois, United States
Yuan Cui
Northwestern University, Evanston, Illinois, United States
Matthew Kay
Northwestern University, Chicago, Illinois, United States
Paper URL

https://doi.org/10.1145/3544548.3581406

Video