Making Sense & Decisions with Visualization

Conference Name
CHI 2023
Why Combining Text and Visualization Could Improve Bayesian Reasoning: A Cognitive Load Perspective
Abstract

Investigations into using visualization to improve Bayesian reasoning and advance risk communication have produced mixed results, suggesting that cognitive ability might affect how users perform with different presentation formats. Our work examines the cognitive load elicited when solving Bayesian problems using icon arrays, text, and a juxtaposition of text and icon arrays. We used a three-pronged approach to capture a nuanced picture of cognitive demand and measure differences in working memory capacity, performance under divided attention using a dual-task paradigm, and subjective ratings of self-reported effort. We found that individuals with low working memory capacity made fewer errors and experienced less subjective workload when the problem contained an icon array compared to text alone, showing that visualization improves accuracy while imposing less cognitive demand. We believe these findings can considerably impact accessible risk communication, especially for individuals with low working memory capacity.
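
To illustrate the kind of Bayesian problem referred to above, the short Python sketch below works through a classic diagnostic-test scenario; the numbers are hypothetical and are not taken from the paper's stimuli.

# A hypothetical example (not from the paper) of the kind of Bayesian
# reasoning problem the study uses: given a base rate, a hit rate, and a
# false-alarm rate, compute the probability of disease given a positive test.
base_rate = 0.01      # P(disease)
hit_rate = 0.80       # P(positive | disease)
false_alarm = 0.096   # P(positive | no disease)

p_positive = hit_rate * base_rate + false_alarm * (1 - base_rate)
posterior = hit_rate * base_rate / p_positive   # Bayes' rule

print(f"P(disease | positive) = {posterior:.2%}")  # ~7.8%, far lower than most people guess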

Authors
Melanie Bancilhon
Washington University in St. Louis, St. Louis, Missouri, United States
Amanda Wright
Washington University in St. Louis, St. Louis, Missouri, United States
Sunwoo Ha
Washington University in St. Louis, St. Louis, Missouri, United States
R. Jordan Crouser
Smith College, Northampton, Massachusetts, United States
Alvitta Ottley
Washington University in St. Louis, St. Louis, Missouri, United States
Paper URL

https://doi.org/10.1145/3544548.3581218

Video
GVQA: Learning to Answer Questions about Graphs with Visualizations via Knowledge Base
Abstract

Graphs are common charts used to represent topological relationships between nodes. They are a powerful tool for data analysis, and many information retrieval tasks involve asking questions about graphs. In a formative study, we found that questions about graphs concern not only the relationships between nodes but also the properties of graph elements. We propose a pipeline to answer natural language questions about graph visualizations and generate visual answers. We first extract the data from graphs and convert it into GML format. We design data structures to encode graph information and convert them into a knowledge base. We then extract topic entities from the questions. We feed the questions, entities, and knowledge base into our question-answering model to obtain SPARQL queries for textual answers. Finally, we design a module to present the answers visually. A user study demonstrates that these visual and textual answers are useful, credible, and transparent.
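
The sketch below is a minimal illustration of the pipeline the abstract describes, not the authors' implementation: it assumes networkx for the GML export and rdflib for the knowledge base and SPARQL query, and the example graph and question are made up.

# A minimal sketch of the GVQA-style pipeline: extract a graph, serialize it
# to GML, encode it as a knowledge base, and answer a question via SPARQL.
import networkx as nx
from rdflib import Graph, Namespace

# 1. Graph data extracted from the visualization, serialized to GML.
g = nx.Graph()
g.add_edges_from([("A", "B"), ("A", "C"), ("B", "C")])
nx.write_gml(g, "extracted.gml")

# 2. Encode nodes and edges as triples in a small knowledge base.
EX = Namespace("http://example.org/graph/")
kb = Graph()
for u, v in g.edges():
    kb.add((EX[u], EX.connectedTo, EX[v]))

# 3. A question such as "Which nodes are connected to A?" would be mapped by
#    the question-answering model to a SPARQL query like this one.
query = """
PREFIX ex: <http://example.org/graph/>
SELECT ?n WHERE { { ex:A ex:connectedTo ?n } UNION { ?n ex:connectedTo ex:A } }
"""
for row in kb.query(query):
    print(row.n)   # textual answer; the paper additionally renders a visual answer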

Authors
Sicheng Song
East China Normal University, Shanghai, China
Juntong Chen
East China Normal University, Shanghai, China
Chenhui Li
East China Normal University, Shanghai, China
Changbo Wang
East China Normal University, Shanghai, China
Paper URL

https://doi.org/10.1145/3544548.3581067

Video
Causalvis: Visualizations for Causal Inference
Abstract

Causal inference is a statistical paradigm for quantifying causal effects using observational data. It is a complex process, requiring multiple steps, iterations, and collaborations with domain experts. Analysts often rely on visualizations to evaluate the accuracy of each step. However, existing visualization toolkits are not designed to support the entire causal inference process within computational environments familiar to analysts. In this paper, we address this gap with Causalvis, a Python visualization package for causal inference. Working closely with causal inference experts, we adopted an iterative design process to develop four interactive visualization modules to support causal inference analysis tasks. The modules were then presented back to the experts for feedback and evaluation. We found that Causalvis effectively supported the iterative causal inference process. We discuss the implications of our findings for designing visualizations for causal inference, particularly for tasks of communication and collaboration.
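
Causalvis's own API is not reproduced here; as a rough illustration of one step in the process the abstract describes, the sketch below estimates an average treatment effect from simulated observational data with inverse propensity weighting. The data, column names, and use of scikit-learn are assumptions for the example.

# Generic sketch of one causal inference step (not the Causalvis API):
# estimate an average treatment effect (ATE) with inverse propensity weighting.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)
treated = rng.binomial(1, 1 / (1 + np.exp(-confounder)))         # confounded assignment
outcome = 2.0 * treated + 1.5 * confounder + rng.normal(size=n)  # true effect = 2.0
df = pd.DataFrame({"x": confounder, "t": treated, "y": outcome})

# Propensity scores P(t = 1 | x), then the IPW estimate of the ATE.
ps = LogisticRegression().fit(df[["x"]], df["t"]).predict_proba(df[["x"]])[:, 1]
ate = np.mean(df["t"] * df["y"] / ps) - np.mean((1 - df["t"]) * df["y"] / (1 - ps))
print(f"IPW estimate of ATE: {ate:.2f}")  # close to the true effect of 2.0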

Authors
Grace Guo
Georgia Institute of Technology, Atlanta, Georgia, United States
Ehud Karavani
IBM Research, Haifa, Israel
Alex Endert
Georgia Institute of Technology, Atlanta, Georgia, United States
Bum Chul Kwon
IBM Research, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3544548.3581236

Video
CrowdIDEA: Blending Crowd Intelligence and Data Analytics to Empower Causal Reasoning
Abstract

Causal reasoning is crucial for people to understand data, make decisions, or take action. However, individuals often have blind spots and overlook alternative hypotheses, and using only data is insufficient for causal reasoning. We designed and implemented CrowdIDEA, a novel tool that integrates three panels to stimulate causal reasoning: the crowd's beliefs (Crowd Panel, with two designs), data analytics (Data Panel), and the user's causal diagram (Diagram Panel). Through an experiment with 54 participants, we showed significant effects of the Crowd Panel designs on the outcomes of causal reasoning, such as an increased number of causal beliefs generated. Participants also devised new strategies for bootstrapping, strengthening, deepening, and explaining their causal beliefs, as well as taking advantage of the unique characteristics of both qualitative and quantitative data sources to reduce potential biases in reasoning. Our work offers theoretical and design implications for exploratory causal reasoning.

Authors
Chi-Hsien Yen
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Haocong Cheng
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Yilin Xia
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Yun Huang
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Paper URL

https://doi.org/10.1145/3544548.3581021

Video
MetaExplorer: Facilitating Reasoning with Epistemic Uncertainty in Meta-analysis
Abstract

Scientists often use meta-analysis to characterize the impact of an intervention on some outcome of interest across a body of literature. However, threats to the utility and validity of meta-analytic estimates arise when scientists average over potentially important variations in context like different research designs. Uncertainty about quality and commensurability of evidence casts doubt on results from meta-analysis, yet existing software tools for meta-analysis do not provide an explicit software representation of these concerns. We present MetaExplorer, a prototype system for meta-analysis that we developed using iterative design with meta-analysis experts to provide a guided process for eliciting assessments of uncertainty and reasoning about how to incorporate them during statistical inference. Our qualitative evaluation of MetaExplorer with experienced meta-analysts shows that imposing a structured workflow both elevates the perceived importance of epistemic concerns and presents opportunities for tools to engage users in dialogue around goals and standards for evidence aggregation.
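
As background for the kind of estimate at stake, the sketch below computes a fixed-effect, inverse-variance weighted meta-analytic summary; it is not part of MetaExplorer, and the study values are invented.

# Illustrative fixed-effect meta-analysis: pool per-study effect sizes with
# inverse-variance weights (made-up numbers, not the paper's data).
import numpy as np

effects = np.array([0.30, 0.12, 0.45])       # per-study effect estimates
std_errors = np.array([0.10, 0.08, 0.20])    # per-study standard errors

weights = 1.0 / std_errors**2                # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
# Averaging this way over studies with different designs is exactly the step
# where, per the paper, unmodeled differences in evidence quality can mislead.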

Authors
Alex Kale
University of Chicago, Chicago, Illinois, United States
Sarah Lee
Stottler Henke Associates, Inc., Seattle, Washington, United States
Terrance Goan
Stottler Henke Associates, Inc., Seattle, Washington, United States
Elizabeth Tipton
Northwestern University, Evanston, Illinois, United States
Jessica Hullman
Northwestern University, Evanston, Illinois, United States
Paper URL

https://doi.org/10.1145/3544548.3580869

Video
Visual Belief Elicitation Reduces the Incidence of False Discovery
Abstract

Visualization supports exploratory data analysis (EDA), but EDA frequently presents spurious charts, which can mislead people into drawing unwarranted conclusions. We investigate interventions to prevent false discovery from visualized data. We evaluate whether eliciting analyst beliefs helps guard against the over-interpretation of noisy visualizations. In two experiments, we exposed participants to both spurious and 'true' scatterplots, and assessed their ability to infer data-generating models that underlie those samples. Participants who underwent prior belief elicitation made 21% more correct inferences along with 12% fewer false discoveries. This benefit was observed across a variety of sample characteristics, suggesting broad utility to the intervention. However, additional interventions to highlight counterevidence and sample uncertainty did not provide a significant advantage. Our findings suggest that lightweight, belief-driven interactions can yield a reliable, if moderate, reduction in false discovery. This work also suggests future directions to improve visual inference and reduce bias. The data and materials for this paper are available at https://osf.io/52u6v/
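
As a rough illustration of why spurious charts arise in EDA, the sketch below draws small samples from a null model with zero true correlation and reports the strongest apparent correlation found; the data are simulated and are not the study's materials.

# Simulate repeated small exploratory samples from independent variables and
# note how large the apparent correlation can get purely by chance.
import numpy as np

rng = np.random.default_rng(7)
strongest = 0.0
for _ in range(20):                        # 20 small exploratory samples
    x, y = rng.standard_normal((2, 15))    # independent variables, n = 15
    r = np.corrcoef(x, y)[0, 1]
    strongest = max(strongest, abs(r))

print(f"largest |r| seen across 20 null samples: {strongest:.2f}")
# Sizable correlations show up even though the true correlation is 0; eliciting
# analysts' prior beliefs is the paper's guard against reading such noise as a discovery.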

Award
Honorable Mention
Authors
Ratanond Koonchanok
Indiana University–Purdue University Indianapolis, Indianapolis, Indiana, United States
Gauri Yatindra Tawde
Indiana University–Purdue University Indianapolis, Indianapolis, Indiana, United States
Gokul Ragunandhan Narayanasamy
Indiana University–Purdue University Indianapolis, Indianapolis, Indiana, United States
Shalmali Walimbe
Indiana University, Indianapolis, Indianapolis, Indiana, United States
Khairi Reda
Indiana University–Purdue University Indianapolis, Indianapolis, Indiana, United States
Paper URL

https://doi.org/10.1145/3544548.3580808

Video