Talk visually to me

Paper session

Conference
CHI 2020
Answering Questions about Charts and Generating Visual Explanations
Abstract

People often use charts to analyze data, answer questions and explain their answers to others. In a formative study, we find that such human-generated questions and explanations commonly refer to visual features of charts. Based on this study, we developed an automatic chart question answering pipeline that generates visual explanations describing how the answer was obtained. Our pipeline first extracts the data and visual encodings from an input Vega-Lite chart. Then, given a natural language question about the chart, it transforms references to visual attributes into references to the data. It next applies a state-of-the-art machine learning algorithm to answer the transformed question. Finally, it uses a template-based approach to explain in natural language how the answer is determined from the chart's visual features. A user study finds that our pipeline-generated visual explanations significantly outperform human-generated explanations in transparency and are comparable to them in usefulness and trust.
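
The first two pipeline stages lend themselves to a compact illustration. Below is a minimal Python sketch, not the authors' implementation: the Vega-Lite spec, the phrase table in rewrite_question, and the max-lookup rule standing in for the learned QA model are all assumptions made for this example.

```python
# Minimal sketch, not the paper's pipeline: pull data and encodings out of a
# Vega-Lite spec, rewrite a visual reference into a data reference, then answer
# with a toy rule standing in for the learned QA model.

def extract_chart(spec):
    """Read the inline data rows and the x/y encoding fields from a Vega-Lite spec."""
    rows = spec["data"]["values"]                     # assumes inline data values
    return rows, spec["encoding"]["x"]["field"], spec["encoding"]["y"]["field"]

def rewrite_question(question, y_field):
    """Map chart-space vocabulary onto data-space vocabulary."""
    return (question.replace("tallest bar", f"largest {y_field}")
                    .replace("shortest bar", f"smallest {y_field}"))

def answer(question, rows, x_field, y_field):
    """Toy stand-in for the learned QA step, plus a template-based explanation."""
    if f"largest {y_field}" not in question:
        raise NotImplementedError("only 'tallest bar' questions in this sketch")
    best = max(rows, key=lambda r: r[y_field])
    explanation = (f"Found the bar with the largest {y_field} ({best[y_field]}) "
                   f"and read off its {x_field} label.")
    return best[x_field], explanation

spec = {
    "mark": "bar",
    "data": {"values": [{"country": "A", "sales": 28}, {"country": "B", "sales": 55}]},
    "encoding": {"x": {"field": "country"}, "y": {"field": "sales"}},
}
rows, x_field, y_field = extract_chart(spec)
question = rewrite_question("Which country has the tallest bar?", y_field)
print(answer(question, rows, x_field, y_field))
# ('B', 'Found the bar with the largest sales (55) and read off its country label.')
```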

Keywords
Question answering
Visualization
Explainable AI
Authors
Dae Hyun Kim
Stanford University, Stanford, CA, USA
Enamul Hoque
York University, Toronto, ON, Canada
Maneesh Agrawala
Stanford University, Stanford, CA, USA
DOI

10.1145/3313831.3376467

Paper URL

https://doi.org/10.1145/3313831.3376467

Automatic Annotation Synchronizing with Textual Description for Visualization
Abstract

In this paper, we propose a technique for automatically annotating visualizations according to a textual description. In our approach, visual elements in the target visualization, along with their visual properties, are identified and extracted with a Mask R-CNN model. Meanwhile, the description is parsed to generate visual search requests. Based on the identification results and search requests, each descriptive sentence is displayed beside the described focal areas as an annotation. Different sentences are presented in various scenes of the generated animation to create a vivid step-by-step presentation. With a user-customized style, the animation can guide the audience's attention via proper highlighting, such as emphasizing specific features or isolating part of the data. We demonstrate the utility and usability of our method through a user study with use cases.
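
To make the matching step concrete, here is a minimal Python sketch under assumed data structures: a detected element is just a labelled bounding box (standing in for Mask R-CNN output), the description parser is a simple keyword set, and the annotation is anchored beside the best-matching element. None of this is the paper's system.

```python
# Minimal sketch (assumed data structures, not the paper's system): match a
# descriptive sentence against detected chart elements and place it as an
# annotation beside the matched element's bounding box.

from dataclasses import dataclass

@dataclass
class Element:
    label: str     # e.g. "bar 2020", standing in for a Mask R-CNN detection label
    bbox: tuple    # (x, y, width, height) in pixels

def parse_request(sentence):
    """Very rough stand-in for description parsing: a lowercase keyword set."""
    return {w.strip(".,") for w in sentence.lower().split()}

def match(elements, request):
    """Pick the element whose label shares the most keywords with the request."""
    return max(elements, key=lambda el: len(request & set(el.label.lower().split())))

def place_annotation(sentence, element, offset=8):
    """Anchor the sentence just to the right of the matched element."""
    x, y, w, h = element.bbox
    return {"text": sentence, "anchor": (x + w + offset, y + h / 2)}

elements = [Element("bar 2019", (40, 120, 30, 180)),
            Element("bar 2020", (90, 60, 30, 240))]
sentence = "Sales peaked in 2020."
annotation = place_annotation(sentence, match(elements, parse_request(sentence)))
print(annotation)   # {'text': 'Sales peaked in 2020.', 'anchor': (128, 180.0)}
```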

Keywords
Visualization
Annotation
Natural Language Interface
Machine Learning
Authors
Chufan Lai
Peking University, Beijing, China
Zhixian Lin
Peking University, Beijing, China
Ruike Jiang
Peking University, Beijing, China
Yun Han
Peking University, Beijing, China
Can Liu
Peking University, Beijing, China
Xiaoru Yuan
Peking University, Beijing, China
DOI

10.1145/3313831.3376443

Paper URL

https://doi.org/10.1145/3313831.3376443

Exploring Visual Information Flows in Infographics
Abstract

Infographics are engaging visual representations that tell an informative story using a fusion of data and graphical elements. The large variety of infographic designs poses a challenge for their high-level analysis. We use the concept of Visual Information Flow (VIF), which is the underlying semantic structure that links graphical elements to convey the information and story to the user. To explore VIF, we collected a repository of over 13K infographics. We use a deep neural network to identify visual elements related to information, agnostic to their various artistic appearances. We construct the VIF by automatically chaining these visual elements together based on Gestalt principles. Using this analysis, we characterize the VIF design space with a taxonomy of 12 design patterns. Exploring a real-world infographic dataset, we discuss the design space and potential of VIF in light of this taxonomy.
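
As a rough illustration of the chaining idea, the sketch below orders detected elements by the Gestalt principle of proximity using a greedy nearest-neighbour walk. The bounding boxes, the chosen start element, and the single-cue heuristic are assumptions for this example; the paper's construction combines a learned detector with several Gestalt cues.

```python
# Minimal sketch (illustrative only): chain visual elements into a flow by
# proximity, repeatedly jumping from the current element to the nearest
# unvisited one.

import math

def center(bbox):
    x, y, w, h = bbox
    return (x + w / 2, y + h / 2)

def proximity_chain(bboxes, start=0):
    """Return element indices ordered by a greedy nearest-neighbour walk."""
    remaining = set(range(len(bboxes))) - {start}
    chain = [start]
    while remaining:
        current = center(bboxes[chain[-1]])
        nxt = min(remaining, key=lambda i: math.dist(current, center(bboxes[i])))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

# Three hypothetical element bounding boxes laid out top-to-bottom.
boxes = [(50, 20, 200, 60), (60, 100, 180, 60), (55, 190, 190, 60)]
print(proximity_chain(boxes))   # [0, 1, 2]
```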

Keywords
Infographics
Visual Information Flow
Design Analysis
Authors
Min Lu
Shenzhen University, Shenzhen, China
Chufeng Wang
Shenzhen University, Shenzhen, China
Joel Lanir
The University of Haifa, Haifa, Israel
Nanxuan Zhao
Harvard University & City University of Hong Kong, Hong Kong, China
Hanspeter Pfister
Harvard University, Cambridge, MA, USA
Daniel Cohen-Or
Shenzhen University, Shenzhen, China
Hui Huang
Shenzhen University, Shenzhen, China
DOI

10.1145/3313831.3376263

Paper URL

https://doi.org/10.1145/3313831.3376263

Interaction Techniques for Visual Exploration Using Embedded Word-Scale Visualizations
Abstract

We describe a design space of view manipulation interactions for small data-driven contextual visualizations (word-scale visualizations). These interaction techniques support an active reading experience and engage readers through exploration of embedded visualizations whose placement and content connect them to specific terms in a document. A reader could, for example, use our proposed interaction techniques to explore word-scale visualizations of stock market trends for companies listed in a market overview article. When readers wish to engage more deeply with the data, they can collect, arrange, compare, and navigate the document using the embedded word-scale visualizations, permitting more visualization-centric analyses. We support our design space with a concrete implementation, illustrate it with examples from three application domains, and report results from two experiments. The experiments show that view manipulation interactions helped readers examine embedded visualizations more quickly and with less scrolling, and they yielded qualitative feedback on usability and future opportunities.
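
For a sense of what "word-scale" means in practice, the following sketch (not the authors' implementation) renders a hypothetical stock series as an inline SVG sparkline small enough to sit beside a company name in running text; the values, dimensions, and the AcmeCorp example are invented for illustration.

```python
# Minimal sketch: a word-scale sparkline as an inline SVG string that can be
# embedded next to a term in an HTML document.

def sparkline_svg(values, width=60, height=12):
    """Scale the series into a tiny polyline that fits on the text line."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1
    step = width / (len(values) - 1)
    points = " ".join(
        f"{i * step:.1f},{height - (v - lo) / span * height:.1f}"
        for i, v in enumerate(values)
    )
    return (f'<svg width="{width}" height="{height}">'
            f'<polyline points="{points}" fill="none" stroke="steelblue"/></svg>')

stock = [101, 99, 104, 108, 103, 111]          # hypothetical closing prices
html = f"Shares of AcmeCorp {sparkline_svg(stock)} rose this quarter."
print(html)
```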

Keywords
Information visualization
Word-scale visualization
Interaction techniques
Text visualization
Glyphs
Authors
Pascal Goffin
University of Utah, Salt Lake City, UT, USA
Tanja Blascheck
University of Stuttgart, Stuttgart, Germany
Petra Isenberg
Inria, Saclay, France
Wesley Willett
University of Calgary, Calgary, AB, Canada
DOI

10.1145/3313831.3376842

Paper URL

https://doi.org/10.1145/3313831.3376842

Teddy: A System for Interactive Review Analysis
Abstract

Reviews are integral to e-commerce services and products. They contain a wealth of information about the opinions and experiences of users, which can help better understand consumer decisions and improve user experience with products and services. Today, data scientists analyze reviews by developing rules and models to extract, aggregate, and understand information embedded in the review text. However, working with thousands of reviews, which are typically noisy, incomplete text, can be daunting without proper tools. Here we first contribute results from an interview study that we conducted with fifteen data scientists who work with review text, providing insights into their practices and challenges. Results suggest that data scientists need interactive systems for many review analysis tasks. Towards a solution, we then introduce Teddy, an interactive system that enables data scientists to quickly obtain insights from reviews and improve their extraction and modeling pipelines.
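
As a toy illustration of the extract-and-aggregate pipelines such a system supports (this is not Teddy itself; the aspects, keyword lists, and review-level polarity rule are invented for this sketch, whereas real pipelines use learned extractors):

```python
# Toy extract-and-aggregate pass over review text: count positive/negative
# mentions per aspect using hand-written keyword lists.

from collections import Counter, defaultdict

ASPECTS = {"battery": ["battery", "charge"], "screen": ["screen", "display"]}
POSITIVE, NEGATIVE = {"great", "good", "love"}, {"bad", "poor", "dies"}

def aggregate(reviews):
    counts = defaultdict(Counter)
    for review in reviews:
        words = set(review.lower().replace(".", " ").split())
        polarity = ("pos" if words & POSITIVE else
                    "neg" if words & NEGATIVE else "neutral")
        for aspect, keywords in ASPECTS.items():
            if words & set(keywords):
                counts[aspect][polarity] += 1
    return dict(counts)

reviews = ["Great screen but the battery dies fast.",
           "Love the display.",
           "Poor battery life."]
print(aggregate(reviews))
# {'battery': Counter({'pos': 1, 'neg': 1}), 'screen': Counter({'pos': 2})}
```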

Keywords
interactive systems
visualization
data science
contextual interviews
review analysis
text mining
sentiment analysis
schema generation
Authors
Xiong Zhang
University of Rochester, Rochester, NY, USA
Jonathan Engel
Megagon Labs, Mountain View, CA, USA
Sara Evensen
Megagon Labs, Mountain View, CA, USA
Yuliang Li
Megagon Labs, Mountain View, CA, USA
Çağatay Demiralp
Megagon Labs, Mountain View, CA, USA
Wang-Chiew Tan
Megagon Labs, Mountain View, CA, USA
DOI

10.1145/3313831.3376235

Paper URL

https://doi.org/10.1145/3313831.3376235
