Data Visualization and Literacy

Conference name
CHI 2024
Data Storytelling in Data Visualisation: Does it Enhance the Efficiency and Effectiveness of Information Retrieval and Insights Comprehension?
Abstract

Data storytelling (DS) is rapidly gaining attention as an approach that integrates data, visuals, and narratives to create data stories that can help a particular audience to comprehend the key messages underscored by the data with enhanced efficiency and effectiveness. It has been posited that DS can be especially advantageous for audiences with limited visualisation literacy, by presenting the data clearly and concisely. However, empirical studies confirming whether data stories indeed provide these benefits over conventional data visualisations are scarce. To bridge this gap, we conducted a study with 103 participants to determine whether DS indeed improves both efficiency and effectiveness in tasks related to information retrieval and insights comprehension. Our findings suggest that data stories do improve the efficiency of comprehension tasks, as well as the effectiveness of comprehension tasks that involve a single insight, compared with conventional visualisations. Interestingly, these benefits were not associated with participants' visualisation literacy.

Authors
Hongbo Shao
Monash University, Melbourne, Victoria, Australia
Roberto Martinez-Maldonado
Monash University, Melbourne, Victoria, Australia
Vanessa Echeverria
Monash University, Melbourne, Victoria, Australia
Lixiang Yan
Monash University, Melbourne, Victoria, Australia
Dragan Gasevic
Monash University, Clayton, Victoria, Australia
Paper URL

doi.org/10.1145/3613904.3643022

Make Interaction Situated: Designing User Acceptable Interaction for Situated Visualization in Public Environments
Abstract

Situated visualization blends data into the real world to fulfill individuals’ contextual information needs. However, interacting with situated visualization in public environments faces challenges posed by users’ acceptance and contextual constraints. To explore appropriate interaction design, we first conduct a formative study to identify users’ needs for data and interaction. Informed by the findings, we summarize appropriate interaction modalities with eye-based, hand-based and spatially-aware object interaction for situated visualization in public environments. Then, through an iterative design process with six users, we explore and implement interactive techniques for activating and analyzing with situated visualization. To assess the effectiveness and acceptance of these interactions, we integrate them into an AR prototype and conduct a within-subjects study in public scenarios using conventional hand-only interactions as the baseline. The results show that participants preferred our prototype over the baseline, attributing their preference to the interactions being more acceptable, flexible, and practical in public.

Authors
Qian Zhu
The Hong Kong University of Science and Technology, Hong Kong, China
Zhuo Wang
Xi’an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Wei Zeng
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Wai Tong
The Hong Kong University of Science and Technology, Hong Kong, China
Weiyue Lin
Peking University, Beijing, China
Xiaojuan Ma
The Hong Kong University of Science and Technology, Hong Kong, China
Paper URL

doi.org/10.1145/3613904.3642049

A Human Information Processing Theory of the Interpretation of Visualizations: Demonstrating Its Utility
Abstract

Providing an approach to model the memory structures that humans build as they use visualizations could be useful for researchers, designers and educators in the field of information visualization. Cheng and colleagues formulated Representation Interpretive Structure Theory (RIST) for that purpose. RIST adopts a human information processing perspective in order to address the immediate, short-timescale cognitive load likely to be experienced by visualization users. RIST is operationalized in a graphical modeling notation and browser-based editor. This paper demonstrates the utility of RIST by showing that (a) RIST models are compatible with established empirical and computational cognitive findings about differences in human performance on alternative representations; (b) they can encompass existing explanations from the literature; and (c) they provide new explanations about causes of those performance differences.

Authors
Peter Cheng
University of Sussex, Brighton, United Kingdom
Grecia Garcia Garcia
University of Sussex, Brighton, United Kingdom
Daniel Raggi
University of Cambridge, Cambridge, United Kingdom
Mateja Jamnik
University of Cambridge, Cambridge, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642276

VAID: Indexing View Designs in Visual Analytics System
Abstract

Visual analytics (VA) systems have been widely used in various application domains. However, VA systems are complex in design, which poses a serious problem: although the academic community constantly designs and implements new designs, these designs are difficult for subsequent designers to query, understand, and refer to. To mark a major step forward in tackling this problem, we index VA designs in an expressive and accessible way, transforming the designs into a structured format. We first conducted a workshop study with VA designers to learn user requirements for understanding and retrieving professional designs in VA systems. Thereafter, we developed an index structure, VAID, to describe advanced and composite visualization designs with comprehensive labels about their analytical tasks and visual designs. The usefulness of VAID was validated through user studies. Our work opens new perspectives for enhancing the accessibility and reusability of professional visualization designs.

Authors
Lu Ying
Zhejiang University, Hangzhou, Zhejiang, China
Aoyu Wu
Harvard University, Cambridge, Massachusetts, United States
Haotian Li
The Hong Kong University of Science and Technology, Hong Kong, China
Zikun Deng
South China University of Technology, Guangzhou, Guangdong, China
Ji Lan
AIFT, Hong Kong, Hong Kong
Jiang Wu
Zhejiang University, Hangzhou, Zhejiang, China
Yong Wang
Singapore Management University, Singapore, Singapore
Huamin Qu
The Hong Kong University of Science and Technology, Hong Kong, China
Dazhen Deng
Zhejiang University, Ningbo, Zhejiang, China
Yingcai Wu
Zhejiang University, Hangzhou, Zhejiang, China
Paper URL

doi.org/10.1145/3613904.3642237

Reading Between the Pixels: Investigating the Barriers to Visualization Literacy
Abstract

In our current visual-centric digital age, the capability to interpret, understand, and produce visual representations of data, termed visualization literacy, is paramount. However, not everyone is adept at navigating this visual terrain. This paper explores the barriers encountered by individuals who misread a visualization, aiming to understand their specific mental gaps. Utilizing a mixed-method approach, we administered the Visualization Literacy Assessment Test (VLAT) to a group of 120 participants drawn from diverse demographic backgrounds, which provided us with 1,774 task completions. We augmented the standard VLAT to capture quantitative and qualitative data on participants' errors. We collected participant sketches and open-ended text about their analysis approach, providing insight into users' mental models and rationale. Our findings reveal that individuals who incorrectly answer visualization literacy questions often misread visual channels, confound chart labels with data values, or struggle to translate data-driven questions into visual queries. Recognizing and bridging visualization literacy gaps not only ensures inclusivity but also enhances the overall effectiveness of visual communication in our society.

Authors
Carolina Nobre
University of Toronto, Toronto, Ontario, Canada
Kehang Zhu
Harvard University, Cambridge, Massachusetts, United States
Eric Mörth
Harvard Medical School, Boston, Massachusetts, United States
Hanspeter Pfister
Harvard University, Cambridge, Massachusetts, United States
Johanna Beyer
Harvard University, Cambridge, Massachusetts, United States
Paper URL

doi.org/10.1145/3613904.3642760
