Visualization

Conference
CHI 2025
Seeing Eye to AI? Applying Deep-Feature-Based Similarity Metrics to Information Visualization
Abstract

Judging the similarity of visualizations is crucial to various applications, such as visualization-based search and visualization recommendation systems. Recent studies show deep-feature-based similarity metrics correlate well with perceptual judgments of image similarity and serve as effective loss functions for tasks like image super-resolution and style transfer. We explore the application of such metrics to judgments of visualization similarity. We extend a similarity metric using five ML architectures and three pre-trained weight sets. We replicate results from previous crowdsourced studies on scatterplot and visual channel similarity perception. Notably, our metric using pre-trained ImageNet weights outperformed gradient-descent tuned MS-SSIM, a multi-scale similarity metric based on luminance, contrast, and structure. Our work contributes to understanding how deep-feature-based metrics can enhance similarity assessments in visualization, potentially improving visual analysis tools and techniques. Supplementary materials are available at https://osf.io/dj2ms/.
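To make the class of metric concrete, here is a minimal Python sketch of a deep-feature-based similarity score between two chart images using pre-trained ImageNet weights (VGG-16 via torchvision). It illustrates the general approach only; the paper's five architectures, three weight sets, and exact distance computation are not reproduced here, and the file names are hypothetical.

```python
# Minimal sketch: deep-feature similarity between two chart images using
# pre-trained ImageNet weights (VGG-16). Illustrative only -- not the paper's
# exact architectures, weight sets, or distance computation.
import torch
from PIL import Image
from torchvision.models import vgg16, VGG16_Weights

weights = VGG16_Weights.IMAGENET1K_V1
extractor = vgg16(weights=weights).features.eval()  # convolutional layers only
preprocess = weights.transforms()                   # ImageNet resize/normalize

def deep_features(path: str) -> torch.Tensor:
    """Flattened, L2-normalized deep-feature vector for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = extractor(img)
    return torch.nn.functional.normalize(feats.flatten(), dim=0)

def similarity(path_a: str, path_b: str) -> float:
    """Cosine similarity of deep features; higher means more similar."""
    return float(torch.dot(deep_features(path_a), deep_features(path_b)))

# Hypothetical usage:
# print(similarity("scatterplot_a.png", "scatterplot_b.png"))
```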

Authors
Sheng Long
Northwestern University, Evanston, Illinois, United States
Angelos Chatzimparmpas
Utrecht University, Utrecht, Netherlands
Emma Alexander
Northwestern University, Evanston, Illinois, United States
Matthew Kay
Northwestern University, Chicago, Illinois, United States
Jessica Hullman
Northwestern University, Evanston, Illinois, United States
DOI

10.1145/3706598.3713955

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713955

How Visualization Designers Perceive and Use Inspiration
Abstract

Inspiration plays an important role in design, yet its specific impact on data visualization design practice remains underexplored. This study investigates how professional visualization designers perceive and use inspiration in their practice. Through semi-structured interviews, we examine their sources of inspiration, the value they place on them, and how they navigate the balance between inspiration and imitation. Our findings reveal that designers draw from a diverse array of sources, including existing visualizations, real-world phenomena, and personal experiences. Participants describe a mix of active and passive inspiration practices, often iterating on sources to create original designs. This research offers insights into the role of inspiration in visualization practice, the need to expand visualization design theory, and the implications for the development of visualization tools that support inspiration and for training future visualization designers.

Authors
Ali Baigelenov
Purdue University, West Lafayette, Indiana, United States
Prakash Chandra Shukla
Purdue University, West Lafayette, Indiana, United States
Paul C. Parsons
Purdue University, West Lafayette, Indiana, United States
DOI

10.1145/3706598.3714191

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714191

Video
Chartist: Task-driven Eye Movement Control for Chart Reading
Abstract

To design data visualizations that are easy to comprehend, we need to understand how people with different interests read them. Computational models of predicting scanpaths on charts could complement empirical studies by offering estimates of user performance inexpensively; however, previous models have been limited to gaze patterns and overlooked the effects of tasks. Here, we contribute Chartist, a computational model that simulates how users move their eyes to extract information from the chart in order to perform analysis tasks, including value retrieval, filtering, and finding extremes. The novel contribution lies in a two-level hierarchical control architecture. At the high level, the model uses LLMs to comprehend the information gained so far and applies this representation to select a goal for the lower-level controllers, which, in turn, move the eyes in accordance with a sampling policy learned via reinforcement learning. The model is capable of predicting human-like task-driven scanpaths across various tasks. It can be applied in fields such as explainable AI, visualization design evaluation, and optimization. While it displays limitations in terms of generalizability and accuracy, it takes modeling in a promising direction, toward understanding human behaviors in interacting with charts.
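A schematic Python sketch of the two-level control loop described above follows. The callables passed in stand for the LLM-based high-level controller, the RL-trained sampling policy, and the information-extraction step; they are placeholders for the architecture as described in the abstract, not the actual Chartist components.

```python
# Schematic sketch of a two-level controller for task-driven chart reading.
# select_goal, move_gaze, and read_at are hypothetical placeholders for the
# LLM-based high-level controller, the RL-learned gaze policy, and the
# information-extraction step; this is not the Chartist implementation.

def simulate_scanpath(chart, task, select_goal, move_gaze, read_at,
                      start=(0.5, 0.5), max_fixations=30):
    """Simulate a task-driven scanpath as a list of fixation coordinates."""
    scanpath = [start]
    memory = []                       # information gathered so far
    gaze = start

    for _ in range(max_fixations):
        # High level: comprehend what has been seen so far and pick the next
        # sub-goal for the task (e.g., "find the x-position of the maximum").
        goal = select_goal(task, memory)
        if goal is None:              # controller judges the task complete
            break

        # Low level: the learned sampling policy chooses the next fixation.
        gaze = move_gaze(chart, gaze, goal)
        scanpath.append(gaze)

        # Extract information at the new fixation and store it in memory.
        memory.append(read_at(chart, gaze))

    return scanpath
```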

Authors
Danqing Shi
Aalto University, Helsinki, Finland
Yao Wang
University of Stuttgart, Stuttgart, Germany
Yunpeng Bai
National University of Singapore, Singapore, Singapore
Andreas Bulling
University of Stuttgart, Stuttgart, Germany
Antti Oulasvirta
Aalto University, Helsinki, Finland
DOI

10.1145/3706598.3713128

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713128

Video
Lost in Magnitudes: Exploring Visualization Designs for Large Value Ranges
Abstract

We explore the design of visualizations for values spanning multiple orders of magnitude; we call them Orders of Magnitude Values (OMVs). Visualization researchers have shown that separating OMVs into two components, the mantissa and the exponent, and encoding them separately overcomes limitations of linear and logarithmic scales. However, only a small number of such visualizations have been tested, and the design guidelines for visualizing the mantissa and exponent separately remain under-explored. To initiate this exploration, better understand the factors influencing the effectiveness of these visualizations, and create guidelines, we adopt a multi-stage workflow. We introduce a design space for visualizing mantissa and exponent, systematically generating and qualitatively evaluating all possible visualizations within it. From this evaluation, we derive guidelines. We select two visualizations that align with our guidelines and test them using a crowdsourcing experiment, showing they facilitate quantitative comparisons and increase confidence in interpretation compared to the state-of-the-art.
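The decomposition underlying this design space writes each value as mantissa × 10^exponent so the two parts can be mapped to separate visual channels. A minimal sketch of that split (illustrative only, not the authors' generation pipeline):

```python
# Minimal sketch: split a positive value v into mantissa m and exponent e
# with v = m * 10**e and 1 <= m < 10, so the two parts can be encoded by
# separate visual channels. Illustrative only; not the authors' code.
import math

def split_omv(v: float) -> tuple[float, int]:
    if v <= 0:
        raise ValueError("this sketch handles positive values only")
    e = math.floor(math.log10(v))
    m = v / 10**e
    return m, e

# Example: 3_400_000 -> (3.4, 6), i.e., 3.4 x 10^6
print(split_omv(3_400_000))
# split_omv(0.00052) gives approximately (5.2, -4)
```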

Award
Best Paper
Authors
Katerina Batziakoudi
Berger-Levrault, Boulogne-Billancourt, France
Florent Cabric
Aviz, Inria, Saclay, France
Stéphanie Rey
Berger-Levrault, Toulouse, France
Jean-Daniel Fekete
Inria, Saclay, France
DOI

10.1145/3706598.3713487

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713487

Video
Libra: An Interaction Model for Data Visualization
Abstract

While existing visualization libraries enable the reuse, extension, and combination of static visualizations, achieving the same for interactions remains nearly impossible. We contribute an interaction model and its implementation to achieve this goal. Our model enables the creation of interactions that support direct manipulation, enforce software modularity by clearly separating visualizations from interactions, and ensure compatibility with existing visualization systems. Interaction management is achieved through an instrument that receives events from the view, dispatches these events to graphical layers containing objects, and then triggers actions. We present a JavaScript prototype implementation of our model called Libra.js, enabling the specification of interactions for visualizations created by different libraries. We demonstrate the effectiveness of Libra by describing and generating a wide range of existing interaction techniques. We evaluate Libra.js through diverse examples, a metric-based notation comparison, and a performance benchmark analysis.
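The event flow described above (the view sends events to an instrument, which dispatches them to graphical layers and then triggers actions) can be summarized in a small conceptual sketch. Python is used here for consistency with the other sketches; the class and method names are invented for illustration and are not the Libra.js API, which is JavaScript.

```python
# Conceptual sketch of the instrument-based interaction flow described in the
# abstract: an instrument receives events from the view, dispatches them to
# graphical layers, and triggers actions. Names invented for illustration;
# not the Libra.js API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Layer:
    """A graphical layer holding objects with simple bounding boxes."""
    name: str
    objects: list[dict] = field(default_factory=list)

    def pick(self, x: float, y: float) -> list[dict]:
        return [o for o in self.objects
                if o["x0"] <= x <= o["x1"] and o["y0"] <= y <= o["y1"]]

@dataclass
class Instrument:
    """Receives events from the view, dispatches to layers, triggers actions."""
    layers: list[Layer]
    actions: dict[str, Callable] = field(default_factory=dict)

    def on(self, event_type: str, action: Callable) -> None:
        self.actions[event_type] = action

    def dispatch(self, event_type: str, x: float, y: float) -> None:
        hits = [o for layer in self.layers for o in layer.pick(x, y)]
        if event_type in self.actions:
            self.actions[event_type](hits)

# Usage sketch: a hover instrument highlighting whatever mark it hits.
marks = Layer("marks", objects=[{"id": "a", "x0": 100, "y0": 60, "x1": 140, "y1": 100}])
hover = Instrument(layers=[marks])
hover.on("pointermove", lambda hits: print("highlight:", [o["id"] for o in hits]))
hover.dispatch("pointermove", 120.0, 80.0)   # -> highlight: ['a']
```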

Award
Honorable Mention
Authors
Yue Zhao
School of Computer Science and Technology, Qingdao, Shandong, China
Yunhai Wang
Renmin University of China, Beijing, China
Xu Luo
Shandong University, Qingdao, China
Yanyan Wang
Ant Group, Hangzhou, China
Jean-Daniel Fekete
Inria, Saclay, France
DOI

10.1145/3706598.3713769

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713769

Video
AVEC: An Assessment of Visual Encoding Ability in Visualization Construction
Abstract

Visualization literacy is the ability to both interpret and construct visualizations. Yet existing assessments focus solely on visualization interpretation. A lack of construction-related measurements hinders efforts in understanding and improving literacy in visualizations. We design and develop AVEC, an assessment of a person's visual encoding ability—a core component of the larger process of visualization construction—by: (1) creating an initial item bank using a design space of visualization tasks and chart types, (2) designing an assessment tool to support the combinatorial nature of selecting appropriate visual encodings, (3) building an autograder from expert scores of answers to our items, and (4) refining and validating the item bank and autograder through an analysis of test tryout data with 95 participants and feedback from the expert panel. We discuss recommendations for using AVEC, potential alternative scoring strategies, and the challenges in assessing higher-level visualization skills using constructed-response tests. Supplemental materials are available at: https://osf.io/hg7kx/.
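As a rough illustration of step (3), an autograder can compare a participant's constructed set of field-to-channel assignments against an expert-scored key. The data shapes and partial-credit rule below are assumptions for illustration, not AVEC's actual scoring procedure.

```python
# Minimal sketch: score a constructed visual-encoding answer against an
# expert key. The data shapes and the partial-credit rule are illustrative
# assumptions, not AVEC's actual autograder.

def score_item(answer: dict[str, str], expert_key: dict[str, str]) -> float:
    """
    answer / expert_key map data fields to visual channels,
    e.g. {"temperature": "y-position", "month": "x-position"}.
    Returns the fraction of field-channel assignments matching the key.
    """
    if not expert_key:
        return 0.0
    matches = sum(1 for f, channel in expert_key.items()
                  if answer.get(f) == channel)
    return matches / len(expert_key)

# Usage sketch (hypothetical item):
key = {"month": "x-position", "temperature": "y-position", "city": "color"}
ans = {"month": "x-position", "temperature": "y-position", "city": "shape"}
print(score_item(ans, key))   # 2 of 3 assignments match -> partial credit
```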

Authors
Lily W. Ge
Northwestern University, Evanston, Illinois, United States
Yuan Cui
Northwestern University, Evanston, Illinois, United States
Matthew Kay
Northwestern University, Chicago, Illinois, United States
DOI

10.1145/3706598.3713364

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713364

Video
Seeing Through the Overlap: The Impact of Color and Opacity on Depth Order Perception in Visualization
Abstract

Semi-transparent visualizations are commonly used to reveal information in overlapped regions by applying colors and opacity. While a few studies made recommendations on how to choose colors and opacity levels to maintain depth perception, they often conflict and overlook the interaction effect between these factors. In this paper, we systematically explore the impact of color and opacity on depth order perception across eight colors, three opacity levels, and various layer orders and arrangements. Our inferential analysis shows that both color hue and opacity significantly influence depth order perception, with the effectiveness depending on their interaction. We also derived 12 features for predictive analysis, achieving the best mean accuracy of 80.72% and mean F1 score of 87.75%, with opacity assigned to the front layer as the top feature for most models. Finally, we provide a small design tool and four guidelines to better align the design rules of colors and opacity in semi-transparent visualizations.
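Overlapped regions in such visualizations are typically rendered with the standard "over" alpha-compositing operator, in which the front layer's opacity controls how much of the back layer shows through. The sketch below shows that blend as background (an assumption about the rendering model, not the authors' perceptual analysis or their 12 predictive features).

```python
# Minimal sketch of the standard "over" alpha-compositing operator that
# produces the blended color in the overlapped region of two semi-transparent
# layers. Background illustration only; not the authors' perceptual model or
# their 12 predictive features.

def over(front_rgb, front_alpha, back_rgb, back_alpha):
    """Composite a semi-transparent front layer over a back layer.
    Colors are (r, g, b) tuples in [0, 1]; alphas are opacities in [0, 1]."""
    out_alpha = front_alpha + back_alpha * (1.0 - front_alpha)
    if out_alpha == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    out_rgb = tuple(
        (fc * front_alpha + bc * back_alpha * (1.0 - front_alpha)) / out_alpha
        for fc, bc in zip(front_rgb, back_rgb)
    )
    return out_rgb, out_alpha

# Example: a 50%-opaque red patch in front of a 75%-opaque blue patch.
rgb, a = over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0), 0.75)
print(rgb, a)   # the front color dominates more as front opacity increases
```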

Authors
Zhiyuan Meng
Shandong University, Qingdao, Shandong, China
Yunpeng Yang
Shandong University, Qingdao, Shandong, China
Qiong Zeng
Shandong University, Qingdao, Shandong, China
Kecheng Lu
Renmin University of China, Beijing, China
Lin Lu
Shandong University, Qingdao, Shandong, China
Changhe Tu
Shandong University, Qingdao, China
Fumeng Yang
University of Maryland College Park, College Park, Maryland, United States
Yunhai Wang
Renmin University of China, Beijing, China
DOI

10.1145/3706598.3714070

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714070

Video