Data Visualization Designs and Tools

Conference
CHI 2026
CrossLit: Connecting Visual and Textual Sensemaking for Literature Review
Abstract

Conducting literature reviews is cognitively demanding, requiring researchers to navigate large volumes of work while constructing coherent narratives that position their contributions. The process unfolds through iterative stages of sensemaking, each demanding different support. Existing tools emphasize either visual interfaces that provide macroscopic overviews or textual interfaces that support thematic organization and narrative construction. However, keeping modalities separate forces researchers to switch between tools, disrupting workflow continuity. We present CrossLit, a system that integrates and synchronizes visual and textual interfaces to support the entire process from discovering papers to composing coherent narratives. CrossLit allows researchers to group and annotate papers visually while generating aligned textual structures, and to edit text that automatically updates visual representations. We find that CrossLit helps users develop and refine conceptual structures and build narratives iteratively through seamless cross-modal transitions. We conclude by discussing design implications for synchronizing visual and textual interfaces for sensemaking support.
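The abstract describes a bidirectional link between a visual canvas of grouped papers and an aligned textual outline. As a rough illustration of that idea only, here is a minimal sketch of such a synchronization; every class, method, and name below is our own invention for illustration and not CrossLit's actual API.

```python
class SyncedReview:
    """Minimal sketch of visual-textual synchronization in the spirit of
    the CrossLit abstract: paper groups on a canvas are mirrored as an
    outline of themed sections, and edits flow both ways. All names here
    are illustrative assumptions, not the system's API."""

    def __init__(self):
        self.groups = {}  # theme -> list of paper titles (visual side)

    def group_papers(self, theme, papers):
        # Visual action: drag papers into a labeled cluster.
        self.groups.setdefault(theme, []).extend(papers)

    def to_outline(self):
        # Generate the aligned textual structure from the visual groups.
        return ["## " + theme + "\n" + "\n".join("- " + p for p in papers)
                for theme, papers in self.groups.items()]

    def rename_section(self, old, new):
        # Textual edit that automatically updates the visual grouping.
        if old in self.groups:
            self.groups[new] = self.groups.pop(old)

review = SyncedReview()
review.group_papers("evaluation", ["Paper A", "Paper B"])
review.rename_section("evaluation", "user studies")
print(review.to_outline()[0].splitlines()[0])  # "## user studies"
```

The point of the sketch is only the coupling: one shared model backs both the visual clusters and the generated outline, so an edit in either modality is reflected in the other.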

Authors
Kiroong Choe
Seoul National University, Seoul, Korea, Republic of
Eunhye Kim
KAIST, Daejeon, Korea, Republic of
Min Hyeong Kim
Seoul National University, Seoul, Korea, Republic of
Suyeon Hwang
Seoul National University, Seoul, Korea, Republic of
Sangwon Park
Dept. of Electrical and Computer Engineering, SNU, Seoul, Korea, Republic of
Nam Wook Kim
Boston College, Chestnut Hill, Massachusetts, United States
Jinwook Seo
Seoul National University, Seoul, Korea, Republic of
NetworkCanvas: Supporting Progressive Network Visualization Exploration via Adaptive Recommendations
Abstract

Network visualization has become essential for understanding complex relationships across domains, yet network complexity creates an overwhelming exploration space where users frequently miss critical patterns. Existing tools often require predetermined analysis goals and manual workflow construction, limiting accessibility for non-experts. We present NetworkCanvas, a progressive network visualization system that guides users through personalized exploration via adaptive recommendations. Our approach combines a learning mechanism that adapts to user feedback, an analytic state graph preserving exploration provenance with branching paths, and a context-aware feedback interpreter that suggests analytical continuations based on selection patterns. Controlled studies demonstrate that NetworkCanvas users identified more noteworthy observations, reported higher confidence, and exhibited more systematic exploration compared to a baseline without recommendations. These results demonstrate that recommendation-guided exploration improves outcomes over unguided manual analysis; however, because our baseline lacked recommendation functionality entirely, the specific contribution of adaptive personalization versus static guidance remains an open question. Qualitative findings suggest that recommendations reduce analysis paralysis and support systematic exploration.

Authors
Wenchao Li
HUAWEI TECHNOLOGIES CO., LTD., Shenzhen, China
Yuewen Gao
Nanjing University, Nanjing, China
Yu He
Nanjing University, Nanjing, China
Cong Zhu
Huazhong University of Science and Technology, Wuhan, China
Ke Xu
Nanjing University, Nanjing, China
DensityBars: A Space-Efficient Visualization for Event Temporal Distribution
Abstract

Event temporal distribution analysis aims to capture both global patterns (e.g., rises and peaks) and local patterns (e.g., frequent occurrences and sudden absences). Traditional charts typically rely on adjusting binning granularities to reveal such patterns. However, this strategy forces a trade-off between global clarity and local detail and may require considerably more screen space as the number of bins increases, which limits its applicability in space-constrained visual interface design. In this paper, we propose DensityBars, a space-efficient visualization that embeds fine-grained density heatmaps of event occurrences into a coarse-grained bar chart to convey both global and local patterns simultaneously. Two real-world use cases and two formal user studies demonstrate its effectiveness and usability. Insights from the studies offer implications for the visual design of temporal distribution visualizations.
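The core computation the abstract describes, a coarse bar chart with a fine-grained density strip embedded in each bar, can be sketched at the data level. This is our own illustrative reconstruction under stated assumptions (uniform bins, per-bar normalization), not the authors' implementation; all function and parameter names are invented.

```python
import numpy as np

def density_bars(events, coarse_bins=6, fine_per_bar=10):
    """Summarize event timestamps at two granularities, in the spirit of
    DensityBars: coarse bar heights plus a normalized fine-grained
    density strip per bar, suitable for rendering as a heatmap inside
    the bar. Names and choices here are illustrative assumptions."""
    events = np.asarray(events, dtype=float)
    edges = np.linspace(events.min(), events.max(), coarse_bins + 1)
    coarse, _ = np.histogram(events, bins=edges)  # global pattern
    fine = []
    for i in range(coarse_bins):
        # Local pattern: a fine histogram within each coarse bar,
        # normalized to [0, 1] so it can be mapped to a color ramp.
        f, _ = np.histogram(events, bins=fine_per_bar,
                            range=(edges[i], edges[i + 1]))
        peak = f.max()
        fine.append(f / peak if peak > 0 else f.astype(float))
    return coarse, np.array(fine)

# Toy example: a burst of events near t=0, then sparse activity.
rng = np.random.default_rng(0)
events = np.concatenate([rng.normal(0.0, 1.0, 500),
                         rng.uniform(5.0, 10.0, 100)])
coarse, fine = density_bars(events)
print(coarse.sum())  # 600: every event falls in exactly one coarse bar
print(fine.shape)    # (6, 10): one density strip per bar
```

The per-bar normalization is what lets sudden local absences stay visible even inside a tall bar; a renderer would draw `coarse` as bar heights and `fine[i]` as a heatmap texture within bar `i`.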

Authors
Mingwei Lin
South China University of Technology, Guangzhou, Guangdong, China
Qin Huang
South China University of Technology, Guangzhou, Guangdong, China
Zikun Deng
South China University of Technology, Guangzhou, Guangdong, China
Tobias Schreck
Graz University of Technology, Graz, Austria
Yi Cai
South China University of Technology, Guangzhou, China
Visualizing Tree-of-analysis: Facilitating Conversational Visual Analytics for Novices
Abstract

Conversational visual analytics (CVA) makes data exploration accessible to novices but often leaves users disoriented during multi-turn conversations. Previous approaches provide data-centric recommendations but fail to help users regain orientation. To bridge this gap, we conducted a formative study (N=12) revealing that novices are insensitive to analytical cues and rely on vague queries, leading to disorientation and task failures. In contrast, experts are sensitive to two types of analytical cues and use seven types of queries to organize their workflows. Based on these findings, we propose ToA, a novel approach that structures the CVA process as an interactive analysis tree. Moreover, we visualize this tree, with AI outputs as nodes (containing the two cue types) and user queries as edges (categorized by the seven query types), to provide novices with an overview of their analysis journey. We evaluated ToA through user studies (N=12) and expert interviews (N=3). The results suggest that ToA eliminates task failures and increases per-turn insights (+58.3%), despite longer per-turn thinking time (+17.7%). Expert interviews further confirm its potential to democratize visual analytics.
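The tree structure the abstract describes, AI outputs as nodes and typed user queries as edges, can be sketched as a small data structure. This is a minimal reconstruction of the general idea; the class, field, and method names are our own assumptions, not ToA's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisNode:
    """One turn of a conversational visual-analytics session, stored as
    a node in an analysis tree in the spirit of the ToA abstract.
    All names here are illustrative assumptions."""
    output: str                                    # AI output for this turn
    cues: list = field(default_factory=list)       # analytical cues attached
    children: list = field(default_factory=list)   # (query_type, query, node)

    def branch(self, query_type, query, output, cues=()):
        # A new user query creates a typed edge to a new node, so
        # alternative analysis directions coexist as branches.
        child = AnalysisNode(output, list(cues))
        self.children.append((query_type, query, child))
        return child

    def paths(self):
        """Enumerate root-to-leaf query sequences: the alternative
        analysis journeys a user can review to regain orientation."""
        if not self.children:
            yield []
            return
        for query_type, query, child in self.children:
            for rest in child.paths():
                yield [(query_type, query)] + rest

root = AnalysisNode("overview chart of sales by region")
europe = root.branch("drill-down", "show Europe only", "bar chart: Europe sales")
root.branch("compare", "compare 2023 vs 2024", "grouped bars by year")
europe.branch("filter", "only Q4", "Europe Q4 sales")
print(len(list(root.paths())))  # 2 analysis paths from root to leaves
```

Because branches are never overwritten, a disoriented user can always walk back up the tree and resume from any earlier node, which is the provenance property the abstract emphasizes.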

Authors
Feiyuan Qu
Zhejiang University, Hangzhou, Zhejiang, China
Tan Tang
Zhejiang University, Hangzhou, China
Zeyang Fu
Zhejiang University, Hangzhou, China
Yan Chen
Zhejiang University, Hangzhou, Zhejiang, China
Hanze Jia
Zhejiang University, Hangzhou, Zhejiang, China
Junming Gao
Laboratory of Art and Archaeology Image, Zhejiang University, Hangzhou, Zhejiang, China
Songela Nurdawuliet
Laboratory of Art and Archaeology Image, Zhejiang University, Hangzhou, Zhejiang, China
Yingcai Wu
Zhejiang University, Hangzhou, Zhejiang, China
Revealing the Gap: Visual Comparison of Large-Scale Datasets via Multi-Scale Density Difference Map
Abstract

Visual comparison of high-dimensional machine learning datasets helps practitioners identify gaps in data coverage, diagnose distribution shifts, and understand their potential influence on downstream tasks such as classification and object detection. However, the commonly used density map often blurs details and is computationally expensive. We present DiffGrid, a grid-based tool for comparing differences in large datasets. A regularized, grid-based density difference visualization method is developed to enable multi-level analysis of the differences. Interactive zooming and image labels are provided for efficiently exploring differences from overview to detail. We demonstrate the practical value of DiffGrid with two case studies, comparing coresets with full datasets and comparing synthetic infographics with real ones, and validate its effectiveness and usefulness with a quantitative experiment and a user study.
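At its core, a grid-based density difference map bins both datasets onto a shared grid and subtracts the normalized densities. The sketch below illustrates only that baseline idea (it omits the paper's regularization and multi-scale machinery); all names and parameters are our own assumptions, not DiffGrid's API.

```python
import numpy as np

def grid_density_diff(a, b, bins=32, extent=None):
    """Sketch of a grid-based density-difference map in the spirit of
    the DiffGrid abstract. `a` and `b` are (n, 2) arrays of 2D points
    (e.g., projected embeddings); the result is a bins x bins grid of
    normalized density differences. Names are illustrative assumptions."""
    if extent is None:
        # Shared extent so both datasets are binned onto the same grid.
        pts = np.vstack([a, b])
        extent = [[pts[:, 0].min(), pts[:, 0].max()],
                  [pts[:, 1].min(), pts[:, 1].max()]]
    ha, _, _ = np.histogram2d(a[:, 0], a[:, 1], bins=bins, range=extent)
    hb, _, _ = np.histogram2d(b[:, 0], b[:, 1], bins=bins, range=extent)
    # Normalize each dataset to probability mass per cell so datasets of
    # different sizes are comparable, then take the signed difference.
    return ha / ha.sum() - hb / hb.sum()

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(2000, 2))
b = rng.normal(0.5, 1.0, size=(2000, 2))  # shifted distribution
diff = grid_density_diff(a, b)
print(diff.shape)  # (32, 32)
```

Positive cells mark regions over-represented in `a` (e.g., coverage gaps in a coreset), negative cells regions over-represented in `b`; coarsening `bins` gives the overview level, refining it the detail level.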

Authors
Xinyuan Guo
Tsinghua University, Beijing, China
Xu Zhu
Tsinghua University, Beijing, China
Yilin Ye
Tsinghua University, Beijing, China
Shixia Liu
Tsinghua University, Beijing, China
Does Background Music Matter in Data Videos? A Study of Music's Impact on Persuasion, Engagement, and Recall
Abstract

Data videos combine visualization, animation, narration, and often background music to tell stories with data. While music is widely believed to enhance emotion and persuasion, its impact in data videos remains unexplored. We conducted a preregistered between-subjects experiment comparing six widely-viewed data videos with or without background music. Using Bayesian modeling and thematic analysis, we did not observe consistent measurable effects of background music on persuasion, engagement, or information recall. Qualitative responses revealed a more nuanced picture: some participants described the music as distracting or mismatched, while others reported that it enhanced enjoyment, supported focus, or strengthened emotional resonance when well aligned with the video's tone. These findings suggest that the influence of background music in data videos is highly context-dependent, shaped by genre, familiarity, and its alignment with visual–narrative structure. We discuss possible reasons for the limited measurable effects observed in real-world videos and outline opportunities for future work on purpose-designed, incidental, or adaptive music for data-driven storytelling.

Authors
Hessam Djavaherpour
Independent Researcher, Philadelphia, Pennsylvania, United States
Leni Yang
Inria, CNRS, University of Bordeaux, Bordeaux, France
Yvonne Jansen
CNRS, Inria, Univ. Bordeaux, LaBRI, Bordeaux, France
Pierre Dragicevic
Inria, CNRS, Univ. Bordeaux, Bordeaux, France
Narges Mahyar
City St George’s, University of London, London, United Kingdom
Mahmood Jasim
Louisiana State University, Baton Rouge, Louisiana, United States
Studying the Separability of Visual Channel Pairs in Symbol Maps
Abstract

Visualizations often encode multivariate data by mapping attributes to distinct visual channels such as color, size, or shape. The effectiveness of these encodings depends on separability—the extent to which channels can be perceived independently. Yet systematic evidence for separability, especially in map-based contexts, is lacking. We present a crowdsourced experiment that evaluates the separability of four channel pairs—color (ordered) × shape, color (ordered) × size, size × shape, and size × orientation—in the context of bivariate symbol maps. Both accuracy and speed analyses show that color × shape is the most separable and size × orientation the least separable, while color × size and size × shape do not differ. Separability also proved asymmetric—performance depended on which channel encoded the task-relevant variable, with color and shape outperforming size, and with the square shape especially difficult to discriminate. Our findings advance the empirical understanding of visual separability, with implications for multivariate map design.

Authors
Poorna Talkad Sukumar
New York University, Brooklyn, New York, United States
Maurizio Porfiri
New York University, Brooklyn, New York, United States
Oded Nov
New York University, New York, New York, United States