"Hey Dashboard!": Supporting Voice, Text, and Pointing Modalities in Dashboard Onboarding using Large Language Models

Abstract

Visualization dashboards are regularly used for data exploration and analysis, but their complex interactions and interlinked views often require time-consuming onboarding sessions with dashboard authors. Preparing such onboarding materials is labor-intensive, and they must be manually updated whenever the dashboard changes. Recent advances in multimodal interaction powered by large language models (LLMs) offer new ways to support self-guided onboarding. We present DIANA (Dashboard Interactive Assistant for Navigation and Analysis), a multimodal dashboard assistant that supports navigation and guided analysis through chat, audio, and mouse-based interactions. Users can choose any interaction modality, or a combination of them, to onboard themselves onto a dashboard. Each modality highlights relevant dashboard features to support user orientation. Unlike typical LLM-based systems that rely solely on text chat, DIANA combines multiple modalities to provide explanations directly within the dashboard interface. We conducted a comparative qualitative user study to understand how the different modalities are used for onboarding tasks of varying types and complexities.

Authors
Vaishali Dhanoa
Aarhus University, Aarhus, Denmark
Gabriela Molina León
Aarhus University, Aarhus, Denmark
Eve Hoggan
Computer Science, Aarhus University, Aarhus, Denmark
Eduard Gröller
Institute of Visual Computing & Human-Centered Technology, Vienna, Austria
Marc Streit
Johannes Kepler University Linz, Linz, Austria
Niklas Elmqvist
Aarhus University, Aarhus, Denmark

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: AI & Data Visualization

M2 - Room M211/212
6 presentations
April 15, 2026, 18:00–19:30