How Do Analysts Understand and Verify AI-Assisted Data Analyses?

Abstract

Data analysis is challenging as it requires synthesizing domain knowledge, statistical expertise, and programming skills. Assistants powered by large language models (LLMs), such as ChatGPT, can assist analysts by translating natural language instructions into code. However, AI-assistant responses and analysis code can be misaligned with the analyst's intent or be seemingly correct but lead to incorrect conclusions. Therefore, validating AI assistance is crucial and challenging. Here, we explore how analysts understand and verify the correctness of AI-generated analyses. To observe analysts in diverse verification approaches, we develop a design probe equipped with natural language explanations, code, visualizations, and interactive data tables with common data operations. Through a qualitative user study (n=22) using this probe, we uncover common behaviors within verification workflows and how analysts' programming, analysis, and tool backgrounds reflect these behaviors. Additionally, we provide recommendations for analysts and highlight opportunities for designers to improve future AI-assistant experiences.

Authors
Ken Gu
Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States
Ruoxi Shang
University of Washington, Seattle, Washington, United States
Tim Althoff
Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States
Chenglong Wang
Microsoft Research, Redmond, Washington, United States
Steven M. Drucker
Microsoft Research, Redmond, Washington, United States
Paper URL

doi.org/10.1145/3613904.3642497

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Participatory AI

315
5 presentations
2024-05-15 20:00:00 – 2024-05-15 21:20:00