Participatory AI

Conference Name
CHI 2024
How Do Analysts Understand and Verify AI-Assisted Data Analyses?
Abstract

Data analysis is challenging as it requires synthesizing domain knowledge, statistical expertise, and programming skills. Assistants powered by large language models (LLMs), such as ChatGPT, can assist analysts by translating natural language instructions into code. However, AI-assistant responses and analysis code can be misaligned with the analyst's intent or be seemingly correct but lead to incorrect conclusions. Therefore, validating AI assistance is crucial and challenging. Here, we explore how analysts understand and verify the correctness of AI-generated analyses. To observe analysts in diverse verification approaches, we develop a design probe equipped with natural language explanations, code, visualizations, and interactive data tables with common data operations. Through a qualitative user study (n=22) using this probe, we uncover common behaviors within verification workflows and how analysts' programming, analysis, and tool backgrounds reflect these behaviors. Additionally, we provide recommendations for analysts and highlight opportunities for designers to improve future AI-assistant experiences.

Authors
Ken Gu
Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States
Ruoxi Shang
University of Washington, Seattle, Washington, United States
Tim Althoff
Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States
Chenglong Wang
Microsoft Research, Redmond, Washington, United States
Steven M. Drucker
Microsoft Research, Redmond, Washington, United States
Paper URL

https://doi.org/10.1145/3613904.3642497

Video
From Fitting Participation to Forging Relationships: The Art of Participatory ML
Abstract

Participatory machine learning (ML) encourages the inclusion of end users and people affected by ML systems in design and development processes. We interviewed 18 participation brokers—individuals who facilitate such inclusion and transform the products of participants' labour into inputs for an ML artefact or system—across a range of organisational settings and project locations. Our findings demonstrate the inherent challenges of integrating messy contextual information generated through participation with the structured data formats required by ML workflows and the uneven power dynamics in project contexts. We advocate for evolution in the role of brokers to more equitably balance value generated in Participatory ML projects for design and development teams with value created for participants. To move beyond 'fitting' participation to existing processes and empower participants to envision alternative futures through ML, brokers must become educators and advocates for end users, while attending to frustration and dissent from indirect stakeholders.

Authors
Ned Cooper
Australian National University, Canberra, ACT, Australia
Alexandra C. Zafiroglu
Australian National University, Canberra, ACT, Australia
Paper URL

https://doi.org/10.1145/3613904.3642775

Video
Explaining It Your Way - Findings from a Co-Creative Design Workshop on Designing XAI Applications with AI End-Users from the Public Sector
Abstract

Human-Centered AI prioritizes end-users' needs like transparency and usability. This is vital for applications that affect people's everyday lives, such as social assessment tasks in the public sector. This paper discusses our pioneering effort to involve public sector AI users in XAI application design through a co-creative workshop with unemployment consultants from Estonia. The workshop's objectives were identifying user needs and creating novel XAI interfaces for the AI system in use. As a result of our user-centered design approach, consultants were able to develop AI interface prototypes that would support them in creating success stories for their clients by getting detailed feedback and suggestions. We present a discussion on the value of co-creative design methods with end-users working in the public sector to improve AI application design and provide a summary of recommendations for practitioners and researchers working on AI systems in the public sector.

Award
Honorable Mention
Authors
Katharina Weitz
University of Augsburg, Augsburg, Germany
Ruben Schlagowski
University of Augsburg, Augsburg, Germany
Elisabeth André
University of Augsburg, Augsburg, Germany
Maris Männiste
University of Tartu, Tartu, Estonia
Ceenu George
TU Berlin, Berlin, Germany
Paper URL

https://doi.org/10.1145/3613904.3642563

Video
Generative AI in the Wild: Prospects, Challenges, and Strategies
Abstract

Propelled by their remarkable capabilities to generate novel and engaging content, Generative Artificial Intelligence (GenAI) technologies are disrupting traditional workflows in many industries. While prior research has examined GenAI from a techno-centric perspective, there is still a lack of understanding about how users perceive and utilize GenAI in real-world scenarios. To bridge this gap, we conducted semi-structured interviews with (N = 18) GenAI users in creative industries, investigating the human-GenAI co-creation process within a holistic LUA (Learning, Using and Assessing) framework. Our study uncovered an intriguingly complex landscape: Prospects -- GenAI greatly fosters the co-creation between human expertise and GenAI capabilities, profoundly transforming creative workflows; Challenges -- Meanwhile, users face substantial uncertainties and complexities arising from resource availability, tool usability, and regulatory compliance; Strategies -- In response, users actively devise various strategies to overcome many of these challenges. Our study reveals key implications for the design of future GenAI tools.

Authors
Yuan Sun
University of Florida, Gainesville, Florida, United States
Eunchae Jang
Pennsylvania State University, University Park, Pennsylvania, United States
Fenglong Ma
The Pennsylvania State University, University Park, Pennsylvania, United States
Ting Wang
Stony Brook University, Stony Brook, New York, United States
Paper URL

https://doi.org/10.1145/3613904.3642160

Video
The Situate AI Guidebook: Co-Designing a Toolkit to Support Multi-Stakeholder, Early-stage Deliberations Around Public Sector AI Proposals
Abstract

Public sector agencies are rapidly deploying AI systems to augment or automate critical decisions in real-world contexts like child welfare, criminal justice, and public health. A growing body of work documents how these AI systems often fail to improve services in practice. These failures can often be traced to decisions made during the early stages of AI ideation and design, such as problem formulation. However, today, we lack systematic processes to support effective, early-stage decision-making about whether and under what conditions to move forward with a proposed AI project. To understand how to scaffold such processes in real-world settings, we worked with public sector agency leaders, AI developers, frontline workers, and community advocates across four public sector agencies and three community advocacy groups in the United States. Through an iterative co-design process, we created the Situate AI Guidebook: a structured process centered around a set of deliberation questions to scaffold conversations around (1) goals and intended use of a proposed AI system, (2) societal and legal considerations, (3) data and modeling constraints, and (4) organizational governance factors. We discuss how the guidebook's design is informed by participants' challenges, needs, and desires for improved deliberation processes. We further elaborate on implications for designing responsible AI toolkits in collaboration with public sector agency stakeholders and opportunities for future work to expand upon the guidebook. This design approach can be more broadly adopted to support the co-creation of responsible AI toolkits that scaffold key decision-making processes surrounding the use of AI in the public sector and beyond.

Authors
Anna Kawakami
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Amanda Coston
Microsoft Research, Cambridge, Massachusetts, United States
Haiyi Zhu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hoda Heidari
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Kenneth Holstein
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3613904.3642849

Video