Farsight: Fostering Responsible AI Awareness During AI Application Prototyping

Abstract

Prompt-based interfaces for Large Language Models (LLMs) have made prototyping and building AI-powered applications easier than ever before. However, identifying potential harms that may arise from AI applications remains a challenge, particularly during prompt-based prototyping. To address this, we present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping. Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms. We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers. After using Farsight, AI prototypers in our user study are better able to independently identify potential harms associated with a prompt and find our tool more useful and usable than existing resources. Their qualitative feedback also highlights that Farsight encourages them to focus on end-users and think beyond immediate harms. We discuss these findings and reflect on their implications for designing AI prototyping experiences that meaningfully engage with AI harms. Farsight is publicly accessible at: https://pair-code.github.io/farsight.

Award
Honorable Mention
Authors
Zijie J. Wang
Georgia Tech, Atlanta, Georgia, United States
Chinmay Kulkarni
Emory University, Atlanta, Georgia, United States
Lauren Wilcox
Georgia Institute of Technology, Atlanta, Georgia, United States
Michael Terry
Google, Cambridge, Massachusetts, United States
Michael Madaio
Google Research, New York, New York, United States
Paper URL

doi.org/10.1145/3613904.3642335

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: User Studies on Large Language Models

Room 314
5 presentations
2024-05-13, 20:00–21:20