AI/ML & seeing through the black box

Paper session

Conference
CHI 2020
Monsters, Metaphors, and Machine Learning
Abstract

Machine learning (ML) poses complex challenges for user experience (UX) designers. Typically unpredictable and opaque, it may produce unforeseen outcomes detrimental to particular groups or individuals, yet simultaneously promise amazing breakthroughs in areas as diverse as medical diagnosis and universal translation. This results in a polarized view of ML, which is often manifested through a technology-as-monster metaphor. In this paper, we acknowledge the power and potential of this metaphor by resurfacing historic complexities in human-monster relations. We (re)introduce these liminal and ambiguous creatures, and discuss their relation to ML. We offer a background to designers' use of metaphor, and show how the technology-as-monster metaphor can generatively probe and (re)frame the questions ML poses. We illustrate the effectiveness of this approach through a detailed discussion of an early-stage generative design workshop inquiring into ML approaches to supporting student mental health and well-being.

Keywords
Machine Learning
UX Design
Generative Metaphor
Monster Theory
Authors
Graham Dove
New York University, New York, NY, USA
Anne-Laure Fayard
New York University, Brooklyn, NY, USA
DOI

10.1145/3313831.3376275

Paper URL

https://doi.org/10.1145/3313831.3376275

Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design
Abstract

Artificial Intelligence (AI) plays an increasingly important role in improving HCI and user experience. Yet many challenges persist in designing and innovating valuable human-AI interactions. For example, AI systems can make unpredictable errors, and these errors damage UX and even lead to undesired societal impact. However, HCI routinely grapples with complex technologies and mitigates their unintended consequences. What makes AI different? What makes human-AI interaction appear particularly difficult to design? This paper investigates these questions. We synthesize prior research, our own design and research experience, and our observations when teaching human-AI interaction. We identify two sources of AI's distinctive design challenges: 1) uncertainty surrounding AI's capabilities, 2) the complexity of AI's outputs, which spans from simple to adaptively complex. We identify four levels of AI systems. On each level, designers encounter a different subset of the design challenges. We demonstrate how these findings reveal new insights for designers, researchers, and design tool makers in productively addressing the challenges of human-AI interaction going forward.

Award
Honorable Mention
Keywords
User experience
artificial intelligence
sketching
prototyping
Authors
Qian Yang
Carnegie Mellon University, Pittsburgh, PA, USA
Aaron Steinfeld
Carnegie Mellon University, Pittsburgh, PA, USA
Carolyn Rosé
Carnegie Mellon University, Pittsburgh, PA, USA
John Zimmerman
Carnegie Mellon University, Pittsburgh, PA, USA
DOI

10.1145/3313831.3376301

Paper URL

https://doi.org/10.1145/3313831.3376301

Researching AI Legibility through Design
Abstract

Everyday interactions with computers are increasingly likely to involve elements of Artificial Intelligence (AI). Encompassing a broad spectrum of technologies and applications, AI poses many challenges for HCI and design. One such challenge is the need to make AI's role in a given system legible to the user in a meaningful way. In this paper we employ a Research through Design (RtD) approach to explore how this might be achieved. Building on contemporary concerns and a thorough exploration of related research, our RtD process reflects on designing imagery intended to help increase AI legibility for users. The paper makes three contributions. First, we thoroughly explore prior research in order to critically unpack the AI legibility problem space. Second, we respond with design proposals whose aim is to enhance the legibility, to users, of systems using AI. Third, we explore the role of design-led enquiry as a tool for critically exploring the intersection between HCI and AI research.

Keywords
Artificial Intelligence
Machine Learning
Legibility
Human-Data Interaction
Research through Design
Authors
Joseph Lindley
Lancaster University, Lancaster, Lancashire, United Kingdom
Haider Ali Akmal
Lancaster University, Lancaster, Lancashire, United Kingdom
Franziska Pilling
Lancaster University, Lancaster, Lancashire, United Kingdom
Paul Coulton
Lancaster University, Lancaster, Lancashire, United Kingdom
DOI

10.1145/3313831.3376792

Paper URL

https://doi.org/10.1145/3313831.3376792

What is AI Literacy? Competencies and Design Considerations
Abstract

Artificial intelligence (AI) is becoming increasingly integrated in user-facing technology, but public understanding of these technologies is often limited. There is a need for additional HCI research investigating a) what competencies users need in order to effectively interact with and critically evaluate AI and b) how to design learner-centered AI technologies that foster increased user understanding of AI. This paper takes a step towards realizing both of these goals by providing a concrete definition of AI literacy based on existing research. We synthesize a variety of interdisciplinary literature into a set of core competencies of AI literacy and suggest several design considerations to support AI developers and educators in creating learner-centered AI. These competencies and design considerations are organized in a conceptual framework thematically derived from the literature. This paper's contributions can be used to start a conversation about and guide future research on AI literacy within the HCI community.

Award
Honorable Mention
Keywords
AI literacy
AI education
AI for K-12
artificial intelligence
machine learning
computing education
Authors
Duri Long
Georgia Institute of Technology, Atlanta, GA, USA
Brian Magerko
Georgia Institute of Technology, Atlanta, GA, USA
DOI

10.1145/3313831.3376727

Paper URL

https://doi.org/10.1145/3313831.3376727

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Abstract

A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic. While many recognize the necessity to incorporate explainability features in AI systems, how to address real-world user needs for understanding AI remains an open question. By interviewing 20 UX and design practitioners working on various AI products, we seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products. To do so, we develop an algorithm-informed XAI question bank in which user needs for explainability are represented as prototypical questions users might ask about the AI, and use it as a study probe. Our work contributes insights into the design space of XAI, informs efforts to support design practices in this space, and identifies opportunities for future XAI work. We also provide an extended XAI question bank and discuss how it can be used for creating user-centered XAI.

Award
Honorable Mention
Keywords
Explainable AI
Human-AI interaction
User experience
Authors
Q. Vera Liao
IBM Research AI, Yorktown Heights, NY, USA
Daniel Gruen
IBM Research, Cambridge, MA, USA
Sarah Miller
IBM Research, Cambridge, MA, USA
DOI

10.1145/3313831.3376590

Paper URL

https://doi.org/10.1145/3313831.3376590