Interactive ML & recommender systems

Paper session

Conference Name
CHI 2020
No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML
Abstract

Automatically generated explanations of how machine learning (ML) models reason can help users understand and accept them. However, explanations can have unintended consequences: promoting over-reliance or undermining trust. This paper investigates how explanations shape users' perceptions of ML models with or without the ability to provide feedback to them: (1) does revealing model flaws increase users' desire to "fix" them; (2) does providing explanations cause users to believe – wrongly – that models are introspective, and will thus improve over time? Through two controlled experiments – varying model quality – we show how the combination of explanations and user feedback impacted perceptions, such as frustration and expectations of model improvement. Explanations without the opportunity for feedback were frustrating with a lower-quality model, while interactions between explanation and feedback for the higher-quality model suggest that detailed feedback should not be requested without explanation. Users expected model correction, regardless of whether they provided feedback or received explanations.
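
The paper studies perceptions of explanations and feedback rather than a specific explanation algorithm, but a minimal sketch helps ground the setup: keyword-style (feature-importance) explanations for a simple text classifier, with a stub where a feedback condition could collect user corrections. All data, function names, and the feedback format below are illustrative assumptions, not the authors' experimental system.

```python
# Illustrative sketch: keyword-style explanations for a naive Bayes text
# classifier, plus a stub for collecting user feedback on a prediction.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["great movie, loved it", "terrible plot, boring acting",
        "wonderful cast and story", "awful pacing, fell asleep"]
labels = ["pos", "neg", "pos", "neg"]

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = MultinomialNB().fit(X, labels)

def explain(text, top_k=3):
    """Return the predicted label and the top_k words that most support it."""
    x = vec.transform([text])
    pred = clf.predict(x)[0]
    class_idx = list(clf.classes_).index(pred)
    log_probs = clf.feature_log_prob_[class_idx]   # per-word evidence for the class
    present = x.nonzero()[1]                       # indices of words in this document
    ranked = sorted(present, key=lambda i: log_probs[i], reverse=True)[:top_k]
    words = vec.get_feature_names_out()
    return pred, [words[i] for i in ranked]

def collect_feedback(pred, keywords):
    """Stub: in a feedback condition, users could flag misleading keywords."""
    return {"prediction": pred, "flagged_keywords": []}

pred, keywords = explain("boring story but wonderful acting")
print(pred, keywords)
```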

Keywords
interactive machine learning
explainable machine learning
Authors
Alison Smith-Renner
University of Maryland, College Park, MD, USA
Ron Fan
University of Washington, Seattle, WA, USA
Melissa Birchfield
University of Washington, Seattle, WA, USA
Tongshuang Wu
University of Washington, Seattle, WA, USA
Jordan Boyd-Graber
University of Maryland, College Park, MD, USA
Daniel S. Weld
University of Washington, Seattle, WA, USA
Leah Findlater
University of Washington, Seattle, WA, USA
DOI

10.1145/3313831.3376624

Paper URL

https://doi.org/10.1145/3313831.3376624

Silva: Interactively Assessing Machine Learning Fairness Using Causality
Abstract

Machine learning models risk encoding unfairness on the part of their developers or data sources. However, assessing fairness is challenging, as analysts might misidentify sources of bias, fail to notice them, or misapply metrics. In this paper, we introduce Silva, a system for interactively exploring potential sources of unfairness in datasets or machine learning models. Silva directs user attention to relationships between attributes through a global causal view, provides interactive recommendations, presents intermediate results, and visualizes metrics. We describe the implementation of Silva, identify salient design and technical challenges, and provide an evaluation of the tool in comparison to an existing fairness optimization tool.
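
Silva's contribution is the interactive causal view rather than any single metric, but a toy sketch of a group fairness metric illustrates the kind of quantity such a tool might visualize alongside its causal graph. The metric choice (demographic parity difference), data, and column names are assumptions for illustration, not Silva's implementation.

```python
# Toy sketch (not Silva's implementation): a simple group fairness metric of
# the sort an interactive auditing tool could surface for a chosen attribute.
import pandas as pd

def demographic_parity_difference(df, group_col, outcome_col):
    """Difference in positive-outcome rates across groups in group_col."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.max() - rates.min()

# Invented data: predicted loan approvals split by a sensitive attribute.
df = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0],
})
print(demographic_parity_difference(df, "group", "approved"))  # ~0.33
```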

Keywords
Machine Learning Fairness
bias
interactive system
Authors
Jing Nathan Yan
Cornell University, Ithaca, NY, USA
Ziwei Gu
Cornell University, Ithaca, NY, USA
Hubert Lin
Cornell University, Ithaca, NY, USA
Jeffrey M. Rzeszotarski
Cornell University, Ithaca, NY, USA
DOI

10.1145/3313831.3376447

Paper URL

https://doi.org/10.1145/3313831.3376447

ICONATE: Automatic Compound Icon Generation and Ideation
Abstract

Compound icons are prevalent on signs, webpages, and infographics, effectively conveying complex and abstract concepts, such as "no smoking" and "health insurance", with simple graphical representations. However, designing such icons requires experience and creativity, in order to efficiently navigate the semantics, space, and style features of icons. In this paper, we aim to automate the process of generating icons given compound concepts, to facilitate rapid compound icon creation and ideation. Informed by ethnographic interviews with professional icon designers, we have developed ICONATE, a novel system that automatically generates compound icons based on textual queries and allows users to explore and customize the generated icons. At the core of ICONATE is a computational pipeline that automatically finds commonly used icons for sub-concepts and arranges them according to inferred conventions. To enable the pipeline, we collected a new dataset, Compicon1k, consisting of 1000 compound icons annotated with semantic labels (i.e., concepts). Through user studies, we have demonstrated that our tool is able to automate or accelerate the compound icon design process for both novices and professionals.
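
As a rough illustration of the pipeline the abstract describes (decompose the query into sub-concepts, retrieve a commonly used icon for each, and arrange them by convention), here is a hypothetical sketch. The icon library, decomposition, and layout rule are invented stand-ins; ICONATE's actual retrieval and layout are informed by the annotated Compicon1k dataset and designer conventions.

```python
# Hypothetical sketch of the described pipeline: map a compound concept to
# sub-concepts, look up an icon for each, and pick a simple layout convention.
ICON_LIBRARY = {              # sub-concept -> icon asset id (toy stand-in)
    "health": "icon_heart",
    "insurance": "icon_shield",
    "no": "icon_prohibition",
    "smoking": "icon_cigarette",
}

def generate_compound_icon(query):
    sub_concepts = query.lower().split()                     # naive decomposition
    icons = [ICON_LIBRARY[c] for c in sub_concepts if c in ICON_LIBRARY]
    # Toy convention: a negating modifier like "no" overlays the icon it
    # modifies; otherwise, place the retrieved icons side by side.
    layout = "overlay" if "no" in sub_concepts else "side_by_side"
    return {"layout": layout, "icons": icons}

print(generate_compound_icon("no smoking"))        # prohibition sign over cigarette
print(generate_compound_icon("health insurance"))  # heart beside shield
```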

Keywords
Compound Icon
Ideogram
Pictogram
Icon Design
Graphic Design
Design Tools
Authors
Nanxuan Zhao
Harvard University & City University of Hong Kong, Cambridge, MA, USA
Nam Wook Kim
Boston College, Chestnut Hill, MA, USA
Laura Mariah Herman
Adobe Inc., San Francisco, CA, USA
Hanspeter Pfister
Harvard University, Cambridge, MA, USA
Rynson W.H. Lau
City University of Hong Kong, Hong Kong, China
Jose Echevarria
Adobe Research, San Jose, CA, USA
Zoya Bylinskii
Adobe Research, Cambridge, MA, USA
DOI

10.1145/3313831.3376618

Paper URL

https://doi.org/10.1145/3313831.3376618

A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores
Abstract

The increased use of algorithmic predictions in sensitive domains has been accompanied by both enthusiasm and concern. To understand the opportunities and risks of these technologies, it is key to study how experts alter their decisions when using such tools. In this paper, we study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions. We focus on the question: Are humans capable of identifying cases in which the machine is wrong, and of overriding those recommendations? We first show that humans do alter their behavior when the tool is deployed. Then, we show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk, even when overriding the recommendation requires supervisory approval. These results highlight the risks of full automation and the importance of designing decision pipelines that provide humans with autonomy.
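
The core comparison in the abstract, adherence to the tool's recommendation when the displayed score is versus is not an erroneous estimate of risk, can be pictured with a toy calculation. The data and column names below are invented; the paper's analysis of real hotline decisions is substantially more involved.

```python
# Toy sketch of the adherence comparison described in the abstract: how often
# screeners follow the recommendation when the displayed score is a good vs.
# poor estimate of risk. Data are invented for illustration only.
import pandas as pd

cases = pd.DataFrame({
    "score_erroneous":          [False, False, False, False, True, True, True, True],
    "followed_recommendation":  [1,     1,     1,     0,     1,    0,    0,    0],
})

adherence = cases.groupby("score_erroneous")["followed_recommendation"].mean()
print(adherence)
# Lower adherence when score_erroneous is True would mirror the reported finding
# that workers are less likely to follow recommendations based on mis-estimated risk.
```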

Keywords
Human-in-the-loop
Decision support
Algorithm aversion
Automation bias
Algorithm assisted decision making
Child welfare
Authors
Maria De-Arteaga
Carnegie Mellon University, Pittsburgh, PA, USA
Riccardo Fogliato
Carnegie Mellon University, Pittsburgh, PA, USA
Alexandra Chouldechova
Carnegie Mellon University, Pittsburgh, PA, USA
DOI

10.1145/3313831.3376638

Paper URL

https://doi.org/10.1145/3313831.3376638

Dziban: Balancing Agency & Automation in Visualization Design via Anchored Recommendations
Abstract

Visualization recommender systems attempt to automate design decisions spanning choices of selected data, transformations, and visual encodings. However, across invocations such recommenders may lack the context of prior results, producing unstable outputs that override earlier design choices. To better balance automated suggestions with user intent, we contribute Dziban, a visualization API that supports both ambiguous specification and a novel anchoring mechanism for conveying desired context. Dziban uses the Draco knowledge base to automatically complete partial specifications and suggest appropriate visualizations. In addition, it extends Draco with chart similarity logic, enabling recommendations that also remain perceptually similar to a provided "anchor" chart. Existing APIs for exploratory visualization, such as ggplot2 and Vega-Lite, require fully specified chart definitions. In contrast, Dziban provides a more concise and flexible authoring experience through automated design, while preserving predictability and control through anchored recommendations.
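
A hypothetical sketch of the anchoring idea: score candidate charts by a design cost (as a Draco-style knowledge base would) plus a penalty for dissimilarity to the anchor chart, and recommend the minimum. The spec format, distance function, and weight below are invented for illustration and are not Dziban's API.

```python
# Hypothetical sketch of anchored recommendation: prefer low-cost designs that
# also stay close to a provided anchor chart. All specs and costs are toy values.
def chart_distance(spec_a, spec_b):
    """Count differing encoding choices between two toy chart specs."""
    keys = set(spec_a) | set(spec_b)
    return sum(spec_a.get(k) != spec_b.get(k) for k in keys)

def recommend(candidates, anchor=None, weight=1.0):
    """Pick the candidate minimizing design cost plus distance to the anchor."""
    def score(c):
        cost = c["cost"]
        if anchor is not None:
            cost += weight * chart_distance(c["spec"], anchor)
        return cost
    return min(candidates, key=score)

anchor = {"mark": "point", "x": "horsepower", "y": "mpg"}
candidates = [
    {"spec": {"mark": "bar", "x": "origin", "y": "mean_mpg"}, "cost": 1.0},
    {"spec": {"mark": "point", "x": "horsepower", "y": "mpg", "color": "origin"}, "cost": 1.5},
]
print(recommend(candidates, anchor=anchor))  # the anchored (scatter-like) variant wins
```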

Keywords
visualization
recommendation
anchoring
language
Authors
Halden Lin
University of Washington, Seattle, WA, USA
Dominik Moritz
University of Washington, Seattle, WA, USA
Jeffrey Heer
University of Washington, Seattle, WA, USA
DOI

10.1145/3313831.3376880

Paper URL

https://doi.org/10.1145/3313831.3376880
