Interpreting and Explaining AI

Conference Name
CSCW2021
Algorithmic Folk Theories and Identity: How TikTok Users Co-Produce Knowledge of Identity and Engage in Algorithmic Resistance
Abstract

Algorithms in online platforms interact with users' identities in different ways. However, little is known about how users understand the interplay between identity and algorithmic processes on these platforms, and if and how such understandings shape their behavior on these platforms in return. Through semi-structured interviews with 15 US-based TikTok users, we detail users' algorithmic folk theories of the For You Page algorithm in relation to two inter-connected identity types: person and social identity. Participants identified potential harms that can accompany algorithms' tailoring content to their person identities. Further, they believed the algorithm actively suppresses content related to marginalized social identities based on race and ethnicity, body size and physical appearance, ability status, class status, LGBTQ identity, and political and social justice group affiliation. We propose a new algorithmic folk theory of social feeds—The Identity Strainer Theory—to describe when users believe an algorithm filters out and suppresses certain social identities. In developing this theory, we introduce the concept of algorithmic privilege as held by users positioned to benefit from algorithms on the basis of their identities. We further propose the concept of algorithmic representational harm to refer to the harm users experience when they lack algorithmic privilege and are subjected to algorithmic symbolic annihilation. Additionally, we describe how participants changed their behaviors to shape their algorithmic identities to align with how they understood themselves, as well as to resist the suppression of marginalized social identities and lack of algorithmic privilege via individual actions, collective actions, and altering their performances. We theorize our findings to detail the ways the platform's algorithm and its users co-produce knowledge of identity on the platform. We argue the relationship between users' algorithmic folk theories and identity is consequential for social media platforms, as it impacts users' experiences, behaviors, sense of belonging, and perceived ability to be seen, heard, and valued by others as mediated through algorithmic systems.

Award
Honorable Mention
Authors
Nadia Karizat
University of Michigan, Ann Arbor, Michigan, United States
Daniel Delmonaco
University of Michigan, Ann Arbor, Michigan, United States
Motahhare Eslami
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Nazanin Andalibi
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3476046

How do People Train a Machine? Strategies and (Mis)Understandings
Abstract

Machine learning has become pervasive in modern interactive technology due to the wide range of complex tasks it can handle. However, most machine learning systems provide users with surprisingly little, if any, agency over how their models are trained from data. In this paper, we explore how people handle learning algorithms, what they understand from the algorithms' behavior, and what strategies they use to "make it work". To address these questions, we developed a web-based, sketch-based recognition algorithm, called Marcelle-Sketch, that end-users can teach. We present two experimental studies that investigate people's strategies, beliefs, and (mis)understandings in a realistic algorithm-teaching task. Study one took place in an online workshop that collected drawing data from 22 novice users, whose teaching strategies we analyzed. Study two involved eight participants who performed a similar task during individual teaching sessions, using a think-aloud protocol. Our results show that users schedule their inputs differently, and that their strategies incorporate investigations of the model's capabilities using input variability, driving changes in users' understanding of machine learning during the session. We conclude with implications for the design of richer, more human-centered forms of interaction with machine learning, and for ML education and democratization.
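
The study's interactive teaching setup can be pictured with a minimal, hypothetical Python sketch of an end-user teaching loop: the user repeatedly supplies a drawing and a label, the model updates incrementally, and the user can probe it between examples with varied inputs. The feature extractor, label set, and use of scikit-learn's SGDClassifier below are illustrative assumptions, not the authors' Marcelle-Sketch implementation.

# Minimal, hypothetical sketch of an end-user teaching loop (not Marcelle-Sketch itself).
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = ["circle", "square", "arrow"]   # assumed label set
model = SGDClassifier()                   # online classifier supporting incremental updates

def featurize(strokes):
    # Toy feature extractor: flatten all stroke points into a fixed-length vector.
    flat = np.concatenate([np.asarray(s, dtype=float).ravel() for s in strokes])
    vec = np.zeros(128)
    n = min(len(flat), 128)
    vec[:n] = flat[:n]
    return vec.reshape(1, -1)

def teach(strokes, label):
    # One teaching step: the user supplies a drawing plus its label; the model updates.
    model.partial_fit(featurize(strokes), [label], classes=CLASSES)

def probe(strokes):
    # The user tests the model between teaching steps to see what it has learned.
    return model.predict(featurize(strokes))[0]

# Example session: teach two shapes, then probe with a variation of the first.
teach([[(0, 0), (1, 1), (0, 2), (-1, 1), (0, 0)]], "circle")
teach([[(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)]], "square")
print(probe([[(0, 0), (1, 1), (0, 2), (-1, 1)]]))

Probing the model with varied drawings between teaching steps corresponds to the input-variability investigations the study observed.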

Authors
Téo Sanchez
Université Paris Saclay, Gif-sur-Yvette, France
Baptiste Caramiaux
Jules Françoise
CNRS, Gif-sur-Yvette, France
Frederic Bevilacqua
STMS IRCAM-CNRS-Sorbonne Université, Paris, France
Wendy E. Mackay
Université Paris Saclay, Gif-sur-Yvette, France
Paper URL

https://doi.org/10.1145/3449236

Video
Adaptive Folk Theorization as a Path to Algorithmic Literacy on Changing Platforms
Abstract

The increased importance of opaque, algorithmically-driven social platforms (e.g., Facebook, YouTube) to everyday users as a medium for self-presentation effectively requires users to speculate on how platforms work in order to decide how to behave to achieve their self-presentation goals. This speculation takes the form of folk theorization. Because platforms constantly change, users must constantly re-evaluate their folk theories. Based on an Asynchronous Remote Community study of LGBTQ+ social platform users with heightened self-presentation concerns, I present an updated model of the folk theorization process to account for platform change. Moreover, I find that both the complexity of the user’s folk theorization and their overall relationship with the platform impact this theorization process, and present new concepts for examining and classifying these elements: theorization complexity level and perceived platform spirit. I conclude by proposing a folk theorization-based path towards an extensible algorithmic literacy which would support users in ongoing theorization.

Award
Honorable Mention
Authors
Michael Ann DeVito
University of Colorado Boulder, Boulder, Colorado, United States
Paper URL

https://doi.org/10.1145/3476080

Video
Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents
Abstract

While philosophers hold that it is patently absurd to blame robots or hold them morally responsible (e.g. Sparrow, 2007), a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts (e.g. Malle et al. 2016). This is disconcerting: Blame might be shifted from owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents (Kneer & Stuart, 2021). In this paper, we explore one of the potential underlying reasons for robot blame, namely the folk's willingness to ascribe inculpating mental states or "mens rea" to robots. In a vignette-based experiment (N=513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) judgments of attributed mental states were, as suspected, similar across agent types. This raised the question – also explored in the experiment – whether people attribute knowledge and desire to robots in a metaphorical way (e.g. the robot "knew" rather than actually knew). However, (iii), according to our data, people were unwilling to downgrade their mens rea ascriptions to a merely metaphorical sense. Finally, (iv), we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: People were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that they are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, which might lead to inappropriately letting the responsible human agent off the moral hook.

Authors
Michael T. Stuart
University of Tübingen, Tübingen, Germany
Markus Kneer
University of Zurich, Zurich, Switzerland
Paper URL

https://doi.org/10.1145/3479507

Video
Crowdsourcing and Evaluating Concept-driven Explanations of Machine Learning Models
Abstract

An important challenge in building explainable artificially intelligent (AI) systems is designing interpretable explanations. AI models often use low-level data features which may be hard for humans to interpret. Recent research suggests that situating machine decisions in abstract, human understandable concepts can help. However, it is challenging to determine the right level of conceptual mapping. In this research, we use granularity (of data features) and context (of data instance) as ways to determine this conceptual mapping. Based on these measures, we explore strategies for designing explanations in classification models. We introduce an end-to-end concept elicitation pipeline that supports gathering high-level concepts for a given data set. Through crowdsourced experiments, we examine how providing conceptual information shapes the effectiveness of explanations, finding that a balance between coarse and fine-grained explanations helps users better estimate model predictions. We organize our findings into systematic themes that can inform design considerations for future systems.
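
The gist of concept-level explanation at different granularities can be shown with a toy, hedged Python sketch: hypothetical low-level feature attributions are summed into concept scores under an assumed fine-grained grouping and an assumed coarse-grained grouping. The feature names, groupings, and values are illustrative assumptions, not the paper's elicitation pipeline or data.

# Toy aggregation of low-level feature attributions into concept-level explanations.
# All names and numbers below are hypothetical.
feature_attributions = {
    "petal_length": 0.42, "petal_width": 0.31,
    "sepal_length": 0.08, "sepal_width": 0.05,
}

# Fine-grained concepts group a few related features; a coarse concept groups them all.
fine_concepts = {
    "petal shape": ["petal_length", "petal_width"],
    "sepal shape": ["sepal_length", "sepal_width"],
}
coarse_concepts = {"flower geometry": list(feature_attributions)}

def concept_scores(concepts):
    # Sum the attributions of a concept's member features to get a concept-level score.
    return {name: sum(feature_attributions[f] for f in feats)
            for name, feats in concepts.items()}

print("fine-grained:  ", concept_scores(fine_concepts))
print("coarse-grained:", concept_scores(coarse_concepts))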

Authors
Swati Mishra
Cornell University, Ithaca, New York, United States
Jeffrey M. Rzeszotarski
Cornell University, Ithaca, New York, United States
Paper URL

https://doi.org/10.1145/3449213

Video
Trkic G00gle: Why and How Users Game Translation Algorithms
Abstract

Individuals interact with algorithms in various ways; users even game and circumvent algorithms to achieve favorable outcomes. This study aims to understand how various stakeholders interact with each other when tricking algorithms, with a focus on online review communities. We employed a mixed-method approach to explore how and why users write machine non-translatable reviews, as well as how those encrypted messages are perceived by their recipients. We found that users devise tactics to trick the algorithms in order to avoid censorship, mitigate interpersonal burden, protect privacy, and provide authentic information that enables the formation of informative review communities, applying several linguistic and social strategies to do so. Furthermore, users perceive encrypted messages as both more trustworthy and more authentic. Based on these findings, we discuss implications for online review communities and content moderation algorithms.

Authors
Soomin Kim
Seoul National University, Seoul, Korea, Republic of
Changhoon Oh
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Won Ik Cho
Seoul National University, Seoul, Korea, Republic of
Donghoon Shin
Seoul National University, Seoul, Korea, Republic of
Bongwon Suh
Seoul National University, Seoul, Korea, Republic of
Joonhwan Lee
Seoul National University, Seoul, Korea, Republic of
Paper URL

https://doi.org/10.1145/3476085

Video
Co-Designing AI Literacy Exhibits for Informal Learning Spaces
Abstract

AI is becoming increasingly integrated in common technologies, which suggests that designing learning experiences for audiences seeking a "casual" understanding of AI—i.e., understanding how a search engine works, not necessarily how to program one—is an increasingly important design space. Informal learning spaces like museums are particularly well-suited for such public science communication efforts, but there is little research investigating how to design AI learning experiences for these spaces. This paper explores how to design museum experiences that communicate key concepts about AI, using collaboration, creativity, and embodiment as inspirations for design. We present the design of five low-fidelity AI literacy exhibit prototypes and results from a thematic analysis of participant interactions during a co-design workshop in which family groups interacted with the prototypes and designed exhibits of their own. Our findings suggest new topics and design considerations for AI-related exhibits and directions for future research.

Authors
Duri Long
Georgia Institute of Technology, Atlanta, Georgia, United States
Takeria S. Blunt
Georgia Institute of Technology, Atlanta, Georgia, United States
Brian Magerko
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3476034

Video
To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making
Abstract

People supported by AI-powered decision support tools frequently overrely on the AI: they accept an AI's suggestion even when that suggestion is wrong. Adding explanations to the AI decisions does not appear to reduce the overreliance and some studies suggest that it might even increase it. Informed by the dual-process theory of cognition, we posit that people rarely engage analytically with each individual AI recommendation and explanation, and instead develop general heuristics about whether and when to follow the AI suggestions. Building on prior research on medical decision-making, we designed three cognitive forcing interventions to compel people to engage more thoughtfully with the AI-generated explanations. We conducted an experiment (N=199) in which we compared our three cognitive forcing designs to two simple explainable AI approaches and to a no-AI baseline. The results demonstrate that cognitive forcing significantly reduced overreliance compared to the simple explainable AI approaches. However, there was a trade-off: people assigned the least favorable subjective ratings to the designs that reduced the overreliance the most. To audit our work for intervention-generated inequalities, we investigated whether our interventions equally benefited people with different levels of Need for Cognition (i.e., motivation to engage in effortful mental activities). Our results show that, on average, cognitive forcing interventions benefited participants higher in Need for Cognition more. Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.
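
For concreteness, one commonly discussed cognitive forcing pattern is to have the decision-maker commit to their own judgment before the AI recommendation and its explanation are revealed. The Python sketch below is a hypothetical illustration of that interaction flow under assumed names and an assumed loan-screening example; it is not the paper's experimental interface or necessarily one of its three specific designs.

# Hypothetical command-line illustration of a cognitive forcing function:
# the user records an independent decision before the AI suggestion is shown.
def forced_decision(case, ai_prediction, ai_explanation):
    print(f"Case to decide: {case}")
    own = input("Your decision BEFORE seeing the AI (approve/deny): ").strip().lower()

    # Only after the user commits is the AI output revealed.
    print(f"AI suggests: {ai_prediction} (explanation: {ai_explanation})")
    final = input("Your FINAL decision (approve/deny): ").strip().lower()

    switched_to_ai = (own != ai_prediction and final == ai_prediction)
    return final, switched_to_ai

if __name__ == "__main__":
    decision, switched = forced_decision(
        case="Loan application #42 (hypothetical)",
        ai_prediction="approve",
        ai_explanation="income and repayment history weighted most heavily",
    )
    print("Final:", decision, "| switched to the AI after initially disagreeing:", switched)

The friction is placed before the AI output is shown, which is what distinguishes a cognitive forcing function from simply attaching an explanation to the recommendation.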

Authors
Zana Buçinca
Harvard University, Cambridge, Massachusetts, United States
Maja Barbara Malaya
Institute of Applied Computer Science, Lodz, Poland
Krzysztof Z. Gajos
Harvard University, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3449287

Video