Explainable, Responsible, Manageable AI

Conference Name
CHI 2023
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Abstract

Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.

Award
Honorable Mention
Authors
Sunnie S. Y. Kim
Princeton University, Princeton, New Jersey, United States
Elizabeth A. Watkins
Intel Labs, Santa Clara, California, United States
Olga Russakovsky
Princeton University, Princeton, New Jersey, United States
Ruth Fong
Princeton University, Princeton, New Jersey, United States
Andrés Monroy-Hernández
Princeton University, Princeton, New Jersey, United States
Paper URL

https://doi.org/10.1145/3544548.3581001

Video
Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence
Abstract

Biases in Artificial Intelligence (AI) systems or their results are one important issue that demands AI explainability. Despite the prevalence of AI applications, the general public are not necessarily equipped with the ability to understand how the black-box algorithms work and how to deal with biases. To inform designs for explainable AI (XAI), we conducted in-depth interviews with major stakeholders, both end-users (n = 24) and engineers (n = 15), to investigate how they made sense of AI applications and the associated biases according to situations of high and low stakes. We discussed users’ perceptions and attributions about AI biases and their desired levels and types of explainability. We found that personal relevance and boundaries as well as the level of stake are two major dimensions for developing user trust especially during biased situations and informing XAI designs.

Authors
Chien Wen (Tina) Yuan
National Taiwan Normal University, Taipei City, Taiwan
Nanyi Bi
National Taiwan University, Taipei, Taiwan
Ya-Fang Lin
Penn State University, State College, Pennsylvania, United States
Yuan Hsien Tseng
National Taiwan Normal University, Taipei City, Taiwan
Paper URL

https://doi.org/10.1145/3544548.3580945

Video
“It is currently hodgepodge”: Examining AI/ML Practitioners’ Challenges during Co-production of Responsible AI Values
Abstract

Recently, the AI/ML research community has indicated an urgent need to establish Responsible AI (RAI) values and practices as part of the AI/ML lifecycle. Several organizations and communities are responding to this call by sharing RAI guidelines. However, there are gaps in awareness, deliberation, and execution of such practices for multi-disciplinary ML practitioners. This work contributes to the discussion by unpacking co-production challenges faced by practitioners as they align their RAI values. We interviewed 23 individuals, across 10 organizations, tasked to ship AI/ML based products while upholding RAI norms and found that both top-down and bottom-up institutional structures create burden for different roles preventing them from upholding RAI values, a challenge that is further exacerbated when executing conflicted values. We share multiple value levers used as strategies by the practitioners to resolve their challenges. We end our paper with recommendations for inclusive and equitable RAI value-practices, creating supportive organizational structures and opportunities to further aid practitioners.

Authors
Rama Adithya Varanasi
Cornell University, Ithaca, New York, United States
Nitesh Goyal
Google Research, New York, New York, United States
Paper URL

https://doi.org/10.1145/3544548.3580903

Video
AutoML in The Wild: Obstacles, Workarounds, and Expectations
Abstract

Automated machine learning (AutoML) is envisioned to make ML techniques accessible to ordinary users. Recent work has investigated the role of humans in enhancing AutoML functionality throughout a standard ML workflow. However, it is also critical to understand how users adopt existing AutoML solutions in complex, real-world settings from a holistic perspective. To fill this gap, this study conducted semi-structured interviews of AutoML users (N = 19) focusing on understanding (1) the limitations of AutoML encountered by users in their real-world practices, (2) the strategies users adopt to cope with such limitations, and (3) how the limitations and workarounds impact their use of AutoML. Our findings reveal that users actively exercise user agency to overcome three major challenges arising from customizability, transparency, and privacy. Furthermore, users make cautious decisions about whether and how to apply AutoML on a case-by-case basis. Finally, we derive design implications for developing future AutoML solutions.

Award
Honorable Mention
Authors
Yuan Sun
Pennsylvania State University, University Park, Pennsylvania, United States
Qiurong Song
Pennsylvania State University, University Park, Pennsylvania, United States
Xinning Gui
Pennsylvania State University, University Park, Pennsylvania, United States
Fenglong Ma
Pennsylvania State University, State College, Pennsylvania, United States
Ting Wang
Pennsylvania State University, University Park, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3544548.3581082

Algorithmic Power or Punishment: Information Worker Perspectives on Passive Sensing Enabled AI Phenotyping of Performance and Wellbeing
Abstract

We are witnessing an emergence in Passive Sensing enabled AI (PSAI) to provide dynamic insights for performance and wellbeing of information workers. Hybrid work paradigms have simultaneously created new opportunities for PSAI, but have also fostered anxieties of misuse and privacy intrusions within a power asymmetry. At this juncture, it is unclear if those who are sensed can find these systems acceptable. We conducted scenario-based interviews of 28 information workers to highlight their perspectives as data subjects in PSAI. We unpack their expectations using the Contextual Integrity framework of privacy and information gathering. Participants described appropriateness of PSAI based on its impact on job consequences, work-life boundaries, and preservation of flexibility. They perceived that PSAI inferences could be shared with selected stakeholders if they could negotiate the algorithmic inferences. Our findings help envision worker-centric approaches to implementing PSAI as an empowering tool in the future of work.

Authors
Vedant Das Swain
Georgia Institute of Technology, Atlanta, Georgia, United States
Lan Gao
Georgia Institute of Technology, Atlanta, Georgia, United States
William A. Wood
Georgia Institute of Technology, Atlanta, Georgia, United States
Srikruthi C Matli
Georgia Institute of Technology, Atlanta, Georgia, United States
Gregory D. Abowd
Northeastern University, Boston, Massachusetts, United States
Munmun De Choudhury
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3544548.3581376

Video
Designing Responsible AI: Adaptations of UX Practice to Meet Responsible AI Challenges
Abstract

Technology companies continue to invest in efforts to incorporate responsibility in their Artificial Intelligence (AI) advancements, while efforts to audit and regulate AI systems expand. This shift towards Responsible AI (RAI) in the tech industry necessitates new practices and adaptations to roles—undertaken by a variety of practitioners in more or less formal positions, many of whom focus on the user-centered aspects of AI. To better understand practices at the intersection of user experience (UX) and RAI, we conducted an interview study with industrial UX practitioners and RAI subject matter experts, both of whom are actively involved in addressing RAI concerns throughout the early design and development of new AI-based prototypes, demos, and products, at a large technology company. Many of the specific practices and their associated challenges have yet to be surfaced in the literature, and distilling them offers a critical view into how practitioners’ roles are adapting to meet present-day RAI challenges. We present and discuss three emerging practices in which RAI is being enacted and reified in UX practitioners’ everyday work. We conclude by arguing that the emerging practices, goals, and types of expertise that surfaced in our study point to an evolution in praxis, with associated challenges that suggest important areas for further research in HCI.

Authors
Qiaosi Wang
Georgia Institute of Technology, Atlanta, Georgia, United States
Michael Madaio
Google Research, New York, New York, United States
Shaun Kane
Google Research, Boulder, Colorado, United States
Shivani Kapania
Google Research, Bengaluru, India
Michael Terry
Google Research, Cambridge, Massachusetts, United States
Lauren Wilcox
Google Research, Mountain View, California, United States
Paper URL

https://doi.org/10.1145/3544548.3581278

Video