89. Explainable, Responsible, Manageable AI (no presentations)

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Description

Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.

Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence
Description

Biases in Artificial Intelligence (AI) systems or their results are an important issue that demands AI explainability. Despite the prevalence of AI applications, the general public is not necessarily equipped to understand how black-box algorithms work or how to deal with their biases. To inform designs for explainable AI (XAI), we conducted in-depth interviews with major stakeholders, both end-users (n = 24) and engineers (n = 15), to investigate how they made sense of AI applications and the associated biases in high- and low-stakes situations. We discuss users' perceptions of and attributions about AI biases, along with their desired levels and types of explainability. We found that personal relevance and boundaries, as well as the level of stakes, are two major dimensions for developing user trust, especially in biased situations, and for informing XAI designs.

“It is currently hodgepodge”: Examining AI/ML Practitioners’ Challenges during Co-production of Responsible AI Values
Description

Recently, the AI/ML research community has indicated an urgent need to establish Responsible AI (RAI) values and practices as part of the AI/ML lifecycle. Several organizations and communities are responding to this call by sharing RAI guidelines. However, there are gaps in the awareness, deliberation, and execution of such practices among multi-disciplinary ML practitioners. This work contributes to the discussion by unpacking the co-production challenges practitioners face as they align their RAI values. We interviewed 23 individuals across 10 organizations, tasked with shipping AI/ML-based products while upholding RAI norms, and found that both top-down and bottom-up institutional structures create burdens for different roles, preventing them from upholding RAI values, a challenge further exacerbated when executing conflicting values. We share multiple value levers that practitioners use as strategies to resolve these challenges. We end the paper with recommendations for inclusive and equitable RAI value practices, for creating supportive organizational structures, and for opportunities to further aid practitioners.

AutoML in The Wild: Obstacles, Workarounds, and Expectations
Description

Automated machine learning (AutoML) is envisioned to make ML techniques accessible to ordinary users. Recent work has investigated the role of humans in enhancing AutoML functionality throughout a standard ML workflow. However, it is also critical to understand, from a holistic perspective, how users adopt existing AutoML solutions in complex, real-world settings. To fill this gap, we conducted semi-structured interviews with AutoML users (N = 19), focusing on (1) the limitations of AutoML that users encounter in real-world practice, (2) the strategies users adopt to cope with those limitations, and (3) how the limitations and workarounds affect their use of AutoML. Our findings reveal that users actively exercise agency to overcome three major challenges arising from customizability, transparency, and privacy. Furthermore, users make cautious, case-by-case decisions about whether and how to apply AutoML. Finally, we derive design implications for developing future AutoML solutions.
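
The paper does not tie its findings to a specific tool, but a minimal sketch of a typical AutoML workflow helps ground the customizability and transparency concerns its interviewees raise. The example below uses the open-source FLAML library purely as a representative AutoML API; the dataset, time budget, and printed attributes are illustrative assumptions, not details drawn from the study.

```python
# A minimal, illustrative AutoML run (FLAML chosen as a representative
# open-source library; not necessarily a tool studied in the paper).
from flaml import AutoML
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
# The whole search is driven by a few top-level knobs; the customizability
# concerns in the study arise because finer-grained control (e.g., custom
# preprocessing) has to happen outside this single call.
automl.fit(
    X_train=X_train,
    y_train=y_train,
    task="classification",
    time_budget=60,  # seconds of search; illustrative value
)

# Transparency is largely limited to summary attributes such as the name
# of the chosen learner, one reason users resort to workarounds.
print(automl.best_estimator)
print(accuracy_score(y_test, automl.predict(X_test)))
```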

Algorithmic Power or Punishment: Information Worker Perspectives on Passive Sensing Enabled AI Phenotyping of Performance and Wellbeing
Description

We are witnessing the emergence of Passive Sensing-enabled AI (PSAI) that provides dynamic insights into the performance and wellbeing of information workers. Hybrid work paradigms have created new opportunities for PSAI, but have also fostered anxieties about misuse and privacy intrusions within a power asymmetry. At this juncture, it is unclear whether those who are sensed can find these systems acceptable. We conducted scenario-based interviews with 28 information workers to highlight their perspectives as data subjects in PSAI. We unpack their expectations using the Contextual Integrity framework of privacy and information gathering. Participants described the appropriateness of PSAI in terms of its impact on job consequences, work-life boundaries, and the preservation of flexibility. They perceived that PSAI inferences could be shared with selected stakeholders if they could negotiate the algorithmic inferences. Our findings help envision worker-centric approaches to implementing PSAI as an empowering tool in the future of work.

Designing Responsible AI: Adaptations of UX Practice to Meet Responsible AI Challenges
Description

Technology companies continue to invest in efforts to incorporate responsibility into their Artificial Intelligence (AI) advancements, while efforts to audit and regulate AI systems expand. This shift towards Responsible AI (RAI) in the tech industry necessitates new practices and adaptations to roles, undertaken by a variety of practitioners in more or less formal positions, many of whom focus on the user-centered aspects of AI. To better understand practices at the intersection of user experience (UX) and RAI, we conducted an interview study at a large technology company with industrial UX practitioners and RAI subject matter experts, both of whom are actively involved in addressing RAI concerns throughout the early design and development of new AI-based prototypes, demos, and products. Many of the specific practices and their associated challenges have yet to be surfaced in the literature, and distilling them offers a critical view into how practitioners' roles are adapting to meet present-day RAI challenges. We present and discuss three emerging practices through which RAI is being enacted and reified in UX practitioners' everyday work. We conclude by arguing that the emerging practices, goals, and types of expertise surfaced in our study point to an evolution in praxis, with associated challenges that suggest important areas for further research in HCI.
