Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute
Description

Local governments increasingly use artificial intelligence (AI) for automated decision-making. Contestability, making systems responsive to dispute, is a way to ensure they respect human rights to autonomy and dignity. We investigate the design of public urban AI systems for contestability through the example of camera cars: human-driven vehicles equipped with image sensors. Applying a provisional framework for contestable AI, we use speculative design to create a concept video of a contestable camera car. Using this concept video, we then conduct semi-structured interviews with 17 civil servants employed by a large northwestern European city who work with AI. The resulting data is analyzed using reflexive thematic analysis to identify the main challenges facing the implementation of contestability in public AI. We describe how civic participation faces issues of representation, how public AI systems should integrate with existing democratic practices, and how cities must expand their capacities for responsible AI development and operation.

The Halting problem: Video analysis of self-driving cars in traffic
Description

Using publicly uploaded videos of the Waymo and Tesla FSD self-driving cars, this paper documents how self-driving vehicles still struggle with some basics of road interaction. To drive safely, self-driving cars need to interact with other road users in traffic. Yet traffic is a complex, long-established social domain. We focus on one core element of road interaction: when road users yield for each other. Yielding – slowing down for others in traffic – involves communication between different road users to decide who will ‘go’ and who will ‘yield’. Videos of the Waymo and Tesla FSD self-driving cars show how these systems fail both to yield for others and to go when yielded to. In discussion, we explore how these ‘problems’ illustrate not only the complexity of designing for road interaction, but also how the broader space of physical machine/human social interactions can be designed for.

Faulty or Ready? Handling Failures in Deep-Learning Computer Vision Models until Deployment: A Study of Practices, Challenges, and Needs
Description

Handling failures in computer vision systems that rely on deep learning models remains a challenge. While an increasing number of methods for bug identification and correction are proposed, little is known about how practitioners actually search for failures in these models. We perform an empirical study to understand the goals and needs of practitioners, the workflows and artifacts they use, and the challenges and limitations in their process. We interview 18 practitioners, probing them with a carefully crafted failure-handling scenario. We observe that there is great diversity in failure-handling workflows, in which cooperation is often necessary; that practitioners overlook certain types of failures and bugs; and that they generally do not rely on potentially relevant approaches and tools originally stemming from research. These insights allow us to draw up a list of research opportunities, such as creating a library of best practices and more representative formalisations of practitioners' goals, developing interfaces to exploit failure-handling artifacts, and providing specialized training.

fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks
Description

To design with AI models, user experience (UX) designers must assess the fit between the model and user needs. Based on user research, they need to contextualize the model's behavior and potential failures within their product-specific data instances and user scenarios. However, our formative interviews with ten UX professionals revealed that such proactive discovery of model limitations is challenging and time-intensive. Furthermore, designers often lack technical knowledge of AI and accessible exploration tools, which challenges their understanding of model capabilities and limitations. In this work, we introduce a failure-driven design approach to AI, a workflow that encourages designers to explore model behavior and failure patterns early in the design process. The implementation of fAIlureNotes, a designer-centered failure exploration and analysis tool, supports designers in evaluating models and identifying failures across diverse user groups and scenarios. Our evaluation with UX practitioners shows that fAIlureNotes outperforms today's interactive model cards in assessing context-specific model performance.

Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience
Description

Despite the widespread use of artificial intelligence (AI), designing user experiences (UX) for AI-powered systems remains challenging. UX designers face hurdles understanding AI technologies, such as pre-trained language models, as design materials. This limits their ability to ideate and make decisions about whether, where, and how to use AI. To address this problem, we bridge the literature on AI design and AI transparency to explore whether and how frameworks for transparent model reporting can support design ideation with pre-trained models. By interviewing 23 UX practitioners, we find that practitioners frequently work with pre-trained models, but lack support for UX-led ideation. Through a scenario-based design task, we identify common goals that designers seek model understanding for and pinpoint their model transparency information needs. Our study highlights the pivotal role that UX designers can play in Responsible AI and calls for supporting their understanding of AI limitations through model transparency and interrogation.

A Mixed-Methods Approach to Understanding User Trust after Voice Assistant Failures
Description

Despite huge gains in natural language understanding performance from large language models in recent years, voice assistants still often fail to meet user expectations. In this study, we conducted a mixed-methods analysis of how voice assistant failures affect users' trust in their voice assistants. To illustrate how users have experienced these failures, we contribute a crowdsourced dataset of 199 voice assistant failures, categorized across 12 failure sources. Relying on interview and survey data, we find that certain failures, such as those due to overcapturing users' input, derail user trust more than others. We additionally examine how failures impact users' willingness to rely on voice assistants for future tasks. Users often stop using their voice assistants, for a short period of time, for the specific tasks that resulted in failures before resuming similar usage. We demonstrate the importance of low-stakes tasks, such as playing music, toward building trust after failures.
