Start time
Immediately after the previous session
Number of presentations
14
Duration
10 minutes
Table of Contents

Concluded Study Session

This study session has concluded. Thank you for your participation.

Mind The Gap: Designers and Standards on Algorithmic System Transparency for Users
Description

Many call for algorithmic systems to be more transparent, yet it is often unclear to designers how to do so in practice. Standards are emerging that aim to support designers in building transparent systems, e.g., by setting testable transparency levels, but their efficacy in this regard is not yet understood. In this paper, we use the 'Standard for Transparency of Autonomous Systems' (IEEE 7001) to explore designers' understanding of algorithmic system transparency, and the degree to which their perspectives align with the standard's recommendations. Our mixed-methods study reveals that participants consider transparency important and difficult to implement, and that they welcome support. However, despite IEEE 7001's potential, many did not find its recommendations particularly appropriate. Given the importance of and increasing attention on transparency, and because standards like this purport to guide system design, our findings reveal the need for 'bridging the gap' through (i) raising designers' awareness of the importance of algorithmic system transparency, alongside (ii) better engagement between stakeholders (i.e., standards bodies, designers, and users). We further identify opportunities for developing transparency best practices as a means to help drive more responsible systems going forward.

I lose vs. I earn: Consumer perceived price fairness toward algorithmic (vs. human) price discrimination
Description

Many companies are turning to algorithms to determine prices. However, little research has investigated consumers' perceived price fairness when price discrimination is implemented by either a human or an algorithm. The results of two experiments with a 2 (price-setting agent: algorithm vs. human) × 2 (price discrimination: advantaged vs. disadvantaged) between-subjects design reveal that consumers perceive disadvantaged price discrimination as more unfair when it is implemented by a human (vs. an algorithm). Conversely, they perceive advantaged price discrimination as more unfair when it is implemented by an algorithm (vs. a human). This difference is caused by distinct attribution processes. Consumers are more likely to externalize disadvantaged price discrimination implemented by a human than by an algorithm (i.e., attributing it to the unintentionality of the price-setting agent), while they are more likely to internalize advantaged price discrimination implemented by a human than by an algorithm (i.e., attributing it to perceived personal luck). Based on these findings, we discuss how designers and managers can design and utilize algorithms to implement price discrimination in ways that reduce consumers' perception of price unfairness. We believe that reasonable disclosure of algorithmic clues to consumers can maximize the benefits of price discrimination strategies.

Towards a Non-Ideal Methodological Framework for Responsible ML
Description

Though ML practitioners increasingly employ various Responsible ML (RML) strategies, their methodological approach in practice is still unclear. In particular, the constraints, assumptions, and choices of practitioners with technical duties (such as developers, engineers, and data scientists) are often implicit, subtle, and under-scrutinized in HCI and related fields. We interviewed 22 technically oriented ML practitioners across seven domains to understand the characteristics of their methodological approaches to RML through the lens of ideal and non-ideal theorizing of fairness. We find that practitioners' methodological approaches fall along a spectrum of idealization. While they structured their approaches through ideal theorizing, such as by abstracting the RML workflow from inquiry into the applicability of ML, they neither systematically documented nor paid deliberate attention to their non-ideal approaches, such as diagnosing imperfect conditions. We end our paper with a discussion of a new methodological approach, inspired by elements of non-ideal theory, to structure technical practitioners' RML process and facilitate collaboration with other stakeholders.

(Beyond) Reasonable Doubt: Challenges that Public Defenders Face in Scrutinizing AI in Court
Description

Accountable use of AI systems in high-stakes settings relies on making systems contestable. In this paper we study efforts to contest AI systems in practice by examining how public defenders scrutinize AI in court. We present findings from interviews with 17 people in the U.S. public defense community to understand their perceptions of and experiences scrutinizing computational forensic software (CFS): automated decision systems that the government uses to convict and incarcerate, such as facial recognition, gunshot detection, and probabilistic genotyping tools. We find that our participants faced challenges assessing and contesting CFS reliability due to difficulties (a) navigating how CFS is developed and used, (b) overcoming judges' and jurors' non-critical perceptions of CFS, and (c) gathering CFS expertise. To conclude, we provide recommendations that center the technical, social, and institutional context to better position interventions such as performance evaluations to support contestability in practice.

“The bus is nothing without us”: Making Visible the Labor of Bus Operators amid the Ongoing Push Towards Transit Automation
Description

This paper describes how the complexity of circumstances bus operators manage presents unique challenges to the feasibility of high-level automation in public transit. Avoiding an overly rationalized view of bus operators' labor is critical to ensure the introduction of automation technologies does not compromise public well-being, the dignity of transit workers, or the integrity of critical public infrastructure. Our findings from a group interview study show that bus operators take on work, undervalued by those advancing automation technologies, to ensure the well-being of passengers and community members. Notably, bus operators are positioned to function as shock absorbers during social crises in their communities and in moments of technological breakdown as new systems come on board. These roles present a critical argument against the rapid push toward driverless automation in public transit. We conclude by identifying opportunities for participatory design and collaborative human-machine teaming for a more just future of transit.

Care-Based Eco-Feedback Augmented with Generative AI: Fostering Pro-Environmental Behavior through Emotional Attachment
Description

Lights out! With the escalating climate crisis, eco-feedback has gained prominence over the last decade. However, traditional approaches may underperform because they often rely on data-driven strategies and assume that people only need additional information about their consumption to change their behavior. A proposed path to overcome this issue is to design eco-feedback that fosters emotional connections with users. However, little is known about the effectiveness of such designs. In this paper, we propose a novel care-based eco-feedback system. Central to the system is a Tamagotchi-inspired digital character named INFI that draws its life force from the user's energy savings. Additionally, we harness the latest advancements in generative artificial intelligence to enhance emotional attachment through conversational interactions that users can have with INFI. The results of a randomized controlled experiment (N=420) show that this design increases emotional attachment, which in turn increases energy-saving behavior.
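
To make the care-based mechanic concrete, here is a minimal Python sketch of the feedback loop the abstract describes: a character whose life force is derived from the user's energy savings, plus a prompt a generative model could use to speak in character. This is not the authors' implementation; the baseline/actual consumption inputs, the 0.2 scaling factor, and the mood thresholds are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' system): a digital character
# whose "life force" tracks the user's energy savings, as the abstract
# describes. The scaling factor and mood thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class CareCharacter:
    name: str = "INFI"
    life_force: float = 0.5  # 0.0 = depleted, 1.0 = thriving

    def update(self, baseline_kwh: float, actual_kwh: float) -> None:
        """Nudge life force up or down in proportion to energy saved."""
        savings_ratio = (baseline_kwh - actual_kwh) / baseline_kwh
        self.life_force = min(1.0, max(0.0, self.life_force + 0.2 * savings_ratio))

    def persona_prompt(self) -> str:
        """System prompt a generative model could use to reply in character."""
        if self.life_force > 0.7:
            mood = "thriving"
        elif self.life_force > 0.3:
            mood = "doing okay"
        else:
            mood = "fading"
        return (f"You are {self.name}, a digital companion whose life force "
                f"comes from the user's energy savings. You are {mood}. "
                f"Respond warmly and encourage saving energy.")

# Example: the user consumed 9 kWh against a 10 kWh baseline, so INFI grows.
infi = CareCharacter()
infi.update(baseline_kwh=10.0, actual_kwh=9.0)
print(infi.persona_prompt())
```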

DeepTreeSketch: Neural Graph Prediction for Faithful 3D Tree Modeling from Sketches
Description

We present DeepTreeSketch, a novel AI-assisted sketching system that enables users to create realistic 3D tree models from 2D freehand sketches. Our system leverages a tree graph prediction network, TGP-Net, to learn the underlying structural patterns of trees from a large collection of 3D tree models. The TGP-Net simulates the iterative growth of botanical trees and progressively constructs the 3D tree structures in a bottom-up manner. Furthermore, our system supports a flexible sketching mode for both precise and coarse control of tree shapes, by drawing branch strokes and foliage strokes, respectively. Combined with a procedural generation strategy, this lets users freely control foliage propagation with diverse, fine details. We demonstrate the expressiveness, efficiency, and usability of our system through various experiments and user studies. Our system offers a practical tool for 3D tree creation, especially for natural scenes in games, movies, and landscape applications.
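
As an illustration of the bottom-up, iterative growth process described above (not TGP-Net itself), the following Python sketch shows how a tree graph can be constructed level by level, with the learned network abstracted behind a `predict_children` callback. The node representation, depth cap, and toy predictor are assumptions for illustration.

```python
# Illustrative sketch (not TGP-Net): grow a tree graph bottom-up, one level
# per iteration, with the learned model abstracted as `predict_children`.
from dataclasses import dataclass, field

@dataclass
class BranchNode:
    position: tuple[float, float, float]            # assumed node attribute
    children: list["BranchNode"] = field(default_factory=list)

def grow_tree(root: BranchNode, predict_children, max_depth: int = 5) -> BranchNode:
    """Expand the graph level by level, mimicking iterative botanical growth."""
    frontier = [root]
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            node.children = predict_children(node)  # the learned step in the paper
            next_frontier.extend(node.children)
        if not next_frontier:  # the model predicted no further growth
            break
        frontier = next_frontier
    return root

# Toy stand-in for the network: two children per node, stopping at height 2.
def toy_predict(node: BranchNode) -> list[BranchNode]:
    x, y, z = node.position
    if y >= 2:
        return []
    return [BranchNode((x - 0.5, y + 1, z)), BranchNode((x + 0.5, y + 1, z))]

tree = grow_tree(BranchNode((0.0, 0.0, 0.0)), toy_predict)
```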

Amplifying Human Capabilities in Prostate Cancer Diagnosis: An Empirical Study of Current Practices and AI Potentials in Radiology
Description

This paper examines the potential of Human-Centered AI (HCAI) solutions to support radiologists in diagnosing prostate cancer. Prostate cancer is one of the most prevalent cancers among men, and its incidence is increasing. The scarcity of radiologists raises concerns about the profession's ability to meet the growing demand for prostate cancer diagnosis, and has led to a significant surge in radiologists' workloads. Drawing on an HCAI approach, we sought to understand current practices in radiologists' work on detecting and diagnosing prostate cancer, as well as the challenges they face. The findings from our empirical studies point toward the potential of AI to expedite informed decision-making and enhance accuracy, efficiency, and consistency. This is particularly beneficial for collaborative prostate cancer diagnosis processes. We discuss these results and introduce design recommendations and HCAI concepts for the domain of prostate cancer diagnosis, with the aim of amplifying the professional capabilities of radiologists.

Data Ethics Emergency Drill: A Toolbox for Discussing Responsible AI for Industry Teams
Description

Researchers urge technology practitioners such as data scientists to consider the impacts and ethical implications of algorithmic decisions. However, unlike programming, statistics, and data management, discussion of ethical implications is rarely included in standard data science training. To begin to address this gap, we designed and tested a toolbox called the Data Ethics Emergency Drill (DEED) to help data science teams discuss and reflect on the ethical implications of their work. The DEED is a roleplay of a fictional ethical emergency scenario that is contextually situated in the team's specific workplace and applications. This paper outlines the DEED toolbox and describes three studies, carried out with two different data science teams, that iteratively shaped its design. Our findings show that practitioners can apply lessons learnt from the roleplay to real-life situations, and that the DEED opened up conversations around ethics and values.

JupyterLab in Retrograde: Contextual Notifications That Highlight Fairness and Bias Issues for Data Scientists
Description

Current algorithmic fairness tools focus on auditing completed models, neglecting the potential downstream impacts of iterative decisions about cleaning data and training machine learning models. In response, we developed Retrograde, a JupyterLab environment extension for Python that generates real-time, contextual notifications for data scientists about decisions they are making regarding protected classes, proxy variables, missing data, and demographic differences in model performance. Our novel framework uses automated code analysis to trace data provenance in JupyterLab, enabling these notifications. In a between-subjects online experiment, 51 data scientists constructed loan-decision models with Retrograde providing notifications continuously throughout the process, only at the end, or never. Retrograde's notifications successfully nudged participants to account for missing data, avoid using protected classes as predictors, minimize demographic differences in model performance, and exhibit healthy skepticism about their models.
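
The kinds of checks Retrograde surfaces can be pictured with a small, self-contained sketch. This is not the extension's actual code; the protected-class list and the 5%/10% thresholds are illustrative assumptions.

```python
# Minimal sketch of Retrograde-style checks (not the extension's code):
# warn about protected-class predictors, missing data, and demographic gaps
# in model performance. Column names and thresholds are assumptions.
import pandas as pd

PROTECTED_CLASSES = {"race", "sex", "gender", "age", "religion"}  # assumed list

def audit_features(df: pd.DataFrame, feature_cols: list[str]) -> list[str]:
    """Return warnings about the predictors a data scientist selected."""
    warnings = []
    for col in feature_cols:
        if col.lower() in PROTECTED_CLASSES:
            warnings.append(f"'{col}' is a protected class used as a predictor.")
        missing = df[col].isna().mean()
        if missing > 0.05:  # illustrative threshold
            warnings.append(f"'{col}' is missing in {missing:.0%} of rows.")
    return warnings

def audit_group_performance(df: pd.DataFrame, group_col: str,
                            y_true: str, y_pred: str) -> list[str]:
    """Warn when per-group accuracy diverges noticeably."""
    correct = df[y_true] == df[y_pred]
    acc = correct.groupby(df[group_col]).mean()
    gap = acc.max() - acc.min()
    if gap > 0.10:  # illustrative threshold
        return [f"Accuracy differs by {gap:.0%} across '{group_col}' groups."]
    return []
```

In the actual system these checks run continuously via automated code analysis and data-provenance tracing inside JupyterLab; the sketch only shows the shape of the notifications they would produce.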

Understanding Contestability on the Margins: Implications for the Design of Algorithmic Decision-making in Public Services
Description

Policymakers have established that the ability to contest decisions made by or with algorithms is core to responsible artificial intelligence (AI). However, there has been a disconnect between research on contestability of algorithms, and what the situated practice of contestation looks like in contexts across the world, especially amongst communities on the margins. We address this gap through a qualitative study of follow-up and contestation in accessing public services for land ownership in rural India and affordable housing in the urban United States. We find there are significant barriers to exercising rights and contesting decisions, which intermediaries like NGO workers or lawyers work with communities to address. We draw on the notion of accompaniment in global health to highlight the open-ended work required to support people in navigating violent social systems. We discuss the implications of our findings for key aspects of contestability, including building capacity for contestation, human review, and the role of explanations. We also discuss how sociotechnical systems of algorithmic decision-making can embody accompaniment by taking on a higher burden of preventing denials and enabling contestation.

In-Between Visuals and Visible: The Impacts of Text-to-Image Generative AI Tools on Digital Image-making Practices in the Global South
Description

This paper joins the growing body of HCI work on critical AI studies and focuses on the impact of Generative Artificial Intelligence (GAI) tools in Bangladesh. While the West has started to examine the limitations and risks associated with these tools, their impacts on the Global South have remained understudied. Based on our interviews, focus group discussions (FGDs), and a social media-based qualitative study, this paper reports how popular text-to-image GAI tools (e.g., DALL-E, Midjourney, Stable Diffusion, Firefly) are affecting various image-related local creative fields. We report how these tools limit the creative explorations of marginal artists, struggle to understand linguistic nuances, fail to generate local forms of art and architecture, and misrepresent the diversity of citizens in the image production process. Drawing on a rich body of work on critical image theory, postcolonial computing, and design politics, we explain how our findings are pertinent to HCI's broader interest in social justice, decolonization, and global development.

Towards Building Condition-Based Cross-Modality Intention-Aware Human-AI Cooperation under VR Environment
Description

To address critical challenges in effectively identifying user intent and forming relevant information presentations and recommendations in VR environments, we propose an innovative condition-based multi-modal human-AI cooperation framework. It highlights intent tuples (intent, condition, intent prompt, action prompt) and a 2-Large-Language-Models (2-LLMs) architecture. This design utilizes "condition" as the core to describe tasks, dynamically matches user interactions with intentions, and empowers the generation of various tailored multi-modal AI responses. The 2-LLMs architecture separates the roles of intent detection and action generation, decreasing prompt length and helping generate appropriate responses. We implemented a VR-based intelligent furniture purchasing system based on the proposed framework and conducted a three-phase comparative user study. The results conclusively demonstrate the system's superiority in time efficiency and accuracy, intention conveyance, effective product acquisition, and user satisfaction and cooperation preference. Our framework provides a promising approach towards personalized and efficient user experiences in VR.
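
A minimal Python sketch of the intent-tuple idea and the two-LLM split might look as follows. The dataclass fields mirror the (intent, condition, intent prompt, action prompt) tuple named above; the `detect_llm` and `act_llm` callables, the prompt wording, and the fallback matching are assumptions rather than the authors' implementation.

```python
# Minimal sketch of the intent-tuple + two-LLM split described above.
# `detect_llm` and `act_llm` are placeholders for the two models; prompt
# wording and fallback matching are assumptions, not the authors' design.
from dataclasses import dataclass
from typing import Callable

@dataclass
class IntentTuple:
    intent: str         # e.g., "compare_furniture" (hypothetical name)
    condition: str      # core description of the task, used for matching
    intent_prompt: str  # prompt fragment for the intent-detection LLM
    action_prompt: str  # prompt fragment for the action-generation LLM

def handle_interaction(user_input: str,
                       registry: list[IntentTuple],
                       detect_llm: Callable[[str], str],
                       act_llm: Callable[[str], str]) -> str:
    # Stage 1: a short prompt asks the first LLM to pick the matching intent,
    # keeping the detection prompt compact.
    options = "\n".join(f"- {t.intent}: {t.condition}" for t in registry)
    chosen = detect_llm(f"User said: {user_input}\nPick one intent:\n{options}").strip()
    matched = next((t for t in registry if t.intent == chosen), registry[0])
    # Stage 2: the second LLM generates the tailored response or action.
    return act_llm(f"{matched.action_prompt}\nUser said: {user_input}")
```

Splitting detection from generation keeps each prompt short and role-specific, which is the benefit the abstract attributes to the 2-LLMs architecture.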

MindTalker: Navigating the Complexities of AI-Enhanced Social Engagement for People with Early-Stage Dementia
Description

People living with dementia are at risk of social isolation, and conversational AI agents can potentially support such individuals by reducing their loneliness. In our study, a conversational AI agent called MindTalker, co-designed with therapists and built on the GPT-4 Large Language Model (LLM), was developed to support people with early-stage dementia, allowing them to experience a new type of “social relationship” that could extend into real life. Eight people with dementia (PwD) engaged with MindTalker for one month or longer, and data were collected through interviews. Our findings emphasize that participants valued the novelty of AI but sought more consistent, deeper interactions. They desired a personal touch from the AI while stressing the irreplaceable value of human interaction. The findings underscore the complexities of AI engagement dynamics; participants commented on the artificial nature of AI, highlighting important insights for the future design of conversational AI for this population.
