Remote Presentations: Highlight on AI

Conference Name
CHI 2024
Mind The Gap: Designers and Standards on Algorithmic System Transparency for Users
Abstract

Many call for algorithmic systems to be more transparent, yet it is often unclear for designers how to do so in practice. Standards are emerging that aim to support designers in building transparent systems, e.g. by setting testable transparency levels, but their efficacy in this regard is not yet understood. In this paper, we use the 'Standard for Transparency of Autonomous Systems' (IEEE 7001) to explore designers' understanding of algorithmic system transparency, and the degree to which their perspectives align with the standard's recommendations. Our mixed-methods study reveals that participants consider transparency important and difficult to implement, and that they welcome support. However, despite IEEE 7001's potential, many did not find its recommendations particularly appropriate. Given the importance of and increasing attention to transparency, and because standards like this purport to guide system design, our findings reveal the need for 'bridging the gap' through (i) raising designers' awareness of the importance of algorithmic system transparency, alongside (ii) better engagement between stakeholders (i.e., standards bodies, designers, and users). We further identify opportunities for developing transparency best practices as a means to drive more responsible systems going forward.

Authors
Bianca Schor
University of Cambridge, Cambridge, United Kingdom
Chris Norval
University of Cambridge, Cambridge, United Kingdom
Ellen Charlesworth
Durham University, Durham, United Kingdom
Jat Singh
University of Cambridge, Cambridge, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642531

Video
I lose vs. I earn: Consumer perceived price fairness toward algorithmic (vs. human) price discrimination
Abstract

Many companies are turning to algorithms to determine prices. However, little research has been done to investigate consumers' perceived price fairness when price discrimination is implemented by either a human or an algorithm. The results of two experiments with a 2 (price-setting agent: algorithm vs. human) × 2 (price discrimination: advantaged vs. disadvantaged) between-subjects design reveal that consumers perceive disadvantaged price discrimination as being more unfair when it is implemented by a human (vs. algorithm). Conversely, they perceive advantaged price discrimination as being more unfair when it is implemented by an algorithm (vs. human). This difference is caused by distinct attribution processes. Consumers are more likely to externalize disadvantaged price discrimination implemented by a human than an algorithm (i.e., attributing it to the unintentionality of price-setting agents), while they are more likely to internalize advantaged price discrimination implemented by a human than an algorithm (i.e., attributing it to perceived personal luck). Based on these findings, we discuss how designers and managers can design and utilize algorithms to implement price discrimination that reduces consumer perception of price unfairness. We believe that reasonable disclosure of algorithmic clues to consumers can maximize the benefits of price discrimination strategies.

Authors
Xiaoping Zhang
Renmin University of China, Beijing, China
Xusen Cheng
Renmin University of China, Beijing, China
Paper URL

doi.org/10.1145/3613904.3642280

Video
Towards a Non-Ideal Methodological Framework for Responsible ML
Abstract

Though ML practitioners increasingly employ various Responsible ML (RML) strategies, their methodological approach in practice is still unclear. In particular, the constraints, assumptions, and choices of practitioners with technical duties (such as developers, engineers, and data scientists) are often implicit, subtle, and under-scrutinized in HCI and related fields. We interviewed 22 technically oriented ML practitioners across seven domains to understand the characteristics of their methodological approaches to RML through the lens of ideal and non-ideal theorizing of fairness. We find that practitioners' methodological approaches fall along a spectrum of idealization. While they structured their approaches through ideal theorizing, such as by abstracting the RML workflow from the inquiry into the applicability of ML, they neither systematically documented nor paid deliberate attention to their non-ideal approaches, such as diagnosing imperfect conditions. We end our paper with a discussion of a new methodological approach, inspired by elements of non-ideal theory, to structure technical practitioners' RML process and facilitate collaboration with other stakeholders.

Authors
Ramaravind Kommiya Mothilal
University of Toronto, Toronto, Ontario, Canada
Shion Guha
University of Toronto, Toronto, Ontario, Canada
Syed Ishtiaque Ahmed
University of Toronto, Toronto, Ontario, Canada
Paper URL

doi.org/10.1145/3613904.3642501

Video
(Beyond) Reasonable Doubt: Challenges that Public Defenders Face in Scrutinizing AI in Court
Abstract

Accountable use of AI systems in high-stakes settings relies on making systems contestable. In this paper we study efforts to contest AI systems in practice by examining how public defenders scrutinize AI in court. We present findings from interviews with 17 people in the U.S. public defense community to understand their perceptions of and experiences scrutinizing computational forensic software (CFS): automated decision systems that the government uses to convict and incarcerate, such as facial recognition, gunshot detection, and probabilistic genotyping tools. We find that our participants faced challenges assessing and contesting CFS reliability due to difficulties (a) navigating how CFS is developed and used, (b) overcoming judges' and jurors' non-critical perceptions of CFS, and (c) gathering CFS expertise. To conclude, we provide recommendations that center the technical, social, and institutional context to better position interventions such as performance evaluations to support contestability in practice.

Authors
Angela Jin
University of California, Berkeley, Berkeley, California, United States
Niloufar Salehi
UC Berkeley, Berkeley, California, United States
Paper URL

doi.org/10.1145/3613904.3641902

Video
“The bus is nothing without us”: Making Visible the Labor of Bus Operators amid the Ongoing Push Towards Transit Automation
Abstract

This paper describes how the complexity of circumstances bus operators manage presents unique challenges to the feasibility of high-level automation in public transit. Avoiding an overly rationalized view of bus operators' labor is critical to ensure the introduction of automation technologies does not compromise public wellbeing, the dignity of transit workers, or the integrity of critical public infrastructure. Our findings from a group interview study show that bus operators take on work — undervalued by those advancing automation technologies — to ensure the well-being of passengers and community members. Notably, bus operators are positioned to function as shock absorbers during social crises in their communities and in moments of technological breakdown as new systems come on board. These roles present a critical argument against the rapid push toward driverless automation in public transit. We conclude by identifying opportunities for participatory design and collaborative human-machine teaming for a more just future of transit.

Authors
Hunter Akridge
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Bonnie Fan
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Alice Xiaodi Tang
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chinar Mehta
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Nikolas Martelaro
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Sarah E. Fox
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

doi.org/10.1145/3613904.3642714

Video
Care-Based Eco-Feedback Augmented with Generative AI: Fostering Pro-Environmental Behavior through Emotional Attachment
Abstract

Lights out! With the escalating climate crisis, eco-feedback has gained prominence over the last decade. However, traditional approaches may underperform because they often rely on data-driven strategies and assume that people only need additional information about their consumption to change their behavior. A proposed path to overcome this issue is to design eco-feedback that fosters emotional connections with users. However, little is known about the effectiveness of such designs. In this paper, we propose a novel care-based eco-feedback system. Central to the system is a Tamagotchi-inspired digital character named INFI, who gets its life force from the user's energy savings. Additionally, we harness the latest advancements in generative artificial intelligence to enhance emotional attachment through conversational interactions that users can have with INFI. The results of a randomized controlled experiment (N=420) show that this design increases emotional attachment, which in turn increases energy-saving behavior.

Authors
Manon Berney
Institute for Information Management, Neuchâtel, Switzerland
Abdessalam Ouaazki
University of Neuchâtel, Neuchâtel, Switzerland
Vladimir Macko
University of Neuchâtel, Neuchâtel, Switzerland
Bruno Kocher
University of Neuchâtel, Neuchâtel, Switzerland
Adrian Holzer
University of Neuchâtel, Neuchâtel, Switzerland
Paper URL

doi.org/10.1145/3613904.3642296

Video
DeepTreeSketch: Neural Graph Prediction for Faithful 3D Tree Modeling from Sketches
Abstract

We present DeepTreeSketch, a novel AI-assisted sketching system that enables users to create realistic 3D tree models from 2D freehand sketches. Our system leverages a tree graph prediction network, TGP-Net, to learn the underlying structural patterns of trees from a large collection of 3D tree models. The TGP-Net simulates the iterative growth of botanical trees and progressively constructs the 3D tree structures in a bottom-up manner. Furthermore, our system supports a flexible sketching mode for both precise and coarse control of the tree shapes by drawing branch strokes and foliage strokes, respectively. Combined with a procedural generation strategy, users can freely control the foliage propagation with diverse and fine details. We demonstrate the expressiveness, efficiency, and usability of our system through various experiments and user studies. Our system offers a practical tool for 3D tree creation, especially for natural scenes in games, movies, and landscape applications.

Authors
Zhihao Liu
The University of Tokyo, Tokyo, Japan
Yu Li
Chinese Academy of Sciences, Shenzhen, China
Fangyuan Tu
The Chinese University of Hong Kong, Hong Kong, China
Ruiyuan Zhang
The Chinese University of Hong Kong, Shenzhen, Shenzhen, China
Zhanglin Cheng
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
Naoto Yokoya
The University of Tokyo, Tokyo, Japan
Paper URL

doi.org/10.1145/3613904.3642125

Video
Amplifying Human Capabilities in Prostate Cancer Diagnosis: An Empirical Study of Current Practices and AI Potentials in Radiology
Abstract

This paper examines the potential of Human-Centered AI (HCAI) solutions to support radiologists in diagnosing prostate cancer. Prostate cancer is one of the most prevalent cancers among men, and its incidence is rising. The scarcity of radiologists raises concerns about their ability to address the growing demand for prostate cancer diagnosis, leading to a significant surge in radiologists' workload. Drawing on an HCAI approach, we sought to understand current practices concerning radiologists' work on detecting and diagnosing prostate cancer, as well as the challenges they face. The findings from our empirical studies point toward the potential of AI to expedite informed decision-making and enhance accuracy, efficiency, and consistency. This is particularly beneficial for collaborative prostate cancer diagnosis processes. We discuss these results and introduce design recommendations and HCAI concepts for the domain of prostate cancer diagnosis, with the aim of amplifying the professional capabilities of radiologists.

Authors
Sheree May Saßmannshausen
University of Siegen, Siegen, Germany
Nazmun Nisat Ontika
University of Siegen, Siegen, North Rhine-Westphalia, Germany
Aparecido Fabiano Pinatti de Carvalho
University of Oslo, Oslo, Norway
Mark Rouncefield
Lancaster University, Lancaster, United Kingdom
Volkmar Pipek
University of Siegen, Siegen, Germany
Paper URL

doi.org/10.1145/3613904.3642362

Video
Data Ethics Emergency Drill: A Toolbox for Discussing Responsible AI for Industry Teams
Abstract

Researchers urge technology practitioners such as data scientists to consider the impacts and ethical implications of algorithmic decisions. However, unlike programming, statistics, and data management, discussion of ethical implications is rarely included in standard data science training. To begin to address this gap, we designed and tested a toolbox called the data ethics emergency drill (DEED) to help data science teams discuss and reflect on the ethical implications of their work. The DEED is a roleplay of a fictional ethical emergency scenario that is contextually situated in the team’s specific workplace and applications. This paper outlines the DEED toolbox and describes three studies carried out with two different data science teams that iteratively shaped its design. Our findings show that practitioners can apply lessons learnt from the roleplay to real-life situations, and how the DEED opened up conversations around ethics and values.

Authors
Vanessa Aisyahsari Hanschke
University of Bristol, Bristol, United Kingdom
Dylan Rees
LV= General Insurance, London, United Kingdom
Merve Alanyali
LV= General Insurance, London, United Kingdom
David Hopkinson
LV= General Insurance, London, United Kingdom
Paul Marshall
University of Bristol, Bristol, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642402

Video
JupyterLab in Retrograde: Contextual Notifications That Highlight Fairness and Bias Issues for Data Scientists
Abstract

Current algorithmic fairness tools focus on auditing completed models, neglecting the potential downstream impacts of iterative decisions about cleaning data and training machine learning models. In response, we developed Retrograde, a JupyterLab environment extension for Python that generates real-time, contextual notifications for data scientists about decisions they are making regarding protected classes, proxy variables, missing data, and demographic differences in model performance. Our novel framework uses automated code analysis to trace data provenance in JupyterLab, enabling these notifications. In a between-subjects online experiment, 51 data scientists constructed loan-decision models with Retrograde providing notifications continuously throughout the process, only at the end, or never. Retrograde's notifications successfully nudged participants to account for missing data, avoid using protected classes as predictors, minimize demographic differences in model performance, and exhibit healthy skepticism about their models.
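The abstract's idea of notifications driven by automated code analysis can be illustrated with a minimal sketch that uses Python's `ast` module to flag notebook cells referencing protected-class columns. The column list, function names, and detection heuristic here are our own illustrative assumptions; they are not Retrograde's actual implementation, which the abstract does not detail.

```python
import ast

# Hypothetical protected-class column names; Retrograde's real detection
# logic is not described in the abstract, so this list is illustrative.
PROTECTED_COLUMNS = {"race", "gender", "age", "religion"}

def _string_keys(node):
    """Yield string literals used as subscript keys, including lists
    of column names such as df[["a", "b"]]."""
    if isinstance(node, ast.Constant) and isinstance(node.value, str):
        yield node.value
    elif isinstance(node, (ast.List, ast.Tuple)):
        for elt in node.elts:
            yield from _string_keys(elt)

def find_protected_column_uses(source: str) -> list:
    """Return protected-class column names referenced as subscripts
    (e.g. df["race"]) in a cell's source code."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Subscript):
            for key in _string_keys(node.slice):
                if key.lower() in PROTECTED_COLUMNS:
                    hits.append(key)
    return hits
```

In a real extension, analysis like this would run on each executed cell and surface a notification in the JupyterLab UI rather than return a list, and tracing provenance across cells would require substantially more machinery.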

Award
Best Paper
Authors
Galen Harrison
University of Virginia, Charlottesville, Virginia, United States
Kevin Bryson
University of Chicago, Chicago, Illinois, United States
Ahmad Emmanuel Balla Bamba
University of Chicago, Chicago, Illinois, United States
Luca Dovichi
University of Chicago, Chicago, Illinois, United States
Aleksander Herrmann Binion
University of Chicago, Chicago, Illinois, United States
Arthur Borem
University of Chicago, Chicago, Illinois, United States
Blase Ur
University of Chicago, Chicago, Illinois, United States
Paper URL

doi.org/10.1145/3613904.3642755

Video
Understanding Contestability on the Margins: Implications for the Design of Algorithmic Decision-making in Public Services
Abstract

Policymakers have established that the ability to contest decisions made by or with algorithms is core to responsible artificial intelligence (AI). However, there has been a disconnect between research on contestability of algorithms, and what the situated practice of contestation looks like in contexts across the world, especially amongst communities on the margins. We address this gap through a qualitative study of follow-up and contestation in accessing public services for land ownership in rural India and affordable housing in the urban United States. We find there are significant barriers to exercising rights and contesting decisions, which intermediaries like NGO workers or lawyers work with communities to address. We draw on the notion of accompaniment in global health to highlight the open-ended work required to support people in navigating violent social systems. We discuss the implications of our findings for key aspects of contestability, including building capacity for contestation, human review, and the role of explanations. We also discuss how sociotechnical systems of algorithmic decision-making can embody accompaniment by taking on a higher burden of preventing denials and enabling contestation.

Authors
Naveena Karusala
Harvard University, Allston, Massachusetts, United States
Sohini Upadhyay
Harvard University, Allston, Massachusetts, United States
Rajesh Veeraraghavan
Georgetown University, Washington, District of Columbia, United States
Krzysztof Z. Gajos
Harvard University, Allston, Massachusetts, United States
Paper URL

doi.org/10.1145/3613904.3641898

Video
In-Between Visuals and Visible: The Impacts of Text-to-Image Generative AI Tools on Digital Image-making Practices in the Global South
Abstract

This paper joins the growing body of HCI work on critical AI studies and focuses on the impact of Generative Artificial Intelligence (GAI) tools in Bangladesh. While the West has started to examine the limitations and risks associated with these tools, their impacts on the Global South have remained understudied. Based on our interviews, focus group discussions (FGD), and social media-based qualitative study, this paper reports how popular text-to-image GAI tools (e.g., DALL-E, Midjourney, Stable Diffusion, Firefly) are affecting various image-related local creative fields. We report how these tools limit the creative explorations of marginal artists, struggle to understand linguistic nuances, fail to generate local forms of art and architecture, and misrepresent the diversity among citizens in the image production process. Drawing from a rich body of work on critical image theory, postcolonial computing, and design politics, we explain how our findings are pertinent to HCI's broader interest in social justice, decolonization, and global development.

Authors
Nusrat Jahan Mim
Harvard University, Cambridge, Massachusetts, United States
Dipannita Nandi
Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
Sadaf Sumyia Khan
Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh
Arundhuti Dey
Miami University, Oxford, Ohio, United States
Syed Ishtiaque Ahmed
University of Toronto, Toronto, Ontario, Canada
Paper URL

doi.org/10.1145/3613904.3641951

Video
Towards Building Condition-Based Cross-Modality Intention-Aware Human-AI Cooperation under VR Environment
Abstract

To address critical challenges in effectively identifying user intent and forming relevant information presentations and recommendations in VR environments, we propose an innovative condition-based multi-modal human-AI cooperation framework. It highlights intent tuples (intent, condition, intent prompt, action prompt) and a 2-Large-Language-Models (2-LLMs) architecture. This design utilizes "condition" as the core for describing tasks, dynamically matches user interactions with intentions, and enables the generation of tailored multi-modal AI responses. The 2-LLMs architecture separates the roles of intent detection and action generation, decreasing prompt length and helping generate appropriate responses. We implemented a VR-based intelligent furniture purchasing system based on the proposed framework and conducted a three-phase comparative user study. The results demonstrate the system's superiority in time efficiency and accuracy, improvements in intention conveyance, effective product acquisition, and user satisfaction and cooperation preference. Our framework provides a promising approach towards personalized and efficient user experiences in VR.
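The separation between intent detection (LLM 1) and action generation (LLM 2), keyed on condition matching, can be sketched as follows. The tuple fields come from the abstract; all concrete names, prompts, and the matching heuristic are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

# Illustrative rendering of the paper's intent tuple
# (intent, condition, intent prompt, action prompt).
@dataclass
class IntentTuple:
    intent: str
    condition: str      # predicate on the current task/VR state
    intent_prompt: str  # consumed by LLM 1 (intent detection)
    action_prompt: str  # consumed by LLM 2 (action generation)

# Hypothetical tuples for a furniture/shopping scenario.
INTENTS = [
    IntentTuple("compare_items", "viewing_two_products",
                "Does the utterance ask to compare products?",
                "Generate a side-by-side comparison of the two products."),
    IntentTuple("ask_price", "focused_on_one_product",
                "Does the utterance ask about the price?",
                "State the product's price and any current discount."),
]

def detect_intent(utterance, active_conditions, llm1):
    """LLM 1 only sees short intent prompts for tuples whose
    condition currently holds, which keeps prompts small."""
    for t in INTENTS:
        if t.condition in active_conditions and llm1(t.intent_prompt, utterance):
            return t
    return None

def respond(utterance, active_conditions, llm1, llm2):
    """LLM 2 receives only the matched tuple's action prompt."""
    t = detect_intent(utterance, active_conditions, llm1)
    return llm2(t.action_prompt, utterance) if t else "fallback"
```

Here `llm1` and `llm2` stand in for calls to two separately prompted language models; the design point this sketch captures is that neither model ever sees the full intent catalogue at once.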

Authors
Ziyao He
Xi'an Jiaotong University, Xi'an, China
Shiyuan Li
Xi'an Jiaotong University, Xi'an, China
Yunpeng Song
Xi'an Jiaotong University, Xi'an, China
Zhongmin Cai
Xi'an Jiaotong University, Xi'an, China
Paper URL

doi.org/10.1145/3613904.3642360

Video
MindTalker: Navigating the Complexities of AI-Enhanced Social Engagement for People with Early-Stage Dementia
Abstract

People living with dementia are at risk of social isolation, and conversational AI agents can potentially support such individuals by reducing their loneliness. In our study, a conversational AI agent called MindTalker, co-designed with therapists and utilizing the GPT-4 Large Language Model (LLM), was developed to support people with early-stage dementia, allowing them to experience a new type of "social relationship" that could be extended to real life. Eight people with dementia (PwD) engaged with MindTalker for one month or longer, and data were collected from interviews. Our findings emphasize that participants valued the novelty of AI but sought more consistent, deeper interactions. They desired a personal touch from AI while stressing the irreplaceable value of human interactions. The findings underscore the complexities of AI engagement dynamics: participants commented on the artificial nature of AI, highlighting important insights for the future design of conversational AI for this population.

Authors
Anna Xygkou
University of Kent, Canterbury, United Kingdom
Chee Siang Ang
University of Kent, Canterbury, KENT, United Kingdom
Panote Siriaraya
Kyoto Institute of Technology, Kyoto, Japan
Jonasz Piotr Kopecki
Adam Mickiewicz University in Poznań, Poznań, Poland
Alexandra Covaci
University of Kent, Canterbury, Kent, United Kingdom
Eiman Kanjo
Nottingham Trent University, Nottingham, United Kingdom
Wan-Jou She
Nara Institute of Science and Technology, Ikoma City, Nara, Japan
Paper URL

doi.org/10.1145/3613904.3642538

Video