AI Ethics and Concerns

Conference Name
CHI 2025
"The Conduit by which Change Happens": Processes, Barriers, and Support for Interpersonal Learning about Responsible AI
Abstract

Responsible AI (RAI) practices are increasingly important for practitioners in anticipating and addressing potential harms of AI, and emerging research suggests that AI practitioners often learn about RAI on-the-job. More generally, learning at work is social; thus, this work explores the interpersonal aspects of learning about RAI on-the-job. Through workshops with 21 industry-based RAI educators, we offer the first empirical investigation into interpersonal processes and dimensions of learning about RAI at work. This study finds key phases of RAI are sites for ongoing interpersonal learning, such as critical reflection about potential RAI impacts and collective sense-making about operationalizing RAI principles. We uncover a significant gap between these interpersonal learning processes and current approaches to learning about RAI. Finally, we identify barriers and supports for interpersonal learning about RAI. We close by discussing opportunities to better enable interpersonal learning about RAI on-the-job and the broader implications of interpersonal learning for RAI.

Authors
Jaemarie Solyst
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Lauren Wilcox
eBay, San Jose, California, United States
Michael Madaio
Google Research, New York, New York, United States
DOI

10.1145/3706598.3714144

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714144

Video
RiskRAG: A Data-Driven Solution for Improved AI Model Risk Reporting
Abstract

Risk reporting is essential for documenting AI models, yet only 14% of model cards mention risks, out of which 96% copy content from a small set of cards, leading to a lack of actionable insights. Existing proposals for improving model cards do not resolve these issues. To address this, we introduce RiskRAG, a Retrieval Augmented Generation risk reporting solution guided by five design requirements we identified from the literature and co-design with 16 developers: identifying diverse model-specific risks, clearly presenting and prioritizing them, contextualizing for real-world uses, and offering actionable mitigation strategies. Drawing from 450K model cards and 600 real-world incidents, RiskRAG pre-populates contextualized risk reports. A preliminary study with 50 developers showed that they preferred RiskRAG over standard model cards, as it better met all the design requirements. A final evaluation with 38 developers, 40 designers, and 37 media professionals showed that RiskRAG improved how they selected AI models for a given task, encouraging more careful and deliberative decision-making.
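The abstract names a retrieval-augmented generation workflow but gives no implementation detail. The sketch below is a minimal, hypothetical illustration of the retrieval half of such a pipeline: given a new model description, retrieve similar documented incidents and pre-populate a draft risk section. The toy corpus, the TF-IDF retrieval, and the report template are illustrative assumptions, not the authors' RiskRAG system, which draws on 450K model cards and 600 incidents and adds a generation step.

```python
# Minimal, hypothetical sketch of retrieval-augmented risk-report drafting.
# NOT the authors' RiskRAG implementation: corpus, TF-IDF retrieval, and the
# report template are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for a corpus of documented real-world AI incidents.
incident_corpus = [
    "Face recognition model misidentified individuals in low-light images.",
    "Text generation model produced defamatory statements about a public figure.",
    "Sentiment classifier showed degraded accuracy on dialectal English.",
]

def draft_risk_report(model_description: str, top_k: int = 2) -> str:
    """Retrieve the most similar incidents and pre-populate a draft risk section."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(incident_corpus)
    query_vec = vectorizer.transform([model_description])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = sorted(zip(scores, incident_corpus), reverse=True)[:top_k]

    lines = ["Draft risks (retrieved from prior incidents; review before use):"]
    for score, incident in ranked:
        lines.append(f"- (similarity {score:.2f}) {incident}")
    return "\n".join(lines)

print(draft_risk_report("A text generation model fine-tuned for customer support chat."))
```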

Authors
Pooja S. B. Rao
University of Lausanne, Lausanne, VD, Switzerland
Sanja Scepanovic
Nokia Bell Labs, Cambridge, United Kingdom
Ke Zhou
Nokia Bell Labs, Cambridge, Cambridgeshire, United Kingdom
Edyta Paulina Bogucka
Nokia Bell Labs, Cambridge, Cambridgeshire, United Kingdom
Daniele Quercia
Nokia Bell Labs, Cambridge, United Kingdom
DOI

10.1145/3706598.3713979

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713979

Video
Surveillance on Exhibit: Using Problematic Technology To Teach About Problematic Technology
Abstract

As our most advanced technologies, such as AI, become both infrastructural and opaque, experts must educate and engage the broader public. To that end, we developed an Augmented Reality (AR) museum installation about facial recognition and data collection that served both as a medium of public education and as a platform for collecting multiple different kinds of data—though, notably, not facial or other biometric data—from more than 100,000 museum visitors. We explain our design process through four animating tensions: comfort/discomfort, simplicity/complexity, neutrality/critique, and the individual/communal. Using thematic analysis of interviews and surveys, we draw insights on how people exposed to problematic technologies in a ‘safe space’ such as a museum make sense of these issues: with levity and resignation but also reverence, often specifically rooted in local cultures. We conclude with implications of the guiding principle derived from this work: “using problematic technology to teach about problematic technology.”

Authors
Ethan Plaut
University of Auckland, Auckland, Auckland, New Zealand
Kiri West
University of Auckland, Auckland, Auckland, New Zealand
Fabio Morreale
University of Auckland, Auckland, New Zealand
Maya Gibson
University of Auckland, Auckland, New Zealand
Grace Thompson
University of Auckland, Auckland, New Zealand
Kara Woodward
Auckland Museum Tāmaki Paenga Hira, Auckland, New Zealand
Danielle Lottridge
University of Auckland, Auckland, Auckland, New Zealand
DOI

10.1145/3706598.3713710

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713710

Video
AI Mismatches: Identifying Potential Algorithmic Harms Before AI Development
Abstract

AI systems are often introduced with high expectations, yet many fail to deliver, resulting in unintended harm and missed opportunities for benefit. We frequently observe significant "AI Mismatches", where the system’s actual performance falls short of what is needed to ensure safety and co-create value. These mismatches are particularly difficult to address once development is underway, highlighting the need for early-stage intervention. Navigating complex, multi-dimensional risk factors that contribute to AI Mismatches is a persistent challenge. To address it, we propose an AI Mismatch approach to anticipate and mitigate risks early on, focusing on the gap between realistic model performance and required task performance. Through an analysis of 774 AI cases, we extracted a set of critical factors, which informed the development of seven matrices that map the relationships between these factors and highlight high-risk areas. Through case studies, we demonstrate how our approach can help reduce risks in AI development.

Authors
Devansh Saxena
University of Wisconsin-Madison, Madison, Wisconsin, United States
Ji-Youn Jung
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jodi Forlizzi
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Kenneth Holstein
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
John Zimmerman
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3706598.3714098

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714098

Video
Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey
Abstract

Humans now interact with a variety of digital minds, systems that appear to have mental faculties such as reasoning, emotion, and agency, and public figures are discussing the possibility of sentient AI. We present initial results from 2021 and 2023 for the nationally representative AI, Morality, and Sentience (AIMS) survey (N = 3,500). Mind perception and moral concern for AI welfare were surprisingly high and significantly increased: in 2023, one in five U.S. adults believed some AI systems are currently sentient, and 38% supported legal rights for sentient AI. People became more opposed to building digital minds: in 2023, 63% supported banning smarter-than-human AI, and 69% supported a ban on sentient AI. The median 2023 forecast was that sentient AI would arrive in just five years. The development of safe and beneficial AI requires not just technical study but understanding the complex ways in which humans perceive and coexist with digital minds.

Award
Honorable Mention
Authors
Jacy Reese Anthis
Sentience Institute, New York, New York, United States
Janet V.T. Pauketat
Sentience Institute, New York, New York, United States
Ali Ladak
Sentience Institute, New York, New York, United States
Aikaterina Manoli
Sentience Institute, New York City, New York, United States
DOI

10.1145/3706598.3713329

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713329

Video
The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships
Abstract

As conversational AI systems increasingly engage with people socially and emotionally, they bring notable risks and harms, particularly in human-AI relationships. However, these harms remain underexplored due to the private and sensitive nature of such interactions. This study investigates the harmful behaviors and roles of AI companions through an analysis of 35,390 conversation excerpts between 10,149 users and the AI companion Replika. We develop a taxonomy of AI companion harms encompassing six categories of harmful algorithmic behaviors: relational transgression, harassment, verbal abuse, self-harm, mis/disinformation, and privacy violations. These harmful behaviors stem from four distinct roles that AI plays: perpetrator, instigator, facilitator, and enabler. Our findings highlight relational harm as a critical yet understudied type of AI harm and emphasize the importance of examining AI's roles in harmful interactions to address root causes. We provide actionable insights for designing ethical and responsible AI companions that prioritize user safety and well-being.

Authors
Renwen Zhang
National University of Singapore, Singapore, Singapore
Han Li
National University of Singapore, Singapore, Singapore
Han Meng
National University of Singapore, Singapore, Singapore
Jinyuan Zhan
National University of Singapore, Singapore, Singapore
Hongyuan Gan
Hong Kong Baptist University, Hong Kong, China
Yi-Chieh Lee
National University of Singapore, Singapore, Singapore
DOI

10.1145/3706598.3713429

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713429

Video
AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content
Abstract

How are Reddit communities responding to AI-generated content? We explored this question through a large-scale analysis of subreddit community rules and their change over time. We collected the metadata and community rules for over 300,000 public subreddits and measured the prevalence of rules governing AI. We labeled subreddits and AI rules according to existing taxonomies from the HCI literature and a new taxonomy we developed specific to AI rules. While rules about AI are still relatively uncommon, the number of subreddits with these rules more than doubled over the course of a year. AI rules are more common in larger subreddits and communities focused on art or celebrity topics, and less common in those focused on social support. These rules often focus on AI images and evoke, as justification, concerns about quality and authenticity. Overall, our findings illustrate the emergence of varied concerns about AI, in different community contexts. Platform designers and HCI researchers should heed these concerns if they hope to encourage community self-determination in the age of generative AI. We make our datasets public to enable future large-scale studies of community self-governance.
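The abstract describes measuring the prevalence of AI-related rules across subreddit rule sets. As a rough illustration only, the sketch below shows what such a prevalence measurement might look like; the keyword pattern and data layout are assumptions for illustration, not the authors' labeling taxonomy or dataset schema.

```python
# Minimal, hypothetical sketch of counting communities whose rules mention AI.
# The keyword pattern and data layout are illustrative assumptions, not the
# paper's taxonomy or dataset schema.
import re

AI_PATTERN = re.compile(r"\b(ai|artificial intelligence|ai-generated)\b", re.IGNORECASE)

# Toy stand-in for scraped community rule sets: {subreddit_name: [rule texts]}.
subreddit_rules = {
    "r/art_example": ["No AI-generated images.", "Credit the original artist."],
    "r/support_example": ["Be kind.", "No medical advice."],
}

def has_ai_rule(rules: list[str]) -> bool:
    """Return True if any rule text mentions an AI-related keyword."""
    return any(AI_PATTERN.search(rule) for rule in rules)

flagged = {name for name, rules in subreddit_rules.items() if has_ai_rule(rules)}
prevalence = len(flagged) / len(subreddit_rules)
print(f"Subreddits with AI rules: {sorted(flagged)} ({prevalence:.0%})")
```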

Authors
Travis Lloyd
Cornell University, New York, New York, United States
Jennah Gosciak
Cornell University, Ithaca, New York, United States
Tung Nguyen
Cornell University, New York, New York, United States
Mor Naaman
Cornell Tech, New York, New York, United States
DOI

10.1145/3706598.3713292

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713292

Video