Bias and Ethics

Conference Name
CHI 2022
"Because AI is 100% right and safe": User Attitudes and Sources of AI Authority in India
Abstract

Most prior work on human-AI interaction is set in communities that indicate skepticism towards AI, but we know less about contexts where AI is viewed as aspirational. We investigated the perceptions around AI systems by drawing upon 32 interviews and 459 survey respondents in India. Not only do Indian users accept AI decisions (79.2% of respondents indicate acceptance), we find a case of AI authority: AI has a legitimized power to influence human actions, without requiring adequate evidence about the capabilities of the system. AI authority manifested in four user attitudes of vulnerability: faith, forgiveness, self-blame, and gratitude, pointing to higher tolerance for system misfires and introducing potential for irreversible individual and societal harm. We urgently call for calibrating AI authority, reconsidering success metrics and responsible AI approaches, and present methodological suggestions for research and deployments in India.

Authors
Shivani Kapania
Google Research, Bengaluru, India
Oliver Siy
Google Research, Seattle, Washington, United States
Gabe Clapper
Google Research, Seattle, Washington, United States
Azhagu Meena SP
Google Research, Bengaluru, India
Nithya Sambasivan
Google Research, Bengaluru, India
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517533

Video
Still Creepy After All These Years: The Normalization of Affective Discomfort in App Use
Abstract

It is not well understood why people continue to use privacy-invasive apps they consider creepy. We conducted a scenario-based study (n = 751) to investigate how the intention to use an app is influenced by affective perceptions and privacy concerns. We show that creepiness is one facet of affective discomfort, which is becoming normalized in app use. We found that affective discomfort can be negatively associated with the intention to use a privacy-invasive app. However, the influence is mitigated by other factors, including data literacy, views regarding app data practices, and ambiguity of the privacy threat. Our findings motivate a focus on affective discomfort when designing user experiences related to privacy-invasive data practices. Treating affective discomfort as a fundamental aspect of user experience requires scaling beyond the point where the thumb meets the screen and accounting for entrenched data practices and the sociotechnical landscape within which the practices are embedded.

Award
Best Paper
Authors
John S. Seberger
Michigan State University, East Lansing, Michigan, United States
Irina Shklovski
University of Copenhagen, Copenhagen, Denmark
Emily Swiatek
Indiana University Bloomington, Bloomington, Indiana, United States
Sameer Patil
University of Utah, Salt Lake City, Utah, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502112

Video
Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making
Abstract

While artificial intelligence (AI) is increasingly applied to decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This is reflected in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert’s. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

Award
Honorable Mention
Authors
Suzanne Tolmeijer
University of Zurich, Zurich, Switzerland
Markus Christen
University of Zurich, Zurich, Switzerland
Serhiy Kandul
University of Zurich, Zurich, Switzerland
Markus Kneer
University of Zurich, Zurich, Switzerland
Abraham Bernstein
University of Zurich, Zurich, Switzerland
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517732

Video
AI-Moderated Decision-Making: Capturing and Balancing Anchoring Bias in Sequential Decision Tasks
要旨

Decision-making involves biases from past experiences, which are difficult to perceive and eliminate. We investigate a specific type of anchoring bias, in which decision-makers are anchored by their own recent decisions, e.g., a college admission officer sequentially reviewing students. We propose an algorithm that identifies existing anchored decisions, reduces sequential dependencies to previous decisions, and mitigates decision inaccuracies post-hoc, yielding 2% increased agreement with ground truth on a large-scale college admission decision data set. A crowd-sourced study validates this algorithm on product preferences (5% increased agreement). To avoid biased decisions ex-ante, we propose a procedure that presents instances in an order that reduces anchoring bias in real time. Tested in another crowd-sourced study, it reduces bias and increases agreement with ground truth by 7%. Our work helps ensure that individuals with similar characteristics are treated similarly, independent of when they were reviewed in the decision-making process.
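The abstract only names the idea of sequential anchoring, so the following is a minimal, hypothetical Python sketch of the underlying concept: a reviewer whose decision boundary drifts toward their most recent decision, a crude sequential-dependence measure, and a naive order-independent re-scoring. The function names, thresholds, and shift value are illustrative assumptions and do not reproduce the paper's algorithm or data.

    import random

    random.seed(0)

    THRESHOLD = 0.5      # order-independent decision boundary on a 0-1 score (assumed)
    ANCHOR_SHIFT = 0.1   # how far the previous decision pulls the boundary (assumed)

    def anchored_decisions(scores):
        """Simulate a reviewer whose boundary drifts toward their last decision."""
        decisions, prev = [], None
        for s in scores:
            boundary = THRESHOLD
            if prev is True:       # after an accept, the next accept gets easier
                boundary -= ANCHOR_SHIFT
            elif prev is False:    # after a reject, the next accept gets harder
                boundary += ANCHOR_SHIFT
            prev = s >= boundary
            decisions.append(prev)
        return decisions

    def sequential_dependence(decisions):
        """Fraction of decisions that repeat the previous one (a crude anchoring signal)."""
        repeats = sum(a == b for a, b in zip(decisions, decisions[1:]))
        return repeats / (len(decisions) - 1)

    def debias_post_hoc(scores):
        """Naive post-hoc correction: re-score every instance against the fixed boundary."""
        return [s >= THRESHOLD for s in scores]

    scores = [random.random() for _ in range(1000)]
    anchored = anchored_decisions(scores)
    corrected = debias_post_hoc(scores)
    truth = [s >= THRESHOLD for s in scores]

    agree = lambda d: sum(a == t for a, t in zip(d, truth)) / len(truth)
    print("agreement with ground truth, anchored: ", agree(anchored))
    print("agreement with ground truth, corrected:", agree(corrected))
    print("sequential dependence of anchored run: ", sequential_dependence(anchored))

In this toy setup the correction trivially recovers the fixed-threshold decisions; the paper's contribution is doing something comparable when no ground-truth boundary is available and decisions come from human reviewers.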

Authors
Jessica Maria Echterhoff
University of California, San Diego, La Jolla, California, United States
Matin Yarmand
University of California, San Diego, La Jolla, California, United States
Julian McAuley
University of California, San Diego, La Jolla, California, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517443

Video
How Child Welfare Workers Reduce Racial Disparities in Algorithmic Decisions
Abstract

Machine learning tools have been deployed in various contexts to support human decision-making, in the hope that human-algorithm collaboration can improve decision quality. However, the question of whether such collaborations reduce or exacerbate biases in decision-making remains underexplored. In this work, we conducted a mixed-methods study, analyzing child welfare call screen workers' decision-making over a span of four years, and interviewing them on how they incorporate algorithmic predictions into their decision-making process. Our data analysis shows that, compared to the algorithm alone, call screen workers reduced the disparity in screen-in rate between Black and white children from 20% to 9%. Our qualitative data show that workers achieved this by making holistic risk assessments and complementing the algorithm's limitations. These results shed light on potential mechanisms for improving human-algorithm collaboration in high-risk decision-making contexts.

Authors
Hao-Fei Cheng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Logan Stapleton
University of Minnesota, Minneapolis, Minnesota, United States
Anna Kawakami
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Venkatesh Sivaraman
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yanghuidi Cheng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Diana Qing
University of California, Berkeley, Berkeley, California, United States
Adam Perer
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Kenneth Holstein
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Zhiwei Steven Wu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Haiyi Zhu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501831

Video