"Because AI is 100% right and safe": User Attitudes and Sources of AI Authority in India
Description

Most prior work on human-AI interaction is set in communities that indicate skepticism towards AI, but we know less about contexts where AI is viewed as aspirational. We investigated perceptions of AI systems by drawing upon 32 interviews and 459 survey respondents in India. Not only do Indian users accept AI decisions (79.2% of respondents indicated acceptance), we also find a case of AI authority: AI holds a legitimized power to influence human actions without requiring adequate evidence about the system's capabilities. AI authority manifested in four user attitudes of vulnerability: faith, forgiveness, self-blame, and gratitude, pointing to a higher tolerance for system misfires and introducing the potential for irreversible individual and societal harm. We urgently call for calibrating AI authority and reconsidering success metrics and responsible AI approaches, and we present methodological suggestions for research and deployments in India.

Still Creepy After All These Years: The Normalization of Affective Discomfort in App Use
Description

It is not well understood why people continue to use privacy-invasive apps they consider creepy. We conducted a scenario-based study (n = 751) to investigate how the intention to use an app is influenced by affective perceptions and privacy concerns. We show that creepiness is one facet of affective discomfort, which is becoming normalized in app use. We found that affective discomfort can be negatively associated with the intention to use a privacy-invasive app. However, the influence is mitigated by other factors, including data literacy, views regarding app data practices, and ambiguity of the privacy threat. Our findings motivate a focus on affective discomfort when designing user experiences related to privacy-invasive data practices. Treating affective discomfort as a fundamental aspect of user experience requires scaling beyond the point where the thumb meets the screen and accounting for entrenched data practices and the sociotechnical landscape within which the practices are embedded.

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making
Description

While artificial intelligence (AI) is increasingly applied in decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived, and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI counterparts. This is reflected in participants' reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

AI-Moderated Decision-Making: Capturing and Balancing Anchoring Bias in Sequential Decision Tasks
Description

Decision-making involves biases from past experiences, which are difficult to perceive and eliminate. We investigate a specific type of anchoring bias, in which decision-makers are anchored by their own recent decisions, e.g., a college admissions officer sequentially reviewing students.

We propose an algorithm that identifies existing anchored decisions, reduces sequential dependencies on previous decisions, and mitigates decision inaccuracies post hoc, increasing agreement with the ground truth by 2% on a large-scale college admission decision data set. A crowd-sourced study validates this algorithm on product preferences (5% increased agreement).

To avoid biased decisions ex ante, we propose a procedure that presents instances in an order that reduces anchoring bias in real time. Tested in another crowd-sourced study, it reduces bias and increases agreement with the ground truth by 7%. Our work helps ensure that individuals with similar characteristics are treated similarly, independent of when they are reviewed in the decision-making process.
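The abstract does not spell out the algorithm, but the core idea of anchoring as a sequential dependency can be illustrated with a small sketch. The Python snippet below is a hypothetical probe, not the authors' method: it simulates a reviewer nudged by their own previous decision and then checks whether that previous decision carries predictive weight beyond the case's own merit.

```python
# Illustrative sketch only: NOT the paper's algorithm. Assumes a hypothetical
# sequential-review setting with a per-case quality score and binary decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulate sequential reviews: each case has a latent quality score, and the
# reviewer is (artificially) pulled toward their own previous decision.
n = 2000
quality = rng.normal(size=n)
decisions = np.zeros(n, dtype=int)
for i in range(n):
    anchor = 0.8 * (decisions[i - 1] - 0.5) if i > 0 else 0.0
    p = 1.0 / (1.0 + np.exp(-(quality[i] + anchor)))
    decisions[i] = rng.random() < p

# Fit decision ~ case quality + previous decision. A clearly nonzero weight on
# the previous decision, beyond what quality explains, is a simple symptom of
# sequential dependence, i.e. decisions anchored on the reviewer's own history.
X = np.column_stack([quality[1:], decisions[:-1]])
model = LogisticRegression().fit(X, decisions[1:])
print(f"weight on case quality:      {model.coef_[0][0]:.2f}")
print(f"weight on previous decision: {model.coef_[0][1]:.2f}")
```

A post-hoc mitigation in the spirit of the abstract would then down-weight or re-estimate decisions whose best explanation is the previous decision rather than the case itself; the paper's actual procedure should be consulted for how this is done.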

How Child Welfare Workers Reduce Racial Disparities in Algorithmic Decisions
Description

Machine learning tools have been deployed in various contexts to support human decision-making, in the hope that human-algorithm collaboration can improve decision quality. However, the question of whether such collaborations reduce or exacerbate biases in decision-making remains underexplored. In this work, we conducted a mixed-methods study, analyzing child welfare call screen workers' decision-making over a span of four years, and interviewing them on how they incorporate algorithmic predictions into their decision-making process. Our data analysis shows that, compared to the algorithm alone, call screen workers reduced the disparity in screen-in rate between Black and white children from 20% to 9%. Our qualitative data show that workers achieved this by making holistic risk assessments and complementing the algorithm's limitations. These results shed light on potential mechanisms for improving human-algorithm collaboration in high-risk decision-making contexts.
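As a reading aid for the 20% and 9% figures, the short sketch below uses hypothetical counts and assumes that "disparity in screen-in rate" is the percentage-point gap between Black and white children; the actual metric and numbers come from the paper's own analysis, not from this snippet.

```python
# Toy illustration with hypothetical counts; assumes the reported disparity is
# the percentage-point gap in screen-in rates between the two groups.
screened_in = {"Black": 600, "white": 400}    # hypothetical screened-in referrals
referred    = {"Black": 1000, "white": 1000}  # hypothetical total referrals

rates = {group: screened_in[group] / referred[group] for group in referred}
gap_pct_points = (rates["Black"] - rates["white"]) * 100

print(rates)                                                  # {'Black': 0.6, 'white': 0.4}
print(f"disparity: {gap_pct_points:.0f} percentage points")   # 20
```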
