Humans and Machines

Conference Name
CHI 2023
Conceptualizing Algorithmic Stigmatization
Abstract

Algorithmic systems have infiltrated many aspects of our society, from the mundane to the high-stakes, and can lead to algorithmic harms known as representational and allocative harms. In this paper, we consider what stigma theory illuminates about mechanisms leading to algorithmic harms in algorithmic assemblages. We apply the four stigma elements (i.e., labeling, stereotyping, separation, status loss/discrimination) outlined in sociological stigma theories to algorithmic assemblages in two contexts: 1) "risk prediction" algorithms in higher education, and 2) suicidal expression and ideation detection on social media. We contribute the novel theoretical conceptualization of algorithmic stigmatization as a sociotechnical mechanism that leads to a unique kind of algorithmic harm: algorithmic stigma. Theorizing algorithmic stigmatization aids in identifying theoretically-driven points of intervention to mitigate and/or repair algorithmic stigma. While prior theorizations reveal how stigma governs socially and spatially, this work illustrates how stigma governs sociotechnically.

Authors
Nazanin Andalibi
University of Michigan, Ann Arbor, Michigan, United States
Cassidy Pyle
University of Michigan, Ann Arbor, Michigan, United States
Kristen Barta
University of Michigan, Ann Arbor, Michigan, United States
Lu Xian
University of Michigan, Ann Arbor, Michigan, United States
Abigail Z. Jacobs
University of Michigan, Ann Arbor, Michigan, United States
Mark S. Ackerman
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3544548.3580970

Video
'Specially For You' -- Examining the Barnum Effect's Influence on the Perceived Quality of System Recommendations
Abstract

The ‘Barnum effect’ is a psychological phenomenon under which people assign higher quality ratings to personality descriptions developed ‘specially for you’ than the same descriptions described as ‘generally true of people.’ This effect suggests that recommender interfaces could elevate the perceived quality of recommendations simply by indicating that they are explicitly personalised. We therefore conducted a crowd-sourced experiment (n=492) that examined the perceived quality of personalised versus non-personalised movie recommendations for good and bad movies – importantly, the actual recommendations were identical, and were merely presented as being either personalised or not. Contrary to the Barnum effect, results showed numerically lower mean quality scores for personalised recommendations, but with no significant difference. Our findings suggest that Barnum-like effects of personalisation have at most a small influence on perceived quality, and that designers should not rely on this effect to improve user experience (despite online design guidance suggesting the opposite).

Authors
Pang Suwanaposee
University of Canterbury, Christchurch, New Zealand
Carl Gutwin
University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Zhe Chen
University of Canterbury, Christchurch, New Zealand
Andy Cockburn
University of Canterbury, Christchurch, New Zealand
Paper URL

https://doi.org/10.1145/3544548.3580656

Video
Understanding Practices, Challenges, and Opportunities for User-Engaged Algorithm Auditing in Industry Practice
Abstract

Recent years have seen growing interest among both researchers and practitioners in user-engaged approaches to algorithm auditing, which directly engage users in detecting problematic behaviors in algorithmic systems. However, we know little about industry practitioners' current practices and challenges around user-engaged auditing, nor what opportunities exist for them to better leverage such approaches in practice. To investigate, we conducted a series of interviews and iterative co-design activities with practitioners who employ user-engaged auditing approaches in their work. Our findings reveal several challenges practitioners face in appropriately recruiting and incentivizing user auditors, scaffolding user audits, and deriving actionable insights from user-engaged audit reports. Furthermore, practitioners shared organizational obstacles to user-engaged auditing, surfacing a complex relationship between practitioners and user auditors. Based on these findings, we discuss opportunities for future HCI research to help realize the potential (and mitigate risks) of user-engaged auditing in industry practice.

Authors
Wesley Hanwen Deng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Boyuan Guo
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Alicia DeVrio
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hong Shen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Motahhare Eslami
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Kenneth Holstein
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3544548.3581026

Video
Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm
Abstract

Artificial intelligence (AI) systems can cause harm to people. This research examines how individuals react to such harm through the lens of blame. Building upon research suggesting that people blame AI systems, we investigated how several factors influence people's reactive attitudes towards machines, designers, and users. The results of three studies (N = 1,153) indicate differences in how blame is attributed to these actors. Whether AI systems were explainable did not impact blame directed at them, their developers, and their users. Considerations about fairness and harmfulness increased blame towards designers and users but had little to no effect on judgments of AI systems. Instead, what determined people's reactive attitudes towards machines was whether people thought blaming them would be a suitable response to algorithmic harm. We discuss implications, such as how future decisions about including AI systems in the social and moral spheres will shape laypeople's reactions to AI-caused harm.

Authors
Gabriel Lima
KAIST, Daejeon, Korea, Republic of
Nina Grgić-Hlača
Max Planck Institute for Software Systems, Saarbrücken, Germany
Meeyoung Cha
Institute for Basic Science (IBS), Daejeon, Korea, Republic of
Paper URL

https://doi.org/10.1145/3544548.3580953

Video
Do You Mind? User Perceptions of Machine Consciousness
Abstract

The prospect of machine consciousness cultivates controversy across media, academia, and industry. Assessing whether non-experts perceive technologies as conscious, and exploring the consequences of this perception, are yet unaddressed challenges in Human Computer Interaction (HCI). To address them, we surveyed 100 people, exploring their conceptualisations of consciousness and if and how they perceive consciousness in currently available interactive technologies. We show that many people already perceive a degree of consciousness in GPT-3, a voice chat bot, and a robot vacuum cleaner. Within participant responses we identified dynamic tensions between denial and speculation, thinking and feeling, interaction and experience, control and independence, and rigidity and spontaneity. These tensions can inform future research into perceptions of machine consciousness and the challenges it represents for HCI. With both empirical and theoretical contributions, this paper emphasises the importance of HCI in an era of machine consciousness, real, perceived or denied.

Authors
Ava Elizabeth Scott
UCL, London, United Kingdom
Daniel Neumann
University of St. Gallen, St. Gallen, Switzerland
Jasmin Niess
University of St. Gallen, St. Gallen, Switzerland
Paweł W. Woźniak
Chalmers University of Technology, Gothenburg, Sweden
Paper URL

https://doi.org/10.1145/3544548.3581296

Video
How does HCI Understand Human Agency and Autonomy?
Abstract

Human agency and autonomy have always been fundamental concepts in HCI. New developments, including ubiquitous AI and the growing integration of technologies into our lives, make these issues ever more pressing, as technologies increase their ability to influence our behaviours and values. However, in HCI, understandings of autonomy and agency remain ambiguous. Both concepts are used to describe a wide range of phenomena pertaining to sense-of-control, material independence, and identity. It is unclear to what degree these understandings are compatible, and how they support the development of research programs and practical interventions. We address this by reviewing 30 years of HCI research on autonomy and agency to identify current understandings, open issues, and future directions. From this analysis, we identify ethical issues, and outline key themes to guide future work. We also articulate avenues for advancing clarity and specificity around these concepts, and for coordinating integrative work across different HCI communities.

Authors
Dan Bennett
University of Bristol, Bristol, United Kingdom
Oussama Metatla
University of Bristol, Bristol, United Kingdom
Anne Roudaut
University of Bristol, Bristol, United Kingdom
Elisa D. Mekler
Aalto University, Espoo, Finland
Paper URL

https://doi.org/10.1145/3544548.3580651

Video