Algorithmic systems have infiltrated many aspects of our society, from the mundane to the high-stakes, and can lead to algorithmic harms, both representational and allocative. In this paper, we consider what stigma theory illuminates about mechanisms leading to algorithmic harms in algorithmic assemblages. We apply the four stigma elements (i.e., labeling, stereotyping, separation, status loss/discrimination) outlined in sociological stigma theories to algorithmic assemblages in two contexts: 1) "risk prediction" algorithms in higher education, and 2) suicidal expression and ideation detection on social media. We contribute the novel theoretical conceptualization of algorithmic stigmatization as a sociotechnical mechanism that leads to a unique kind of algorithmic harm: algorithmic stigma. Theorizing algorithmic stigmatization aids in identifying theoretically driven points of intervention to mitigate and/or repair algorithmic stigma. While prior theorizations reveal how stigma governs socially and spatially, this work illustrates how stigma governs sociotechnically.
https://doi.org/10.1145/3544548.3580970
The ‘Barnum effect’ is a psychological phenomenon whereby people assign higher quality ratings to personality descriptions developed ‘specially for you’ than to the same descriptions presented as ‘generally true of people.’ This effect suggests that recommender interfaces could elevate the perceived quality of recommendations simply by indicating that they are explicitly personalised. We therefore conducted a crowd-sourced experiment (n=492) that examined the perceived quality of personalised versus non-personalised movie recommendations for good and bad movies – importantly, the actual recommendations were identical, and were merely presented as being either personalised or not. Contrary to the Barnum effect, results showed numerically lower mean quality scores for personalised recommendations, though the difference was not significant. Our findings suggest that Barnum-like effects of personalisation have at most a small influence on perceived quality, and that designers should not rely on this effect to improve user experience (despite online design guidance suggesting the opposite).
https://doi.org/10.1145/3544548.3580656
Recent years have seen growing interest among both researchers and practitioners in user-engaged approaches to algorithm auditing, which directly engage users in detecting problematic behaviors in algorithmic systems. However, we know little about industry practitioners' current practices and challenges around user-engaged auditing, nor what opportunities exist for them to better leverage such approaches in practice. To investigate, we conducted a series of interviews and iterative co-design activities with practitioners who employ user-engaged auditing approaches in their work. Our findings reveal several challenges practitioners face in appropriately recruiting and incentivizing user auditors, scaffolding user audits, and deriving actionable insights from user-engaged audit reports. Furthermore, practitioners shared organizational obstacles to user-engaged auditing, surfacing a complex relationship between practitioners and user auditors. Based on these findings, we discuss opportunities for future HCI research to help realize the potential (and mitigate risks) of user-engaged auditing in industry practice.
https://doi.org/10.1145/3544548.3581026
Artificial intelligence (AI) systems can cause harm to people. This research examines how individuals react to such harm through the lens of blame. Building upon research suggesting that people blame AI systems, we investigated how several factors influence people's reactive attitudes towards machines, designers, and users. The results of three studies (N = 1,153) indicate differences in how blame is attributed to these actors. Whether AI systems were explainable did not impact blame directed at them, their developers, and their users. Considerations about fairness and harmfulness increased blame towards designers and users but had little to no effect on judgments of AI systems. Instead, what determined people's reactive attitudes towards machines was whether people thought blaming them would be a suitable response to algorithmic harm. We discuss implications, such as how future decisions about including AI systems in the social and moral spheres will shape laypeople's reactions to AI-caused harm.
https://doi.org/10.1145/3544548.3580953
The prospect of machine consciousness cultivates controversy across media, academia, and industry. Assessing whether non-experts perceive technologies as conscious, and exploring the consequences of this perception, remain unaddressed challenges in Human-Computer Interaction (HCI). To address them, we surveyed 100 people, exploring their conceptualisations of consciousness and if and how they perceive consciousness in currently available interactive technologies. We show that many people already perceive a degree of consciousness in GPT-3, a voice chat bot, and a robot vacuum cleaner. Within participant responses, we identified dynamic tensions between denial and speculation, thinking and feeling, interaction and experience, control and independence, and rigidity and spontaneity. These tensions can inform future research into perceptions of machine consciousness and the challenges it represents for HCI. With both empirical and theoretical contributions, this paper emphasises the importance of HCI in an era of machine consciousness, real, perceived or denied.
https://doi.org/10.1145/3544548.3581296
Human agency and autonomy have always been fundamental concepts in HCI. New developments, including ubiquitous AI and the growing integration of technologies into our lives, make these issues ever more pressing, as technologies increase their ability to influence our behaviours and values. However, in HCI, understandings of autonomy and agency remain ambiguous. Both concepts are used to describe a wide range of phenomena pertaining to sense of control, material independence, and identity. It is unclear to what degree these understandings are compatible, and how they support the development of research programs and practical interventions. We address this by reviewing 30 years of HCI research on autonomy and agency to identify current understandings, open issues, and future directions. From this analysis, we identify ethical issues and outline key themes to guide future work. We also articulate avenues for advancing clarity and specificity around these concepts, and for coordinating integrative work across different HCI communities.
https://doi.org/10.1145/3544548.3580651