Critical Reflections on AI

Conference Name
CHI 2026
AI as We Describe It: How Large Language Models and Their Applications in Health are Represented Across Channels of Public Discourse
Abstract

Representation shapes public attitudes and behaviors. With the recent advances and rapid adoption of LLMs, the way these systems are introduced will negotiate societal expectations for their role in high-stakes domains like health. Yet it remains unclear whether current narratives present a balanced view. We analyzed five prominent discourse channels (news, research press, YouTube, TikTok, and Reddit) over a two-year period on lexical style, informational content, and symbolic representation. Discussions were generally positive and episodic, with positivity increasing over time. Risk communication was not thorough and was often reduced to information quality incidents, while explanations of LLMs' generative nature were rare. Compared with professional outlets, TikTok and Reddit highlighted wellbeing applications and showed greater variation in tone and anthropomorphism but little attention to risks. We discuss implications for public discourse as a diagnostic tool in identifying literacy and governance gaps, and for communication and design strategies to support more informed LLM engagement.

Authors
Jiawei Zhou
Georgia Institute of Technology, Atlanta, Georgia, United States
Lei Zhang
Georgia Institute of Technology, Atlanta, Georgia, United States
Mei Li
Georgia Institute of Technology, Atlanta, Georgia, United States
Benjamin D. Horne
University of Tennessee Knoxville, Knoxville, Tennessee, United States
Munmun De Choudhury
Georgia Institute of Technology, Atlanta, Georgia, United States
LLMs Homogenize Values in Constructive Arguments on Value-Laden Topics
Abstract

Large language models (LLMs) are increasingly used to promote prosocial and constructive discourse online. Yet little is known about how these models negotiate and shape underlying values when reframing people's arguments on value-laden topics. We conducted experiments with 465 participants from India and the United States, who wrote comments on homophobic and Islamophobic threads, and reviewed human-written and LLM-rewritten constructive versions of these comments. Our analysis shows that LLMs systematically diminish Conservative values while elevating prosocial values such as Benevolence and Universalism. When these comments were read by others, participants opposing same-sex marriage or Islam found human-written comments more aligned with their values, whereas those supportive of these communities found LLM-rewritten versions more aligned with theirs. These findings suggest that value homogenization in LLM-mediated prosocial discourse runs the risk of marginalizing conservative viewpoints on value-laden topics and may inadvertently shape the dynamics of online discourse.

Authors
Farhana Shahid
Cornell University, Ithaca, New York, United States
Stella Zhang
Cornell University, Ithaca, New York, United States
Aditya Vashistha
Cornell University, Ithaca, New York, United States
Investigating Writing Professionals' Relationships with GenAI: How Combined Perceptions of Rivalry and Collaboration Shape Work Practices and Outcomes
Abstract

This study investigates how professional writers' complex relationship with GenAI shapes their work practices and outcomes. Through a cross-sectional survey of writing professionals (n=403) in diverse roles, we show that collaboration and rivalry orientations are associated with differences in work practices and outcomes. Rivalry is primarily associated with relational crafting and skill maintenance. Collaboration is primarily associated with task crafting, productivity, and satisfaction, at the cost of long-term skill deterioration. Combining the orientations (high rivalry and high collaboration) reconciles these differences while boosting the association with the outcomes. Our findings argue for a balanced approach in which high levels of both rivalry and collaboration are essential to shape work practices and generate outcomes aimed at the long-term success of the job. We present key design implications on how to increase friction (rivalry) and reduce over-reliance (collaboration) to achieve a more balanced relationship with GenAI.

Authors
Rama Adithya Varanasi
New York University, New York City, New York, United States
Oded Nov
New York University, New York, New York, United States
Batia Mishan Wiesenfeld
New York University, New York, New York, United States
When Generative AI Is Intimate, Sexy, and Violent: Examining Not-Safe-For-Work (NSFW) Chatbots on FlowGPT
Abstract

User-created chatbots powered by generative AI offer new ways to share and interact with Not-Safe-For-Work (NSFW) content. However, little is known about the characteristics of these GenAI-based chatbots and their user interactions. Drawing on the functional theory of NSFW on social media, this study analyzes 376 NSFW chatbots and 307 public conversation sessions on FlowGPT. Findings identify four chatbot types: roleplay characters, story generators, image generators, and do-anything-now bots. AI characters portraying fantasy personas and enabling hangout-style interactions are the most common, often using explicit avatar images to invite engagement. Sexual, violent, and insulting content appears in both user prompts and chatbot outputs, with some chatbots generating explicit material even when users do not write erotic prompts. In sum, the NSFW experience on FlowGPT can be understood as a combination of virtual intimacy, sexual delusion, violent thought expression, and unsafe content acquisition. We conclude with implications for chatbot design, creator support, user safety, and content moderation.

Authors
Xian Li
Southern University of Science and Technology, Shenzhen, Guangdong, China
Yuanning Han
The New School, New York, New York, United States
Di Liu
Southern University of Science and Technology, Shenzhen, China
Pengcheng An
Southern University of Science and Technology, Shenzhen, China
Shuo Niu
Clark University, Worcester, Massachusetts, United States
Certified AI System = Trustworthy? Exploring Expert and Lay User Perceptions and Needs Regarding AI Certification
Abstract

AI certification has emerged as a promising mechanism to enhance transparency, accountability, and public trust. However, end-user perspectives remain largely unexplored. This study investigates two groups with differing AI expertise. Through qualitative interviews with 30 participants (15 experts, 15 lay users), we examined how AI certification influences trust, who should conduct it, transparency needs, post-certification monitoring, and certification fraud. Results reveal key differences between the two groups. Lay users perceive AI certification more positively than experts. Both groups prefer independent certifiers, with experts being more open to certification by private companies. Experts favor post-certification monitoring tied to system updates, whereas lay users prefer annual checks. Both groups value transparency, but the specific details they require differ. Regarding fraudulent AI certification, experts emphasize technical safeguards, while lay users focus on legal enforcement. The study discusses the implications of its findings and offers several recommendations for improving AI certification schemes.

Authors
Sarah Abdelwahab Gaballah
Ruhr University Bochum, Bochum, Germany
Nur Efsan Cetinkaya
University of Duisburg-Essen, Essen, Germany
Magdalena Wischnewski
University of Duisburg-Essen, Essen, Germany
Martina Sasse
Ruhr University Bochum, Bochum, Germany
Human-AI Interaction for Time-Critical Sensemaking in Missing Persons Investigations
Abstract

Every year, an estimated 200,000 people go missing in the UK alone. Missing persons investigations involve challenging, time-critical sensemaking tasks based on fragmented data sources. This paper describes a mixed-methods participatory study evaluating data science and AI-driven techniques (summarisation, fact extraction, and data visualisation) for supporting these investigations as part of a human-centered workflow. A series of human-AI interfaces were iteratively designed and tested with search officers and domain experts at Police Scotland. Based on our findings, we describe: (1) user and information needs for missing persons investigations; (2) insights on the benefits and challenges of applying LLM-based techniques in high-risk contexts; and (3) lessons for integrating AI for sensemaking tasks in policing more broadly. We highlight that in high-stakes contexts, where accuracy and context-sensitivity are paramount, AI techniques must be balanced with other approaches and designed in close partnership with end-users.

Authors
Pola Zuzanna Labedzka
University of Cambridge, Cambridge, United Kingdom
Dorian Peters
Imperial College London, London, United Kingdom
John J. Dudley
University of Cambridge, Cambridge, United Kingdom
Miri Zilka
University of Cambridge, Cambridge, United Kingdom
When Life Gives You AI, Will You Turn It Into A Market for Lemons? Understanding How Information Asymmetries About AI System Capabilities Affect Market Outcomes and Adoption
Abstract

AI consumer markets are characterized by severe buyer-supplier information asymmetries. Complex AI systems can appear highly accurate while making costly errors or embedding hidden defects. While there have been regulatory efforts surrounding different forms of disclosure, large information gaps remain. This paper provides the first experimental evidence on the important role of information asymmetries and disclosure designs in shaping user adoption of AI systems. We systematically vary the density of low-quality AI systems and the depth of disclosure requirements in a simulated AI product market to gauge how people react to the risk of accidentally relying on a low-quality AI system. We then compare participants' choices to a rational Bayesian model, analyzing the degree to which partial information disclosure can improve AI adoption. Our results underscore the deleterious effects of information asymmetries on AI adoption, but also highlight the potential of partial disclosure designs to improve the overall efficiency of human decision-making.

Authors
Alexander Erlei
University of Goettingen, Goettingen, Germany
Federico Maria Cau
University of Cagliari, Cagliari, Italy
Radoslav Georgiev
Delft University of Technology, Delft, Netherlands
Sagar Chethan Kumar
Columbia University, New York, New York, United States
Kilian Bizer
University of Goettingen, Goettingen, Germany
Ujwal Gadiraju
Delft University of Technology, Delft, Netherlands