AI and My Values: User Perceptions of LLMs’ Ability to Extract, Embody, and Explain Human Values from Casual Conversations

Abstract

Does AI understand human values? While this remains an open philosophical question, we take a pragmatic stance by introducing VAPT, the Value-Alignment Perception Toolkit, for studying how LLMs reflect people's values and how people judge those reflections. Twenty participants texted a chatbot over a month, then completed a two-hour interview with our toolkit, evaluating AI's ability to extract (pull details regarding), embody (make decisions guided by), and explain (provide proof of) their values. Thirteen participants ultimately left our study convinced that AI can understand human values. In light of this, we warn about "weaponized empathy": a design pattern that may arise in interactions with value-aware yet welfare-misaligned conversational agents. VAPT offers a new way to evaluate value alignment in AI systems. We also offer design implications for evaluating and responsibly building AI systems with transparency and safeguards as AI capabilities grow more inscrutable, ubiquitous, and posthuman.

Award
Honorable Mention
Authors
Bhada Yun
ETH Zürich, Zürich, Switzerland
Renn Su
Stanford University, Stanford, California, United States
April Yi Wang
ETH Zürich, Zürich, Switzerland

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Role Play, Creativity and AI

P1 - Room 129
7 presentations
2026-04-14, 18:00–19:30