Ontologies in Design: How Imagining a Tree Reveals Possibilities and Assumptions in Large Language Models

Abstract

Amid the recent uptake of Generative AI, sociotechnical scholars and critics have traced a multitude of resulting harms, with analyses largely focused on values and axiology (e.g., bias). While value-based analyses are crucial, we argue that ontologies—concerning what we allow ourselves to think or talk about—are a vital but under-recognized dimension in analyzing these systems. Proposing a need for a practice-based engagement with ontologies, we offer four orientations for considering ontologies in design: pluralism, groundedness, liveliness, and enactment. We share examples of potentialities that are opened up through these orientations across the entire LLM development pipeline by conducting two ontological analyses: examining the responses of four LLM-based chatbots in a prompting exercise, and analyzing the architecture of an LLM-based agent simulation. We conclude by sharing opportunities and limitations of working with ontologies in the design and development of sociotechnical systems.

Authors
Nava Haghighi
Stanford University, Stanford, California, United States
Sunny Yu
Stanford University, Stanford, California, United States
James A. Landay
Stanford University, Stanford, California, United States
Daniela Rosner
University of Washington, Seattle, Washington, United States
DOI

10.1145/3706598.3713633

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713633

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Delving into LLMs

G303
7 presentations
2025-04-29 20:10:00 – 2025-04-29 21:40:00