"Here the GPT made a choice, and every choice can be biased": How Students Critically Engage with LLMs through End-User Auditing Activity

Abstract

Despite recognizing that Large Language Models (LLMs) can generate inaccurate or unacceptable responses, universities are increasingly making such models available to their students. Existing university policies defer the responsibility of checking for correctness and appropriateness of LLM responses to students and assume that they will have the required knowledge and skills to do so on their own. In this work, we conducted a series of user studies with students (N=47) from a large North American public research university to understand if and how they critically engage with LLMs. Our participants evaluated an LLM provided by the university in a quasi-experimental setup; first by themselves, and then with a scaffolded design probe that guided them through an end-user auditing exercise. Qualitative analysis of participant think-aloud and LLM interaction data showed that students without basic AI literacy skills struggle to conceptualize and evaluate LLM biases on their own. However, they transition to focused thinking and purposeful interactions when provided with structured guidance. We highlight areas where current university policies may fall short and offer policy and design recommendations to better support students.

Authors
Snehal Prabhudesai
University of Michigan, Ann Arbor, Michigan, United States
Ananya Prashant Kasi
University of Michigan, Ann Arbor, Michigan, United States
Anmol Mansingh
University of Michigan, Ann Arbor, Michigan, United States
Anindya Das Antar
University of Michigan, Ann Arbor, Michigan, United States
Hua Shen
University of Michigan, Ann Arbor, Michigan, United States
Nikola Banovic
University of Michigan, Ann Arbor, Michigan, United States
DOI

10.1145/3706598.3713714

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713714

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Tech and AI Literacy

G416+G417
7 presentations
2025-04-29 18:00:00
2025-04-29 19:30:00