Privacy and Trust vs. Utility: Adoption of Commercial vs. Institutional AI Assistants Among University Users

Abstract

Generative AI assistants are being rapidly adopted in universities, supporting students in coursework and faculty in academic tasks. To address privacy concerns, some institutions have introduced institutional AI assistants, typically wrappers around commercial models (e.g., ChatGPT) with added governance and data protections. However, university-affiliated users appear to rely more on commercial tools (e.g., ChatGPT, Gemini). We conducted a survey (n=260) at one U.S. university to examine preferences, usage scenarios, and perceptions of trust, privacy, and experience with institutional and commercial AI. Participants trusted institutional tools more and considered them more privacy protective; nevertheless, commercial tools were often favored for writing, programming, and learning because of their features and utility. Findings reveal a trade-off between privacy and trust versus utility, highlighting complementary adoption patterns and design opportunities for both institutional and commercial AI in higher education.

Authors
Yuting Yang
University of Michigan, Ann Arbor, Michigan, United States
Zixin Wang
University of Michigan, Ann Arbor, Michigan, United States
Rongjun Ma
Aalto University, Espoo, Finland
Florian Schaub
University of Michigan, Ann Arbor, Michigan, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Privacy Policies

P1 - Room 133
7 presentations
2026-04-14, 18:00–19:30