Generative AI assistants are being rapidly adopted in universities, supporting students in coursework and faculty in academic tasks. To address privacy concerns, some institutions have introduced institutional AI assistants, typically wrappers around commercial models (e.g., ChatGPT) with added governance and data protections. However, university-affiliated users appear to rely more heavily on commercial tools (e.g., ChatGPT, Gemini). We conducted a survey (n=260) at one U.S. university to examine preferences, usage scenarios, and perceptions of trust, privacy, and user experience with institutional and commercial AI. Participants trusted institutional tools more and considered them more privacy-protective; nevertheless, commercial tools were often favored for writing, programming, and learning because of their features and utility. These findings reveal a trade-off between privacy and trust on one hand and utility on the other, highlighting complementary adoption patterns and design opportunities for both institutional and commercial AI in higher education.
ACM CHI Conference on Human Factors in Computing Systems