As the use of LLM chatbots by students and researchers becomes more prevalent, universities are pressed to develop AI strategies. One strategy that many universities pursue is to customize pre-trained LLM-as-a-service (LLMaaS) chatbots. While most studies on LLMaaS chatbots prioritize technical adaptations, these systems are often mainly characterized by user-salient front-end customizations, e.g., interface changes. Yet, no existing studies have examined how users perceive such systems compared to commercial LLM chatbots. In a field study, we investigate how students and employees (N = 526) at a German university perceive and use their institution's customized LLMaaS chatbot compared to ChatGPT. Participants using both systems (n = 116) reported greater trust, higher perceived privacy, and fewer perceived hallucinations with their university's customized LLMaaS chatbot than with ChatGPT. We discuss implications for research on users' trustworthiness assessment processes and offer guidance for the design and deployment of LLMaaS chatbots.
ACM CHI Conference on Human Factors in Computing Systems