As the use of LLM chatbots by students and researchers becomes more prevalent, universities are pressed to develop AI strategies. One strategy many universities pursue is to customize pre-trained LLMs offered as a service (LLMaaS). While most studies on LLMaaS chatbots prioritize technical adaptations, we focus on the psychological effects of user-salient customizations, such as interface changes. We assume that such customizations influence users' perception of the system and are therefore important in guiding safe and appropriate use. In a field study, we examine how students and employees (N = 526) at a German university perceive and use their institution's customized LLMaaS chatbot compared to ChatGPT. Participants who used both systems (n = 116) reported greater trust, higher perceived privacy, and fewer experienced hallucinations with their university's customized LLMaaS chatbot than with ChatGPT. We discuss theoretical implications for research on calibrated trust and offer guidance on the design and deployment of LLMaaS chatbots.