Large-scale outdoor mixed reality (MR) art exhibitions distribute curated virtual works across open public spaces, but interpretation rarely scales without turning free exploration into a scripted tour. Through Research through Design, we created Dream-Butterfly, an in-situ conversational AI docent embodied as a small non-human companion that visitors can summon for multilingual, exhibition-grounded explanations. We deployed Dream-Butterfly in a large-scale outdoor MR exhibition on a public university campus in southern China and conducted an in-the-wild between-subjects study (N=24) comparing a primarily human-led tour with an AI-led tour, with staff retained in both conditions for safety. Combining questionnaires and semi-structured interviews, we characterize how shifting the primary explanation channel reshapes explanation access, perceived responsiveness, immersion, and workload, and how visitors negotiate responsibility handoffs among staff, the AI guide, and themselves. We distill transferable design implications for configuring mixed human-AI guiding roles and for embodying conversational agents in mobile, safety-constrained outdoor MR exhibitions.