Large-scale outdoor mixed reality (MR) art exhibitions distribute curated virtual works across open public spaces, but interpretation rarely scales without turning exploration into a scripted tour. Through Research-through-Design, we created Dream-Butterfly, an in-situ conversational AI docent embodied as a small non-human companion that visitors summon for multilingual, exhibition-grounded explanations. We deployed Dream-Butterfly in a large-scale outdoor MR exhibition on a public university campus in southern China and conducted an in-the-wild between-subjects study (N=24) comparing a primarily human-led tour with an AI-led tour, keeping staff present for safety in both conditions. Combining questionnaires and semi-structured interviews, we characterize how shifting the primary explanation channel reshapes explanation access, perceived responsiveness, immersion, and workload, and how visitors negotiate responsibility handoffs among staff, the AI guide, and themselves. We distill transferable design implications for configuring mixed human-AI guiding roles and for embodying conversational agents in mobile, safety-constrained outdoor MR exhibitions.