Collaboration is at the heart of many complex tasks, and mixed reality (MR) offers a powerful new medium to support it. Understanding how teams coordinate in immersive environments is critical for designing effective MR applications that support collaborative work. However, existing methods rely on external observation systems and manual annotation, and lack deployable solutions for capturing temporal collaboration dynamics. We present MURMR, a system with two complementary modules that passively analyze multimodal interaction data from commodity MR headsets. Our structural analysis module automatically constructs sociograms that reveal group organization and roles, while our temporal analysis module performs unsupervised clustering to identify moment-to-moment dyadic behavior patterns. Through a 48-participant study with egocentric video validation, we demonstrate that the structural module captures stable interaction patterns while the temporal module reveals substantial behavioral variability that session-level approaches miss. This dual-module architecture advances collaboration research by establishing that structural and temporal dynamics require separate analytical approaches, enabling both real-time group monitoring and detailed behavioral understanding in immersive collaborative environments.
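To make the structural module concrete, the sketch below shows one plausible way a sociogram could be derived from passively logged dyadic interaction events. This is an illustrative assumption, not the paper's implementation: the per-dyad event counts (`mutual_gaze`, `speech_overlap`), the participant labels, and the weighting scheme are all hypothetical, and weighted degree is used only as a rough proxy for group roles.

```python
# Hypothetical sketch: building a sociogram from aggregated dyadic
# interaction counts logged by commodity MR headsets.
import networkx as nx

# Hypothetical per-dyad event counts for a three-person session.
dyad_events = {
    ("P1", "P2"): {"mutual_gaze": 34, "speech_overlap": 12},
    ("P1", "P3"): {"mutual_gaze": 8,  "speech_overlap": 3},
    ("P2", "P3"): {"mutual_gaze": 21, "speech_overlap": 9},
}

G = nx.Graph()
for (a, b), counts in dyad_events.items():
    # Edge weight: simple sum of interaction events. A deployed system
    # would normalize by session length and per-modality reliability.
    G.add_edge(a, b, weight=counts["mutual_gaze"] + counts["speech_overlap"])

# Weighted degree as a crude role indicator: a participant with high
# weighted degree may be acting as a hub or coordinator in the group.
centrality = {n: sum(d["weight"] for _, _, d in G.edges(n, data=True))
              for n in G.nodes}
print(centrality)  # e.g. {'P1': 57, 'P2': 76, 'P3': 41}
```

Because the graph is built from running event counts, such a structure can be updated incrementally during a session, which is what makes real-time group monitoring of the kind described above plausible.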
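For the temporal module, the following sketch illustrates what unsupervised clustering of moment-to-moment dyadic behavior might look like under stated assumptions: the window length (10 s), the feature set (interpersonal distance, mutual-gaze ratio, speech-overlap ratio), the random data, and the choice of k-means with k=4 are all hypothetical stand-ins, not the paper's method.

```python
# Hypothetical sketch: clustering fixed-length windows of dyadic features
# to surface recurring moment-to-moment behavior states.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical windowed features, one row per 10 s dyad window:
# [mean interpersonal distance (m), mutual-gaze ratio, speech-overlap ratio]
windows = rng.random((200, 3))

# Standardize so no single modality dominates the distance metric.
X = StandardScaler().fit_transform(windows)

# k is chosen arbitrarily here; in practice it would be selected with a
# criterion such as the silhouette score.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Each window now carries a behavior-state label; transitions between
# labels over time expose the variability that session-level averages miss.
labels = kmeans.labels_
print(labels[:20])
```

The key design point this illustrates is granularity: because each short window receives its own label, the analysis can recover within-session behavioral shifts that a single session-level summary would collapse away.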