Full-Duplex Speech Language Models (FD-SLMs) enable real-time, overlapping conversational interactions, offering a more dynamic user experience than traditional half-duplex models. However, existing benchmarks primarily evaluate single-round interactions, neglecting the complexities of multi-round communication. Evaluating FD-SLMs in multi-round settings poses significant challenges, including blurred turn boundaries in communication and context inconsistency during model inference. Moreover, existing benchmarks often focus solely on conversational features, overlooking other critical aspects. To address these gaps, we introduce MTR-DuplexBench, a novel benchmark designed for comprehensive multi-round evaluation of FD-SLMs. MTR-DuplexBench not only segments continuous full-duplex dialogues into discrete turns for turn-by-turn assessment but also covers multiple evaluation aspects, including conversational features, dialogue quality, instruction following, and safety. Experimental results reveal that current FD-SLMs struggle to maintain consistent performance across rounds and evaluation dimensions, highlighting the necessity and effectiveness of our benchmark. The benchmark and code will be released in the future.