Conversational Spoken Language Models (SLMs) are emerging as a promising paradigm for real-time speech interaction. However, their capacity for temporal dynamics, including the ability to manage timing, tempo, and simultaneous speaking, remains a critical and largely unevaluated challenge for conversational fluency. To address this gap, we introduce the Game-Time Benchmark, a framework for systematically assessing these temporal capabilities. Inspired by how humans learn a language through language activities, Game-Time consists of basic instruction-following tasks and advanced tasks with temporal constraints, such as tempo adherence and synchronized responses. Our evaluation of diverse SLM architectures reveals a clear performance disparity: while state-of-the-art models handle basic tasks well, many contemporary systems still struggle with fundamental instruction-following. More critically, nearly all models degrade substantially under temporal constraints, exposing persistent weaknesses in time awareness and full-duplex interaction. The Game-Time Benchmark provides a foundation for guiding future research toward more temporally-aware conversational AI. Demos and datasets are available on our project website: https://ga642381.github.io/Game-Time.