This paper presents a comparative study of context management strategies for end-to-end Spoken Dialog State Tracking using Speech-LLMs. We systematically evaluate traditional multimodal context (combining text history and spoken current turn), full spoken history, and compressed spoken history approaches. Our experiments on the SpokenWOZ corpus demonstrate that providing the full spoken conversation as input yields the highest performance among models of similar size, significantly surpassing prior methods. Furthermore, we show that attention-pooling-based compression of the spoken history offers a strong trade-off, maintaining competitive accuracy with reduced context size. Detailed analysis confirms that improvements stem from more effective context utilization.
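The attention-pooling compression of the spoken history can be sketched as follows. This is a minimal illustrative NumPy version, not the paper's actual architecture: the number of pooling queries `k`, the embedding size `d`, and the random stand-in features are all assumptions; in practice the queries would be learned parameters attending over speech-encoder outputs.

```python
import numpy as np

def attention_pool(frames: np.ndarray, queries: np.ndarray) -> np.ndarray:
    """Compress a (T, d) sequence of frame embeddings into (k, d) summary
    vectors via scaled dot-product attention against k pooling queries."""
    d = frames.shape[1]
    scores = queries @ frames.T / np.sqrt(d)       # (k, T) similarity scores
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1
    return weights @ frames                        # (k, d) weighted averages

rng = np.random.default_rng(0)
T, d, k = 50, 16, 4               # illustrative sizes: 50 frames pooled into 4 slots
frames = rng.normal(size=(T, d))  # stand-in for speech-encoder outputs of the history
queries = rng.normal(size=(k, d)) # stand-in for learned pooling queries
pooled = attention_pool(frames, queries)
```

The compression ratio here is T/k (50 frames reduced to 4 vectors); each output is a convex combination of the input frames, so the pooled context stays in the encoder's embedding space while shrinking the sequence the LLM must attend over.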