Disfluencies -- such as "um," "uh," interjections, parentheticals, and edited statements -- remain a persistent challenge for speech-driven systems, degrading accuracy in command interpretation, summarization, and conversational agents. We introduce DRES (Disfluency Removal Evaluation Suite), a controlled text-level benchmark that establishes a reproducible semantic upper bound for this task. DRES builds on human-annotated Switchboard transcripts, isolating disfluency removal from ASR errors and acoustic variability. We systematically evaluate proprietary and open-source LLMs across scales, prompting strategies, and architectures. Our results reveal that (i) simple segmentation consistently improves performance, even for long-context models; (ii) reasoning-oriented models tend to over-delete fluent tokens; and (iii) fine-tuning achieves near state-of-the-art precision and recall but harms generalization. We further catalog LLM-specific error modes and offer nine practical recommendations (R1-R9) for deploying disfluency removal in speech-driven pipelines. DRES provides a reproducible, model-agnostic foundation for advancing robust spoken-language systems.