In this work, we present Covo-Audio, a 7B-parameter end-to-end large audio-language model (LALM) that directly processes continuous audio inputs and generates audio outputs within a single unified architecture. Through large-scale curated pretraining and targeted post-training, Covo-Audio achieves state-of-the-art or competitive performance among models of comparable scale across a broad spectrum of tasks, including speech-text modeling, spoken dialogue, speech understanding, audio understanding, and full-duplex voice interaction. Extensive evaluations show that the pretrained foundation model exhibits strong speech-text comprehension and semantic reasoning on multiple benchmarks, outperforming representative open-source models of comparable scale. Covo-Audio-Chat, the dialogue-oriented variant, demonstrates strong spoken conversational abilities, including understanding, contextual reasoning, instruction following, and generating contextually appropriate and empathetic responses, validating its applicability to real-world conversational assistant scenarios. Covo-Audio-Chat-FD, the full-duplex variant, substantially improves both spoken dialogue capability and full-duplex interaction behavior, demonstrating robustness in practical settings. To mitigate the high cost of deploying end-to-end LALMs in natural conversational systems, we propose an intelligence-speaker decoupling strategy that separates dialogue intelligence from voice rendering, enabling flexible voice customization with minimal text-to-speech (TTS) data while preserving dialogue performance. Overall, our results highlight the potential of 7B-scale models to integrate sophisticated audio intelligence with high-level semantic reasoning, and suggest a scalable path toward more capable and versatile LALMs.