Large Audio Language Models (LALMs) have garnered significant research interest. Despite being built upon text-based large language models (LLMs), LALMs frequently exhibit a degradation in knowledge and reasoning capabilities. We hypothesize that this limitation stems from the failure of current training paradigms to effectively bridge the acoustic-semantic gap within the feature representation space. To address this challenge, we propose CORD, a unified alignment framework that performs online cross-modal self-distillation. Specifically, it aligns audio-conditioned reasoning with its text-conditioned counterpart within a unified model. Leveraging the text modality as an internal teacher, CORD performs multi-granularity alignment throughout the audio rollout process. At the token level, it employs on-policy reverse KL divergence with importance-aware weighting to prioritize early and semantically critical tokens. At the sequence level, CORD introduces a judge-based global reward to optimize complete reasoning trajectories via Group Relative Policy Optimization (GRPO). Empirical results across multiple benchmarks demonstrate that CORD consistently enhances audio-conditioned reasoning and substantially bridges the audio-text performance gap with only 80k synthetic training samples, validating the efficacy and data efficiency of our on-policy, multi-level cross-modal alignment approach.
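The token-level objective can be illustrated with a small sketch: a reverse KL divergence between the audio-conditioned (student) and text-conditioned (teacher) next-token distributions, combined with a per-position weight that emphasizes early tokens. The exponential-decay weighting below is one simple illustrative choice, not necessarily the paper's exact importance-aware scheme; all function names and the `decay` parameter are assumptions for this sketch.

```python
import math

def reverse_kl(student_probs, teacher_probs):
    # Reverse KL D_KL(student || teacher) = sum_i p_s(i) * log(p_s(i) / p_t(i));
    # mode-seeking, so the student is penalized for mass the teacher rejects.
    return sum(ps * math.log(ps / pt)
               for ps, pt in zip(student_probs, teacher_probs) if ps > 0)

def token_alignment_loss(student_dists, teacher_dists, decay=0.9):
    # Hypothetical importance weighting: earlier rollout positions get larger
    # weight via exponential decay (the paper's exact weighting may differ).
    weights = [decay ** t for t in range(len(student_dists))]
    total = sum(weights)
    loss = sum(w * reverse_kl(ps, pt)
               for w, ps, pt in zip(weights, student_dists, teacher_dists))
    return loss / total
```

For two token positions over a 3-way vocabulary, `token_alignment_loss([[0.7, 0.2, 0.1], [0.4, 0.4, 0.2]], [[0.6, 0.3, 0.1], [0.5, 0.3, 0.2]])` returns a small non-negative value, and it is exactly zero when the two sets of distributions coincide.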
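At the sequence level, the judge-based reward is consumed by GRPO, whose key step is normalizing each rollout's reward against the statistics of its sampled group rather than a learned value baseline. A minimal sketch of that normalization, assuming one scalar reward per rollout and population standard deviation (implementation details beyond the abstract are assumptions):

```python
import statistics

def grpo_advantages(rewards):
    # GRPO-style group-relative advantage: center each rollout's reward on the
    # group mean and scale by the group std, so no critic network is needed.
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    if sigma == 0:
        # All rollouts scored identically: no preference signal in this group.
        return [0.0] * len(rewards)
    return [(r - mu) / sigma for r in rewards]
```

For a group of rollouts scored `[1.0, 2.0, 3.0]` by the judge, the advantages are roughly `[-1.22, 0.0, 1.22]`: the best trajectory is reinforced, the worst is suppressed, and the advantages always sum to zero within the group.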