Joint audio-video generation aims to synthesize synchronized multisensory content, yet current unified models struggle with fine-grained acoustic control, particularly for identity-preserving speech. Existing approaches either suffer from temporal misalignment due to cascaded generation or lack the ability to perform zero-shot voice cloning within a joint synthesis framework. In this work, we present MM-Sonate, a multimodal flow-matching framework that unifies controllable audio-video joint generation with zero-shot voice cloning. Unlike prior works that rely on coarse semantic descriptions, MM-Sonate uses a unified instruction-phoneme input to enforce strict linguistic and temporal alignment. To enable zero-shot voice cloning, we introduce a timbre injection mechanism that decouples speaker identity from linguistic content. Furthermore, to address the limitations of standard classifier-free guidance in multimodal settings, we propose a noise-based negative conditioning strategy that leverages natural noise priors to substantially enhance acoustic fidelity. Empirical evaluations demonstrate that MM-Sonate establishes new state-of-the-art performance on joint generation benchmarks, significantly outperforming baselines in lip synchronization and speech intelligibility while achieving voice cloning fidelity comparable to that of specialized text-to-speech (TTS) systems.
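To make the noise-based negative conditioning concrete, a minimal sketch in generic flow-matching notation follows; the symbols ($v_\theta$, $x_t$, $c$, $c_{\text{noise}}$, guidance scale $w$) are our illustrative assumptions, not definitions from the paper. Standard classifier-free guidance extrapolates away from an unconditional (null) prediction, $\hat{v} = v_\theta(x_t, \varnothing) + w\,\big(v_\theta(x_t, c) - v_\theta(x_t, \varnothing)\big)$; the noise-based variant would instead substitute a negative condition $c_{\text{noise}}$ constructed from natural noise priors:

$$
\hat{v}_\theta(x_t, c) = v_\theta(x_t, c_{\text{noise}}) + w\,\big(v_\theta(x_t, c) - v_\theta(x_t, c_{\text{noise}})\big),
$$

where $v_\theta$ is the learned velocity field, $x_t$ the noisy latent at flow time $t$, and $c$ the instruction-phoneme condition. Intuitively, steering away from a natural-noise negative pushes samples away from noisy, low-fidelity acoustics rather than merely away from the unconditional distribution.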