Existing mainstream video customization methods focus on generating identity-consistent videos from given reference images and textual prompts. Benefiting from the rapid advancement of joint audio-video generation, this paper proposes a more compelling new task: synchronized audio-video customization, which aims to customize video identity and audio timbre simultaneously. Specifically, given a reference image $I^{r}$ and a reference audio clip $A^{r}$, this novel task requires generating videos that preserve the identity of the reference image while imitating the timbre of the reference audio, with the spoken content freely specifiable through user-provided textual prompts. To this end, we propose OmniCustom, a powerful DiT-based audio-video customization framework that synthesizes videos conditioned on the reference image identity, the reference audio timbre, and the text prompt all at once, in a zero-shot manner. Our framework is built on three key contributions. First, identity and timbre control are achieved through separate reference-identity and reference-audio LoRA modules that operate within the self-attention layers of the base audio-video generation model. Second, we introduce a contrastive learning objective alongside the standard flow matching objective: it treats predicted flows conditioned on the reference inputs as positive examples and flows predicted without reference conditions as negative examples, thereby strengthening the model's ability to preserve identity and timbre. Third, we train OmniCustom on a large-scale, high-quality audio-visual human dataset that we construct. Extensive experiments demonstrate that OmniCustom outperforms existing methods in generating audio-video content with consistent identity and faithful timbre.
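The first contribution, separate identity and audio LoRA modules acting through self-attention, can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical rendering of that idea rather than the authors' implementation: the class names, the rank and alpha defaults, and the placement of the adapters on the attention input (instead of on the individual query/key/value projections) are all assumptions.

```python
import torch
import torch.nn as nn

class LoRA(nn.Module):
    """Low-rank adapter: x -> (alpha / r) * B(A(x)), initialized to zero."""
    def __init__(self, dim: int, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # A: dim -> r
        self.up = nn.Linear(rank, dim, bias=False)    # B: r -> dim
        nn.init.zeros_(self.up.weight)                # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x)) * self.scale

class SelfAttnWithRefLoRA(nn.Module):
    """Self-attention block carrying two independent LoRA branches:
    one driven by the reference image (identity), one by the reference
    audio (timbre). Either branch can be disabled at inference."""
    def __init__(self, dim: int, num_heads: int = 8, rank: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.id_lora = LoRA(dim, rank)     # trained with reference-image tokens
        self.audio_lora = LoRA(dim, rank)  # trained with reference-audio tokens

    def forward(self, x, use_id: bool = True, use_audio: bool = True):
        h = x
        if use_id:
            h = h + self.id_lora(x)        # inject identity condition
        if use_audio:
            h = h + self.audio_lora(x)     # inject timbre condition
        out, _ = self.attn(h, h, h)
        return out
```

Toggling an adapter off yields the model's prediction without that reference condition, which is also one way the negative flows for the contrastive objective described next could be obtained.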
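The second contribution combines flow matching with a contrastive term over predicted flows. The abstract does not state the exact formula, so the following is one plausible instantiation under standard rectified-flow assumptions, with a hypothetical margin $m$ and weight $\lambda$: let $x_t = (1-t)\,x_0 + t\,x_1$ with target flow $u_t = x_1 - x_0$, let $c^{r} = \{I^{r}, A^{r}\}$ denote the reference conditions, and let $v_\theta$ be the predicted flow.

$$\mathcal{L}_{\mathrm{FM}} = \mathbb{E}_{t,\,x_0,\,x_1}\left\| v_\theta(x_t, t, c^{r}) - u_t \right\|_2^2,$$

$$\mathcal{L}_{\mathrm{con}} = \mathbb{E}_{t,\,x_0,\,x_1}\left[ \max\!\left(0,\ \left\| v_\theta(x_t, t, c^{r}) - u_t \right\|_2^2 - \left\| v_\theta(x_t, t, \varnothing) - u_t \right\|_2^2 + m \right) \right],$$

$$\mathcal{L} = \mathcal{L}_{\mathrm{FM}} + \lambda\, \mathcal{L}_{\mathrm{con}}.$$

Minimizing $\mathcal{L}_{\mathrm{con}}$ pulls the reference-conditioned (positive) flow closer to the target than the unconditioned (negative) flow by at least the margin $m$, matching the abstract's description of how identity and timbre preservation are reinforced; the paper's actual contrastive formulation may differ.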