AIGC has rapidly expanded from text-to-image generation toward high-quality multimodal synthesis across video and audio. Within this context, joint audio-video generation (JAVG) has emerged as a fundamental task that produces synchronized and semantically aligned sound and vision from textual descriptions. However, compared with advanced commercial models such as Veo3, existing open-source methods still suffer from limitations in generation quality, temporal synchrony, and alignment with human preferences. To bridge this gap, this paper presents JavisDiT++, a concise yet powerful framework for the unified modeling and optimization of JAVG. First, we introduce a modality-specific mixture-of-experts (MS-MoE) design that enables efficient cross-modal interaction while enhancing single-modal generation quality. Second, we propose a temporal-aligned RoPE (TA-RoPE) strategy to achieve explicit, frame-level synchronization between audio and video tokens. In addition, we develop an audio-video direct preference optimization (AV-DPO) method to align model outputs with human preferences across the quality, consistency, and synchrony dimensions. Built upon Wan2.1-1.3B-T2V, our model achieves state-of-the-art performance with only around 1M public training samples, significantly outperforming prior approaches in both qualitative and quantitative evaluations. Comprehensive ablation studies validate the effectiveness of the proposed modules. All code, models, and datasets are released at https://JavisVerse.github.io/JavisDiT2-page.
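To make the TA-RoPE idea concrete, below is a minimal sketch of temporal-aligned rotary position embeddings: audio and video tokens are indexed on a shared wall-clock time axis, so tokens that co-occur in time receive identical rotary phases. This is an illustrative interpretation only; the function names, frame rates, and feature dimensions are hypothetical and may differ from the actual JavisDiT++ implementation.

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE frequencies evaluated at (possibly fractional) time positions."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return positions[:, None] * inv_freq[None, :]  # (num_tokens, dim // 2)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate consecutive feature pairs of x (num_tokens, dim) by the given angles."""
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical token grids: 16 video frames at 8 fps and 100 audio latents at 50 Hz,
# both covering the same 2-second clip.
video_fps, audio_hz = 8.0, 50.0
video_t = torch.arange(16).float() / video_fps   # seconds of each video frame
audio_t = torch.arange(100).float() / audio_hz   # seconds of each audio token

dim = 64
video_tokens = torch.randn(16, dim)
audio_tokens = torch.randn(100, dim)

# Both modalities share one temporal axis, so temporally co-located audio and
# video tokens are given matching rotary phases before cross-modal attention.
video_tokens = apply_rope(video_tokens, rope_angles(video_t, dim))
audio_tokens = apply_rope(audio_tokens, rope_angles(audio_t, dim))
```

Under this reading, frame-level synchronization falls out of the position encoding itself: attention between an audio token and a video token at the same timestamp behaves like attention between tokens at the same position in standard RoPE, without any learned alignment module.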