Achieving a balance between high-fidelity visual quality and low-latency streaming remains a formidable challenge in audio-driven portrait generation. Existing large-scale models often suffer from prohibitive computational costs, while lightweight alternatives typically compromise on holistic facial representations and temporal stability. In this paper, we propose SoulX-FlashHead, a unified 1.3B-parameter framework designed for real-time, infinite-length, and high-fidelity streaming video generation. To address the instability of audio features in streaming scenarios, we introduce Streaming-Aware Spatiotemporal Pre-training equipped with a Temporal Audio Context Cache mechanism, which ensures robust feature extraction from short audio fragments. Furthermore, to mitigate the error accumulation and identity drift inherent in long-sequence autoregressive generation, we propose Oracle-Guided Bidirectional Distillation, leveraging ground-truth motion priors to provide precise physical guidance. We also present VividHead, a large-scale, high-quality dataset containing 782 hours of strictly aligned footage to support robust training. Extensive experiments demonstrate that SoulX-FlashHead achieves state-of-the-art performance on HDTF and VFHQ benchmarks. Notably, our Lite variant achieves an inference speed of 96 FPS on a single NVIDIA RTX 4090, facilitating ultra-fast interaction without sacrificing visual coherence.