We explore the use of Residual Vector Quantization (RVQ) for high-fidelity generation in vector-quantized generative models. This quantization technique achieves higher data fidelity by employing deeper stacks of tokens. However, increasing the number of tokens in generative models leads to slower inference speeds. To address this, we introduce ResGen, an efficient RVQ-based discrete diffusion model that generates high-fidelity samples without compromising sampling speed. Our key idea is to directly predict the vector embedding of collective tokens rather than individual ones. Moreover, we demonstrate that the proposed token masking and multi-token prediction method can be formulated within a principled probabilistic framework using a discrete diffusion process and variational inference. We validate the efficacy and generalizability of the proposed method on two challenging tasks across different modalities: conditional image generation on ImageNet 256×256 and zero-shot text-to-speech synthesis. Experimental results demonstrate that ResGen outperforms autoregressive counterparts on both tasks, delivering superior performance without compromising sampling speed. Furthermore, as we scale the depth of RVQ, our generative models exhibit enhanced generation fidelity or faster sampling speeds compared to similarly sized baseline models. The project page can be found at https://resgen-genai.github.io