Generative models with discrete latent representations have recently demonstrated an impressive ability to learn complex high-dimensional data distributions. However, their performance relies on a long sequence of tokens per instance and a large number of codebook entries, resulting in long sampling times and considerable computation to fit the categorical posterior. To address these issues, we propose the Masked Vector Quantization (MVQ) framework, which increases the representational capacity of each code vector by learning mask configurations via a stochastic winner-takes-all training regime called Multiple Hypothesis Dropout (MH-Dropout). On ImageNet 64$\times$64, MVQ reduces FID in existing vector quantization architectures by up to $68\%$ at 2 tokens per instance and $57\%$ at 5 tokens. These improvements widen as the number of codebook entries is reduced, allowing for a $7\textit{--}45\times$ speed-up in token sampling during inference. As an additional benefit, we find that smaller latent spaces lead MVQ to identify transferable visual representations, multiple of which can be smoothly combined.
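To make the stochastic winner-takes-all idea concrete, the following is a minimal PyTorch sketch of one MH-Dropout-style training step: several binary mask hypotheses are sampled for a quantized code vector, each masked variant is decoded, and only the hypothesis with the lowest reconstruction loss receives gradients. The function name, Bernoulli masking, MSE loss, and parameters such as `num_hypotheses` and `drop_prob` are illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def mh_dropout_winner(code_vec, target, decoder, num_hypotheses=8, drop_prob=0.5):
    """Stochastic winner-takes-all over sampled mask hypotheses (sketch).

    Samples several binary masks over one quantized code vector, decodes
    each masked variant, and keeps only the hypothesis whose reconstruction
    loss is lowest, so gradients flow through the winner alone.
    """
    losses, masked = [], []
    for _ in range(num_hypotheses):
        # Hypothetical Bernoulli mask over the code vector's dimensions.
        mask = (torch.rand_like(code_vec) > drop_prob).float()
        z = code_vec * mask
        recon = decoder(z)
        losses.append(F.mse_loss(recon, target))
        masked.append(z)
    losses = torch.stack(losses)
    winner = torch.argmin(losses)          # winner-takes-all selection
    return losses[winner], masked[winner]  # train on the winning hypothesis only
```

In this sketch, calling `loss.backward()` on the returned winning loss updates the decoder and code vector only through the best-performing mask configuration, which is one plausible reading of how a winner-takes-all regime could learn useful mask configurations.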