Music generation schemes based on language modelling rely on a vocabulary of audio tokens, generally provided as codes in a discrete latent space learnt by an auto-encoder. Multi-stage quantizers are often employed to produce these tokens; the decoding strategy used for token prediction must therefore be adapted to account for multiple codebooks: either it models the joint distribution over all codebooks, or it fits the product of the per-codebook marginal distributions. Modelling the joint distribution requires a costly increase in the number of auto-regressive steps, while fitting the product of the marginals yields an inexact model unless the codebooks are mutually independent. In this work, we introduce an independence-promoting loss to regularize the auto-encoder used as the tokenizer in language models for music generation. The proposed loss is a proxy for mutual information based on the maximum mean discrepancy principle, applied in reproducing kernel Hilbert spaces. Our criterion is simple to implement and train, and it generalizes to other multi-stream codecs. We show that it reduces the statistical dependence between codebooks during auto-encoding. This leads to higher-quality generated music when modelling the product of the marginal distributions, while generating audio much faster than a joint-distribution model.
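The idea of an MMD-based independence proxy can be sketched as follows; this is an illustrative assumption, not the paper's implementation, and the function names, the Gaussian kernel choice, and the shuffling trick are all hypothetical. Paired embeddings from two codebooks are treated as samples from the joint distribution, shuffling one stream breaks the pairing to simulate the product of marginals, and the empirical MMD between the two sample sets serves as the penalty (it vanishes when the codebooks are independent).

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased empirical estimate of squared maximum mean discrepancy;
    # equals the squared RKHS distance between mean embeddings, so >= 0.
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

def independence_penalty(z1, z2, sigma=1.0):
    # z1, z2: (batch, dim) embeddings from two quantizer codebooks.
    # Samples from the joint: keep the pairing within the batch.
    joint = np.concatenate([z1, z2], axis=1)
    # Samples from the product of marginals: shuffle z2 across the batch.
    perm = np.random.permutation(len(z2))
    marginals = np.concatenate([z1, z2[perm]], axis=1)
    return mmd2(joint, marginals, sigma)
```

In a training loop this scalar would be added, with some weight, to the codec's reconstruction loss; a large value indicates that the two codebooks carry redundant, statistically dependent information.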