Advances in hardware and language model architecture have spurred a revolution in natural language generation. However, autoregressive models compute probability distributions over next-token choices, and sampling from these distributions, known as decoding, has received significantly less attention than other design choices. Existing decoding strategies are largely based on heuristics, resulting in methods that are hard to apply or improve in a principled manner. We develop the theory of decoding strategies for language models by expressing popular decoding algorithms as equilibrium states in the language of ergodic theory and stating the functions they optimize. Using this, we analyze the effect of the local normalization step of top-k, nucleus, and temperature sampling, used to make probabilities sum to one. We argue that local normalization distortion is a fundamental defect of decoding strategies and quantify the size of this distortion and its effect on mathematical proxies for the quality and diversity of generated text. Contrary to the prevailing explanation, we argue that the major cause of the under-performance of top-k sampling relative to nucleus sampling is local normalization distortion. This yields conclusions for the future design of decoding algorithms and the detection of machine-generated text.
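The local normalization step at issue can be made concrete with a minimal sketch of top-k and nucleus (top-p) truncation. This is an illustrative toy implementation, not the paper's code: the function names and the plain-dictionary representation of the next-token distribution are assumptions for exposition. The division by the retained mass is the locally varying renormalization factor whose context-dependence the abstract identifies as the source of distortion.

```python
def top_k_renormalize(probs, k):
    """Keep the k most probable tokens, then locally renormalize.

    probs: dict mapping token -> probability (sums to 1).
    The division by `mass` is the local normalization step; because
    the retained mass differs at every generation step, the induced
    distortion varies with context.
    """
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    mass = sum(p for _, p in top)  # retained probability mass (< 1 in general)
    return {tok: p / mass for tok, p in top}


def nucleus_renormalize(probs, p_threshold):
    """Keep the smallest high-probability prefix whose mass reaches
    p_threshold (nucleus / top-p truncation), then renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        mass += p
        if mass >= p_threshold:
            break
    return {tok: p / mass for tok, p in kept}


# Example: a toy next-token distribution over three tokens.
probs = {"a": 0.5, "b": 0.3, "c": 0.2}
print(top_k_renormalize(probs, k=2))        # {'a': 0.625, 'b': 0.375}
print(nucleus_renormalize(probs, 0.7))      # {'a': 0.625, 'b': 0.375}
```

Note that the two strategies can retain the same token set yet differ across contexts: top-k always keeps exactly k tokens, while the nucleus set grows or shrinks with the entropy of the distribution, so the renormalization factors they introduce behave differently over a generated sequence.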