During language model decoding, higher sampling temperatures are known to produce more creative responses, while lower temperatures yield more factually accurate ones. However, such models are commonly applied to general instruction following, which involves both creative and fact-seeking tasks, using a single fixed temperature across all examples and tokens. In this work, we introduce Adaptive Decoding, a layer added to the model that selects the sampling temperature dynamically at inference time, at either the token or example level, in order to optimize performance. To learn its parameters, we introduce Latent Preference Optimization (LPO), a general approach for training discrete latent variables such as choices of temperature. Our method outperforms all fixed decoding temperatures across a range of tasks that require different temperatures, including UltraFeedback, Creative Story Writing, and GSM8K.
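The abstract describes the mechanism only at a high level. Below is a minimal sketch of what such a temperature-selection layer might look like, assuming a small discrete set of candidate temperatures and a linear head over the model's final hidden state; the class name, candidate temperatures, and interface are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AdaptiveDecodingLayer(nn.Module):
    """Hypothetical sketch: choose a per-token sampling temperature from a
    discrete candidate set, conditioned on the LM's final hidden state."""

    def __init__(self, hidden_dim: int, temperatures=(0.1, 0.5, 1.0)):
        super().__init__()
        # Candidate temperatures are illustrative, not the paper's exact set.
        self.register_buffer("temps", torch.tensor(temperatures))
        self.head = nn.Linear(hidden_dim, len(temperatures))

    def forward(self, hidden: torch.Tensor, lm_logits: torch.Tensor):
        # hidden:    (batch, hidden_dim) final-layer state at current position
        # lm_logits: (batch, vocab) next-token logits from the language model
        temp_logits = self.head(hidden)                  # (batch, n_temps)
        idx = torch.distributions.Categorical(logits=temp_logits).sample()
        tau = self.temps[idx].unsqueeze(-1)              # (batch, 1)
        probs = torch.softmax(lm_logits / tau, dim=-1)   # temperature-scaled
        next_token = torch.multinomial(probs, num_samples=1)
        return next_token, idx  # idx is the discrete latent LPO would train
```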
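The abstract does not spell out LPO's objective. As a rough illustration only, one could imagine a DPO-style preference loss applied to the latent temperature choices made while generating a preferred versus a rejected response; the function below is an assumed form of such an objective, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def lpo_loss_sketch(temp_logits_chosen, idx_chosen,
                    temp_logits_rejected, idx_rejected, beta=0.1):
    """Assumed DPO-style objective over latent temperature choices: raise the
    log-probability of the temperatures used in the preferred generation
    relative to those used in the rejected one.

    Shapes (assumed): temp_logits_* are (batch, seq, n_temps);
    idx_* are (batch, seq) long tensors of sampled temperature indices.
    """
    dist_c = torch.distributions.Categorical(logits=temp_logits_chosen)
    dist_r = torch.distributions.Categorical(logits=temp_logits_rejected)
    # Sum per-token log-probs of the sampled temperature indices.
    logp_c = dist_c.log_prob(idx_chosen).sum(-1)    # (batch,)
    logp_r = dist_r.log_prob(idx_rejected).sum(-1)  # (batch,)
    return -F.logsigmoid(beta * (logp_c - logp_r)).mean()
```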