Self-consistency-based approaches, which involve repeatedly sampling multiple outputs and selecting the most consistent one as the final response, prove to be remarkably effective in improving the factual accuracy of large language models. Nonetheless, existing methods usually impose strict constraints on the task format, largely limiting their applicability. In this paper, we present Integrative Decoding (ID) to unlock the potential of self-consistency in open-ended generation tasks. ID operates by constructing a set of inputs, each prepended with a previously sampled response, and then processing them concurrently, with the next token selected at each decoding step by aggregating all of their corresponding predictions. In essence, this simple approach implicitly incorporates self-consistency into the decoding objective. Extensive evaluation shows that ID consistently enhances factuality across a wide range of language models, with substantial improvements on the TruthfulQA (+11.2%), Biographies (+15.4%) and LongFact (+8.5%) benchmarks. The performance gains amplify progressively as the number of sampled responses increases, indicating the potential of ID to scale up with repeated sampling.
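To make the decoding loop described above concrete, the following is a minimal sketch in Python using Hugging Face `transformers`. The function name `integrative_decode`, the prompt template that prepends each sampled response, and the choice of averaging next-token log-probabilities as the aggregation rule are illustrative assumptions, not the authors' reference implementation; for clarity, the sketch also recomputes the full context at every step rather than reusing a KV cache.

```python
# Minimal sketch of integrative decoding (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def integrative_decode(model, tokenizer, prompt, k=4,
                       max_new_tokens=128, temperature=1.0):
    device = next(model.parameters()).device

    # Step 1: draw k candidate responses by ordinary temperature sampling.
    enc = tokenizer(prompt, return_tensors="pt").to(device)
    samples = model.generate(
        **enc, do_sample=True, temperature=temperature,
        max_new_tokens=max_new_tokens, num_return_sequences=k,
        pad_token_id=tokenizer.eos_token_id)
    responses = tokenizer.batch_decode(
        samples[:, enc.input_ids.shape[1]:], skip_special_tokens=True)

    # Step 2: build k inputs, each with one sampled response prepended to
    # the original prompt (the exact template is an assumption).
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "left"  # keeps position -1 a real token per row
    contexts = [f"{r}\n\n{prompt}" for r in responses]
    batch = tokenizer(contexts, return_tensors="pt", padding=True).to(device)
    input_ids, attn = batch.input_ids, batch.attention_mask

    # Step 3: decode greedily, aggregating the k next-token
    # log-probability distributions at every step.
    generated = []
    for _ in range(max_new_tokens):
        logits = model(input_ids=input_ids, attention_mask=attn).logits[:, -1, :]
        avg_logprobs = torch.log_softmax(logits, dim=-1).mean(dim=0)
        next_id = int(avg_logprobs.argmax())
        if next_id == tokenizer.eos_token_id:
            break
        generated.append(next_id)
        # Append the chosen token to all k contexts and continue.
        col = torch.full((k, 1), next_id, dtype=input_ids.dtype, device=device)
        input_ids = torch.cat([input_ids, col], dim=1)
        attn = torch.cat([attn, torch.ones_like(col)], dim=1)

    return tokenizer.decode(generated, skip_special_tokens=True)
```

Note that averaging log-probabilities and taking the argmax is equivalent to maximizing the product of the k per-context token probabilities, which is one natural way to realize "selecting the next token by aggregating all of their corresponding predictions"; other aggregation rules are possible under the same framework.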