Ensembling Large Language Models (LLMs) has gained attention as a promising approach to surpass the performance of individual models by leveraging their complementary strengths. In particular, aggregating models' next-token probability distributions to select the next token has been shown to be effective in various tasks. However, while successful for short-form answers, its application to long-form generation remains underexplored. In this paper, we show that using existing ensemble methods in long-form generation requires a careful choice of ensembling positions, since the standard practice of ensembling at every token often degrades performance. We identify two key factors for determining these positions: tokenization mismatch across models and consensus in their next-token probability distributions. Based on this, we propose SAFE (Stable And Fast LLM Ensembling), a framework that selectively ensembles by jointly considering these factors. To further improve stability, we introduce a probability sharpening strategy that consolidates probabilities spread across multiple sub-word tokens representing the same word into a single representative token. Our experiments on diverse benchmarks, including MATH500 and BBH, demonstrate that SAFE outperforms existing methods in both accuracy and efficiency, with gains achieved even when ensembling fewer than 1% of tokens.
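To make the probability sharpening idea concrete, below is a minimal Python sketch. It assumes each model's next-token distribution is available as a plain dict mapping token strings to probabilities, and it uses a simple illustrative grouping rule (tokens that are prefixes of a given candidate word); the helper name `sharpen` and the grouping rule are assumptions for illustration, not the paper's exact procedure.

```python
def sharpen(dist: dict[str, float], candidate_word: str) -> dict[str, float]:
    """Consolidate probability mass spread over sub-word prefixes of
    `candidate_word` onto a single representative token (the longest prefix)."""
    prefixes = [t for t in dist if candidate_word.startswith(t)]
    if len(prefixes) <= 1:
        return dict(dist)  # nothing to consolidate
    representative = max(prefixes, key=len)
    sharpened = {t: p for t, p in dist.items() if t not in prefixes}
    sharpened[representative] = sum(dist[t] for t in prefixes)
    return sharpened


if __name__ == "__main__":
    # "an", "ans", and "answer" all begin the same word, so their
    # probability mass is merged onto the representative token "answer".
    dist = {"an": 0.2, "ans": 0.15, "answer": 0.35, "the": 0.3}
    print(sharpen(dist, "answer"))
    # {'the': 0.3, 'answer': 0.7}
```

In this sketch, sharpening concentrates the mass that tokenization had split across sub-word variants, so a subsequent ensemble step compares models on a single representative token rather than on fragmented pieces of the same word.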