Practitioners have access to an abundance of language models and prompting strategies for many language modeling tasks; yet prior work shows that modeling performance is highly sensitive to both choices. Classical machine learning ensembling techniques offer a principled approach: aggregate predictions from multiple sources to achieve better performance than any single one. However, applying ensembling to language models during decoding is challenging: naively aggregating next-token probabilities yields samples from a locally normalized, biased approximation of the generally intractable ensemble distribution over strings. In this work, we introduce a unified framework for composing $K$ language models into $f$-ensemble distributions for a wide range of functions $f\colon\mathbb{R}_{\geq 0}^{K}\to\mathbb{R}_{\geq 0}$. To sample from these distributions, we propose a byte-level sequential Monte Carlo (SMC) algorithm that operates in a shared character space, enabling ensembles of models with mismatched vocabularies and asymptotically consistent sampling. We evaluate a family of $f$-ensembles across prompt and model combinations on various structured text generation tasks, highlighting the benefits of alternative aggregation strategies over traditional probability averaging, and showing that better posterior approximations can yield better ensemble performance.
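To make the sampling scheme concrete, the following is a minimal, self-contained sketch (not the paper's implementation) of SMC sampling from an $f$-ensemble over a toy three-symbol alphabet. The models `lm_a` and `lm_b`, the product choice of $f$, and all hyperparameters are hypothetical stand-ins; a real byte-level ensemble would additionally handle vocabulary mismatch by mapping each model's token-level distributions into the shared character space. Each particle samples from the locally normalized proposal $q_t \propto f(p_1, \dots, p_K)$ and accumulates the local normalizer $Z_t$ into its importance weight, which is precisely the correction for the local-normalization bias described above.

```python
import numpy as np

ALPHABET = np.array(list("ab$"))  # '$' plays the role of an end-of-string byte
EOS = "$"

# Two toy "language models": each maps a prefix to a next-symbol distribution
# over ALPHABET. Hypothetical stand-ins for real byte-level LM interfaces.
def lm_a(prefix):
    return np.array([0.6, 0.2, 0.2])          # prefers 'a'

def lm_b(prefix):
    if len(prefix) < 2:
        return np.array([0.2, 0.6, 0.2])      # prefers 'b' early,
    return np.array([0.4, 0.1, 0.5])          # then shifts mass to 'a' and EOS

def f_product(ps):
    """One choice of f: the elementwise product of model probabilities."""
    return np.prod(ps, axis=0)

def smc_f_ensemble(models, f, n_particles=200, max_len=10, seed=0):
    rng = np.random.default_rng(seed)
    particles = [""] * n_particles
    logw = np.zeros(n_particles)              # log importance weights
    done = np.zeros(n_particles, dtype=bool)
    for _ in range(max_len):
        for i in range(n_particles):
            if done[i]:
                continue
            ps = np.stack([m(particles[i]) for m in models])
            g = f(ps)                         # unnormalized f-ensemble scores
            z = g.sum()                       # local normalizer Z_t
            sym = rng.choice(ALPHABET, p=g / z)
            particles[i] += sym
            logw[i] += np.log(z)              # corrects local-normalization bias
            done[i] = sym == EOS
        # Resample when the effective sample size collapses below N/2.
        w = np.exp(logw - logw.max())
        if w.sum() ** 2 / (w ** 2).sum() < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
            particles = [particles[i] for i in idx]
            done, logw = done[idx], np.zeros(n_particles)
        if done.all():
            break
    w = np.exp(logw - logw.max())
    return particles, w / w.sum()             # weighted samples from the f-ensemble

samples, weights = smc_f_ensemble([lm_a, lm_b], f_product)
```

Swapping `f_product` for, e.g., an arithmetic mean of the model probabilities recovers the traditional probability-averaging baseline mentioned above, illustrating how different choices of $f$ yield different members of the ensemble family.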