Ensembles of generative large language models (LLMs) can integrate the strengths of different LLMs to compensate for the limitations of individual models. However, recent work has focused on training an additional fusion model to combine complete responses from multiple LLMs, which fails to tap into their collaborative potential to generate higher-quality responses. Moreover, because the additional fusion model is trained on a specialized dataset, these methods struggle to generalize to open-domain queries from online users. In this paper, we propose SpecFuse, a novel ensemble framework that produces the fused result by iteratively generating the next segment through collaboration among LLMs. This is achieved through cyclic execution of its inference and verification components. In each round, the inference component invokes each base LLM to generate candidate segments in parallel, and the verification component calls these LLMs again to predict the ranking of the candidates. The top-ranked segment is then broadcast to all LLMs, encouraging them to generate higher-quality segments in the next round. This design also makes the base LLMs plug-and-play, requiring no training or adaptation and thereby avoiding generalization limitations. Furthermore, to conserve computational resources, we propose a model exit mechanism that, for each query, dynamically excludes models that performed poorly in earlier rounds. In this way, it effectively reduces the number of model calls while maintaining overall performance.
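The round-based control flow described above can be sketched as a small toy program. This is only an illustration of the loop structure, not the actual SpecFuse implementation: the `models` here are plain functions standing in for base LLMs, the `score` function stands in for the LLM-based verification ranking, and the "drop models with no round wins after a patience window" exit rule is a hypothetical simplification of the paper's model exit mechanism.

```python
def specfuse_sketch(models, score, rounds=3, patience=2):
    """Toy SpecFuse-style loop: per-round candidate generation,
    ranking, broadcast of the winner, and a simple model-exit rule."""
    prefix = []                      # shared response built segment by segment
    wins = {name: 0 for name in models}
    active = dict(models)            # models still participating

    for r in range(rounds):
        # Inference component: each active model proposes a candidate
        # next segment, conditioned on the shared prefix (in parallel
        # in the real system; sequentially in this sketch).
        candidates = {name: gen(prefix) for name, gen in active.items()}

        # Verification component: rank candidates and pick the best
        # (a scoring function stands in for the LLM-based ranking).
        best = max(candidates, key=lambda n: score(candidates[n]))
        wins[best] += 1

        # Broadcast: the top-ranked segment extends the shared prefix
        # that every model sees in the next round.
        prefix.append(candidates[best])

        # Model exit (toy rule): after `patience` rounds, drop models
        # that have never produced a top-ranked segment for this query.
        if r + 1 >= patience:
            survivors = {n: g for n, g in active.items() if wins[n] > 0}
            if survivors:
                active = survivors

    return " ".join(prefix), active


# Two toy "models": one consistently produces richer segments.
models = {
    "strong": lambda prefix: f"seg{len(prefix)}-detailed",
    "weak":   lambda prefix: f"s{len(prefix)}",
}
output, remaining = specfuse_sketch(models, score=len)
print(output)           # seg0-detailed seg1-detailed seg2-detailed
print(list(remaining))  # ['strong'] -- the weak model exited after round 2
```

Note that the exit rule only reduces per-round calls for the remainder of the current query; each new query starts again with the full model pool, mirroring the per-query dynamic exclusion described in the abstract.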