Large Language Models (LLMs) have advanced rapidly, with Gemini-3-Pro setting a new performance milestone. In this work, we explore collective intelligence as an alternative to monolithic scaling and demonstrate that collaboration among open-source LLMs can surpass Gemini-3-Pro. We first revisit LLM routing and aggregation at scale and identify three key bottlenecks: (1) current training-free routers are limited by a query-based paradigm that relies solely on textual similarity; (2) recent aggregation methods remain largely static, failing to select appropriate aggregators for different tasks; (3) the complementarity of routing and aggregation remains underexploited. To address these problems, we introduce JiSi, a novel framework designed to unlock the full potential of LLM collaboration through three innovations: (1) Query-Response Mixed Routing, which captures both semantic information and problem difficulty; (2) Support-Set-based Aggregator Selection, which jointly evaluates the aggregation and domain capabilities of candidate aggregators; (3) an Adaptive Routing-Aggregation Switch, which dynamically exploits the respective advantages of routing and aggregation. Comprehensive experiments on nine benchmarks demonstrate that JiSi surpasses Gemini-3-Pro at only 47% of the cost by orchestrating ten open-source LLMs, while also outperforming mainstream baselines. These results suggest that collective intelligence offers a promising path toward Artificial General Intelligence (AGI).