Large language model (LLM) routing aims to exploit the specialized strengths of different LLMs across diverse tasks. However, existing approaches typically focus on selecting LLM architectures while overlooking parameter settings, which are critical for task performance. In this paper, we introduce HAPS, a hierarchical LLM routing framework that jointly searches over model architectures and parameters. Specifically, a high-level router selects among candidate LLM architectures, and a low-level router then searches for the optimal parameters of the selected architecture. A parameter generation network shares parameters between the two routers so that they mutually enhance each other's capabilities. During training, we introduce a reward-augmented objective to optimize the framework effectively. Experiments on two commonly used benchmarks show that HAPS consistently outperforms strong routing baselines. Our code is available at https://github.com/zihangtian/HAPS.
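To make the two-level routing idea concrete, here is a minimal PyTorch sketch. All class names, dimensions, the choice of decoding parameters, and the REINFORCE-style loss are illustrative assumptions rather than the paper's actual implementation; in particular, the parameter generation network that HAPS uses to share weights between the two routers is omitted for brevity.

```python
# Minimal sketch of hierarchical routing: a high-level router picks an
# LLM architecture, a low-level router predicts decoding parameters for it.
# All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn

class HighLevelRouter(nn.Module):
    """Scores candidate LLM architectures given a query embedding."""
    def __init__(self, dim, num_models):
        super().__init__()
        self.scorer = nn.Linear(dim, num_models)

    def forward(self, query_emb):
        # Distribution over candidate LLMs.
        return torch.softmax(self.scorer(query_emb), dim=-1)

class LowLevelRouter(nn.Module):
    """Predicts bounded decoding parameters (e.g., temperature, top-p)
    conditioned on the query and the selected architecture."""
    def __init__(self, dim, num_models, num_params=2):
        super().__init__()
        self.model_emb = nn.Embedding(num_models, dim)
        self.head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, num_params), nn.Sigmoid(),  # keep params in (0, 1)
        )

    def forward(self, query_emb, model_idx):
        m = self.model_emb(model_idx)
        return self.head(torch.cat([query_emb, m], dim=-1))

# Usage sketch with placeholder inputs.
router_hi = HighLevelRouter(dim=768, num_models=4)
router_lo = LowLevelRouter(dim=768, num_models=4)
q = torch.randn(1, 768)            # placeholder query embedding
probs = router_hi(q)               # distribution over candidate LLMs
idx = probs.argmax(dim=-1)         # selected architecture
params = router_lo(q, idx)         # e.g., temperature and top-p in (0, 1)

# A reward-augmented objective could weight the routing log-probability by
# a task reward (REINFORCE-style; an assumption, not the paper's exact loss).
reward = torch.tensor(1.0)         # placeholder: reward from the routed LLM's output
log_prob = torch.log(probs.gather(-1, idx.unsqueeze(-1)))
loss = -(reward * log_prob).mean()
```

Bounding the low-level outputs with a sigmoid is one simple way to keep predicted decoding parameters such as temperature and top-p in a valid range; the actual parameterization in HAPS may differ.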