In language tasks that require extensive human--model interaction, deploying a single "best" model for every query can be expensive. To reduce inference cost while preserving response quality, a large language model (LLM) router selects the most appropriate model from a pool of candidates for each query. A central challenge in training a high-quality router is the scarcity of reliable supervision. Gold-standard data (e.g., expert-verified labels or rubric-based scores) provide accurate quality evaluations of LLM responses but are costly and difficult to scale. In contrast, preference-based data, collected via crowdsourcing or LLM-as-a-judge systems, are cheaper and more scalable, yet often provide biased assessments of true response quality. We cast the problem of LLM router training with combined gold-standard and preference-based data into a causal inference framework by viewing the response evaluation mechanism as the treatment assignment. This perspective further reveals that the bias in preference-based data corresponds to a well-known causal estimand: the conditional average treatment effect. Building on this perspective, we develop an integrative causal router training framework that corrects preference-data bias, addresses the imbalance between the two data sources, and improves routing robustness and efficiency. Numerical experiments demonstrate that our approach delivers more accurate routing and improves the trade-off between cost and quality.
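To make the causal framing concrete, here is one minimal way to formalize the claim; the notation ($X$, $S$, $T$, $\tau$) is introduced purely for illustration and need not match the paper's. Let $X$ denote the query, $S$ the recorded evaluation score of a candidate model's response, and $T \in \{0, 1\}$ the evaluation mechanism, with $T = 1$ for preference-based and $T = 0$ for gold-standard evaluation. The preference-data bias at query $x$ is then the conditional average treatment effect of the evaluation mechanism on the recorded score,
\[
\tau(x) \;=\; \mathbb{E}\left[\, S \mid X = x,\ T = 1 \,\right] \;-\; \mathbb{E}\left[\, S \mid X = x,\ T = 0 \,\right],
\]
and, under this sketch, an estimate $\hat{\tau}(x)$ learned where the two data sources overlap yields a debiased training target $S - \hat{\tau}(x)$ for preference-labeled examples.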