Large Language Models (LLMs) have seen widespread adoption across scientific and industrial domains due to their versatility and utility for diverse tasks. Nevertheless, deploying and serving these models at scale with optimal throughput and latency remains a significant challenge, primarily because of the high computational and memory demands of LLMs. To address this challenge, we introduce Expert Router, a system designed to orchestrate multiple expert models efficiently, thereby enhancing scalability. Expert Router is a parallel inference system with a central routing gateway that distributes incoming requests using a clustering method. This approach partitions incoming requests among the available LLMs, maximizing overall throughput. Our extensive evaluations encompassed up to 1,000 concurrent users, providing comprehensive insights into the system's behavior from both user and infrastructure perspectives. The results demonstrate Expert Router's effectiveness in handling high-load scenarios and achieving higher throughput rates, particularly under heavy concurrency.
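To make the routing idea concrete, the following is a minimal sketch of clustering-based request dispatch: each expert model is associated with a cluster centroid in an embedding space, and the gateway assigns each incoming request to the expert whose centroid is nearest. The centroids, the toy embedding function, and the expert names below are hypothetical illustrations, not the paper's actual implementation.

```python
import math

# Hypothetical cluster centroids in a toy 2-D embedding space, one per expert.
# In a real system these would come from clustering embeddings of past requests.
CENTROIDS = {
    "expert_code": (0.9, 0.1),
    "expert_science": (0.1, 0.9),
}

def embed(prompt: str) -> tuple:
    """Toy embedding: fractions of digit vs. letter characters in the prompt.
    A real gateway would use a learned sentence embedding instead."""
    digits = sum(c.isdigit() for c in prompt)
    letters = sum(c.isalpha() for c in prompt)
    total = max(digits + letters, 1)
    return (digits / total, letters / total)

def route(prompt: str) -> str:
    """Assign the request to the expert whose centroid is nearest (Euclidean)."""
    v = embed(prompt)
    return min(CENTROIDS, key=lambda name: math.dist(v, CENTROIDS[name]))

# A digit-heavy prompt lands near the first centroid, a text prompt near the second.
print(route("12345"))        # -> expert_code
print(route("hello world"))  # -> expert_science
```

Because routing is a single nearest-centroid lookup, the gateway itself adds negligible latency, and the expert pool behind it can be scaled independently.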