Reasoning large language models (RLLMs) have proven competitive with general-purpose LLMs on complex reasoning tasks such as mathematics and coding. However, the serving performance and behavior of RLLMs remain largely unexplored, which may hinder their deployment and utilization in real-world scenarios. To close this gap, we conduct a comprehensive study of RLLM serving in this paper. We first perform a pilot study comparing the serving performance of RLLMs and traditional LLMs, and reveal several distinct differences in serving behavior: (1) significant memory usage and fluctuation; (2) straggler requests; (3) adaptive running time; (4) domain preference. We then investigate whether existing inference optimization techniques remain effective for RLLMs. Our main takeaways are that model quantization and speculative decoding can improve serving efficiency with little compromise to RLLM accuracy, whereas prefix caching and KV cache quantization may even degrade accuracy or serving performance for small RLLMs. Lastly, we evaluate under real-world workloads modeled by a Gamma distribution to verify our findings; the empirical results across different datasets align with our main conclusions on RLLM serving. We hope our work provides the research community and industry with insights to advance RLLM inference serving.
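As context for the workload setup mentioned above, request traces with Gamma-distributed inter-arrival times can be sketched as follows. This is a minimal illustration only: the shape and scale parameters, function name, and seed are assumptions for demonstration, not values taken from the paper's evaluation.

```python
import numpy as np

# Illustrative sketch: model serving-request arrivals with Gamma-distributed
# inter-arrival times. Shape/scale values here are hypothetical, not from the paper.
rng = np.random.default_rng(seed=0)

def gamma_arrival_times(n_requests: int, shape: float = 2.0, scale: float = 0.5) -> np.ndarray:
    """Return cumulative arrival timestamps (seconds) for n_requests requests."""
    inter_arrivals = rng.gamma(shape, scale, size=n_requests)  # one gap per request
    return np.cumsum(inter_arrivals)  # gaps -> absolute timestamps

timestamps = gamma_arrival_times(100)
```

The resulting `timestamps` array can then be replayed against a serving system to issue requests at those instants; varying `shape` controls the burstiness of the trace relative to a Poisson (exponential inter-arrival) baseline.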