As Generative AI (GenAI), particularly inference, rapidly emerges as a dominant workload category, the Kubernetes ecosystem is proactively evolving to natively support its unique demands. This industry paper demonstrates how emerging Kubernetes-native projects can be combined to deliver the benefits of container orchestration, such as scalability and resource efficiency, to complex AI workflows. We implement and evaluate an illustrative, multi-stage use case consisting of automatic speech recognition and summarization. First, we address batch inference by using Kueue to manage jobs that transcribe audio files with Whisper models, and the Dynamic Accelerator Slicer (DAS) to increase parallel job execution. Second, we address a discrete online inference scenario by feeding the transcripts to a Large Language Model for summarization, hosted with llm-d, a novel serving solution that leverages recent developments around the Kubernetes Gateway API Inference Extension (GAIE) for optimized routing of inference requests. Our findings illustrate that these complementary components (Kueue, DAS, and GAIE) form a cohesive, high-performance platform, proving Kubernetes' capability to serve as a unified foundation for demanding GenAI workloads: Kueue reduced total makespan by up to 15%; DAS shortened mean job completion time by 36%; and GAIE improved Time to First Token by 82%.