As Generative AI (GenAI), particularly inference, rapidly emerges as a dominant workload category, the Kubernetes ecosystem is proactively evolving to natively support its unique demands. This industry paper demonstrates how emerging Kubernetes-native projects can be combined to deliver the benefits of container orchestration, such as scalability and resource efficiency, to complex AI workflows. We implement and evaluate an illustrative, multi-stage use case consisting of automatic speech recognition and summarization. First, we address batch inference by using Kueue to manage jobs that transcribe audio files with Whisper models and Dynamic Accelerator Slicer (DAS) to increase parallel job execution. Second, we address a discrete online inference scenario by feeding the transcripts to a Large Language Model for summarization hosted using llm-d, a novel solution utilizing the recent developments around the Kubernetes Gateway API Inference Extension (GAIE) for optimized routing of inference requests. Our findings illustrate that these complementary components (Kueue, DAS, and GAIE) form a cohesive, high-performance platform, proving Kubernetes' capability to serve as a unified foundation for demanding GenAI workloads: Kueue reduced total makespan by up to 15%; DAS shortened mean job completion time by 36%; and GAIE improved Time to First Token by 82%.