Multimodal large language model (MLLM) inference splits into two phases with opposing hardware demands: vision encoding is compute-bound, while language generation is memory-bandwidth-bound. We show that under standard transformer KV caching, the modality boundary (between the vision encoder and the language model) minimizes cross-device transfer among all partition points that preserve standard stage-based execution. Partitioning at this boundary reduces transfer volume from $O(L \cdot s_{\mathrm{ctx}} \cdot d)$ bytes (GB-scale KV caches under stage-level disaggregation) to $O(N_v \cdot d)$ bytes (MB-scale embeddings), an $O(L)$ reduction, where $L$ is the transformer depth, $s_{\mathrm{ctx}}$ the context length, $N_v$ the number of vision tokens, and $d$ the hidden dimension. The result holds across attention mechanisms (MHA/GQA), dynamic vision resolutions, and model scales, and the advantage grows as models deepen. A direct implication is that existing stage-level disaggregation systems are constrained to high-bandwidth interconnects (e.g., NVLink), whereas modality-level disaggregation enables cross-tier heterogeneous serving over commodity PCIe. A closed-form cost model shows that heterogeneous deployment is cost-optimal under phase-separable workloads (31.4% savings predicted; 40.6% observed). We build HeteroServe, a phase-aware runtime with modality-level partitioning and cross-tier scheduling, and evaluate it on LLaVA-1.5-7B and Qwen2.5-VL against vLLM v0.3.0. On identical 4×A100 hardware, engine optimizations raise throughput by up to 54%. Under a fixed budget, a heterogeneous cluster (\$38k) improves Tokens/\$ by 37% over a homogeneous baseline (\$64k) without degrading latency.
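To make the asymptotics concrete, the following back-of-envelope sketch compares the two transfer sizes under illustrative, LLaVA-1.5-7B-like assumptions ($L = 32$, $d = 4096$, MHA, fp16, $N_v = 576$ vision tokens, a 1,024-token context); these specific parameter values are our assumptions for illustration, not measurements from the evaluation.

```python
# Back-of-envelope comparison of the two cross-device transfer sizes.
# All parameters are illustrative (roughly LLaVA-1.5-7B with MHA, fp16),
# not measurements from the paper.

BYTES_FP16 = 2

L = 32         # transformer depth (number of decoder layers)
d = 4096       # hidden dimension (under MHA, KV width equals d)
s_ctx = 1024   # context length in tokens (vision tokens + text prompt)
N_v = 576      # vision tokens produced by the encoder

# Stage-level disaggregation ships the KV cache: K and V, per layer, per token.
kv_cache_bytes = 2 * L * s_ctx * d * BYTES_FP16   # O(L * s_ctx * d)

# Modality-level disaggregation ships only the vision embeddings.
embedding_bytes = N_v * d * BYTES_FP16            # O(N_v * d)

print(f"KV cache transfer:  {kv_cache_bytes / 2**20:7.1f} MiB")   # 512.0 MiB
print(f"Embedding transfer: {embedding_bytes / 2**20:7.1f} MiB")  #   4.5 MiB
print(f"Reduction factor:   {kv_cache_bytes / embedding_bytes:.0f}x")  # ~114x = 2L * s_ctx / N_v
```

Per request this is roughly 512 MiB of KV cache versus 4.5 MiB of embeddings, an approximately $2L \cdot s_{\mathrm{ctx}}/N_v$-fold gap; with batching the former reaches the GB scale cited above while the latter stays at the MB scale.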