IoT applications increasingly rely on on-device AI accelerators for high performance, especially in limited-connectivity and safety-critical scenarios. However, the limited on-chip memory of these accelerators forces inference runtimes to swap model segments between host and accelerator memory, substantially inflating latency. While collaborative processing, which partitions inference between CPU and accelerator resources, can reduce accelerator memory pressure and latency, naive partitioning may worsen end-to-end latency by either shifting excessive computation to the CPU or failing to sufficiently curb swapping, a problem further amplified in multi-tenant and dynamic environments. To address these issues, we present SwapLess, a system for adaptive, multi-tenant TPU-CPU collaborative inference on memory-constrained Edge TPUs. SwapLess uses an analytic queueing model that captures partition-dependent CPU/TPU service times as well as inter- and intra-model swapping overheads across different workload mixes and request rates. Using this model, SwapLess continuously adjusts both the partition point and the CPU core allocation online to minimize end-to-end response time with low decision overhead. An implementation on Edge TPU-equipped platforms demonstrates that SwapLess reduces mean latency by up to 63.8% for single-tenant workloads and up to 77.4% for multi-tenant workloads relative to the default Edge TPU compiler.