Deploying Vision Transformers on edge devices is challenging due to their high computational complexity, while fully offloading inference to cloud resources incurs significant latency overhead. We propose a novel collaborative inference framework that orchestrates a lightweight generalist ViT on an edge device and multiple medium-sized expert ViTs on a near-edge accelerator. A novel routing mechanism uses the edge model's Top-$\mathit{k}$ predictions to dynamically select the most relevant expert for low-confidence samples. We further design a progressive specialist training strategy to enhance expert accuracy on dataset subsets. Extensive experiments on the CIFAR-100 dataset using a real-world edge and near-edge testbed demonstrate the superiority of our framework. Specifically, the proposed training strategy improves expert specialization accuracy by 4.12% on target subsets and overall accuracy by 2.76% over static experts. Moreover, our method reduces latency by up to 45% compared to edge-only execution, and energy consumption by up to 46% compared to near-edge-only offloading.
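The confidence-gated Top-$\mathit{k}$ routing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `route`, the threshold `tau`, and the expert-selection rule (maximizing the edge model's Top-$\mathit{k}$ probability mass covered by an expert's class subset) are assumptions chosen for clarity.

```python
import numpy as np

def route(logits_edge, expert_class_sets, tau=0.8, k=5):
    """Confidence-gated Top-k routing (illustrative sketch, not the paper's code).

    logits_edge: 1-D array of edge-model logits over all classes.
    expert_class_sets: list of sets; expert i specializes in class set i.
    tau: confidence threshold (assumed hyperparameter).
    k: number of top edge predictions consulted.
    Returns ('edge', predicted_label) or ('expert', expert_index).
    """
    # softmax over edge logits (numerically stable)
    probs = np.exp(logits_edge - logits_edge.max())
    probs /= probs.sum()
    top1 = int(probs.argmax())
    if probs[top1] >= tau:
        # edge model is confident: answer locally, no offload
        return ('edge', top1)
    # low confidence: consult the Top-k predictions and pick the expert
    # whose specialty subset covers the most probability mass among them
    topk = np.argsort(probs)[::-1][:k]
    best = max(
        range(len(expert_class_sets)),
        key=lambda i: probs[[c for c in topk if c in expert_class_sets[i]]].sum(),
    )
    return ('expert', best)
```

With two hypothetical experts covering classes {0, 1} and {2, 3}, a sharply peaked logit vector stays on the edge, while a flat one is routed to the expert whose subset best matches the Top-$\mathit{k}$ candidates.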