The growing demand for real-time DNN applications on edge devices necessitates faster inference of increasingly complex models. Although many devices include specialized accelerators (e.g., mobile GPUs), dynamic control-flow operators and unsupported kernels often fall back to CPU execution. Existing frameworks handle these fallbacks poorly, leaving CPU cores idle and causing high latency and memory spikes. We introduce Parallax, a framework that accelerates mobile DNN inference without model refactoring or custom operator implementations. Parallax first partitions the computation DAG to expose parallelism, then employs branch-aware memory management with dedicated arenas and buffer reuse to reduce the runtime footprint. An adaptive scheduler executes branches according to device memory constraints, while fine-grained subgraph control enables heterogeneous inference of dynamic models. Evaluated on five representative DNNs across three mobile devices, Parallax reduces latency by up to 46%, keeps memory overhead controlled (26.5% on average), and saves up to 30% energy compared with state-of-the-art frameworks, meeting the responsiveness demands of real-time mobile inference.
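To make the scheduling idea concrete, the following is a minimal sketch, not Parallax's actual implementation: it assumes the DAG has already been partitioned into independent branches, each with an estimated peak arena footprint, and admits branches to run concurrently only while their combined footprint stays under a device memory budget. The `Branch` and `AdaptiveScheduler` names, the branch sizes, and the budget are all illustrative.

```cpp
// Hypothetical adaptive branch scheduler: runs partitioned subgraph branches
// concurrently while their combined arena footprint fits a memory budget,
// serializing the remainder. Names and sizes are illustrative only.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

struct Branch {
    const char* name;
    size_t arena_bytes;     // estimated peak footprint of this branch's arena
    void run() const {      // stand-in for executing the subgraph
        std::printf("running %s (%zu KiB)\n", name, arena_bytes / 1024);
    }
};

class AdaptiveScheduler {
  public:
    explicit AdaptiveScheduler(size_t budget) : budget_(budget) {}

    void execute(const std::vector<Branch>& branches) {
        std::vector<std::thread> workers;
        for (const Branch& b : branches) {
            workers.emplace_back([this, &b] {
                acquire(b.arena_bytes);  // block until the branch fits the budget
                b.run();
                release(b.arena_bytes);
            });
        }
        for (std::thread& t : workers) t.join();
    }

  private:
    void acquire(size_t bytes) {
        std::unique_lock<std::mutex> lock(mu_);
        cv_.wait(lock, [&] { return in_use_ + bytes <= budget_; });
        in_use_ += bytes;
    }
    void release(size_t bytes) {
        { std::lock_guard<std::mutex> lock(mu_); in_use_ -= bytes; }
        cv_.notify_all();
    }

    size_t budget_;
    size_t in_use_ = 0;
    std::mutex mu_;
    std::condition_variable cv_;
};

int main() {
    // Hypothetical branches exposed by partitioning the computation DAG.
    std::vector<Branch> branches = {
        {"branch_a", 8 << 20}, {"branch_b", 6 << 20}, {"branch_c", 12 << 20}};
    AdaptiveScheduler sched(16 << 20);  // 16 MiB device budget (illustrative)
    sched.execute(branches);
}
```

Under this sketch's assumptions, `branch_a` and `branch_b` (14 MiB combined) may run in parallel, while `branch_c` waits until enough of the budget is released; a tighter budget degrades gracefully toward sequential execution rather than triggering a memory spike.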