Depthwise separable convolutions are a fundamental component of efficient deep neural networks, as they reduce the number of parameters and operations compared to traditional convolutions while maintaining comparable accuracy. However, their limited data reuse opportunities make them notoriously difficult to deploy efficiently. In this work, we perform an extensive exploration of alternatives for fusing the depthwise and pointwise kernels that constitute the separable convolutional block. Our approach aims to minimize time-consuming memory transfers by combining different data layouts. When targeting a commercial ultra-low-power device with a three-level memory hierarchy, the GreenWaves GAP8 SoC, we reduce the latency of end-to-end network execution by up to 11.40%. Furthermore, our kernels reduce activation data movement between the L2 and L1 memories by up to 52.97%.
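The parameter savings mentioned above follow from simple arithmetic: a standard K×K convolution over C_in input and C_out output channels needs K·K·C_in·C_out weights, while the separable block splits this into a K×K depthwise stage (K·K·C_in weights) followed by a 1×1 pointwise stage (C_in·C_out weights). A minimal sketch of that comparison, with illustrative layer sizes chosen here (not taken from the paper):

```python
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard KxK convolution (biases omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a depthwise (KxK per channel) + pointwise (1x1) block."""
    depthwise = k * k * c_in        # one KxK filter per input channel
    pointwise = c_in * c_out        # 1x1 convolution mixing channels
    return depthwise + pointwise

# Illustrative example: a 3x3 layer with 64 input and 128 output channels.
std = standard_conv_params(3, 64, 128)   # 73728 weights
sep = separable_conv_params(3, 64, 128)  # 8768 weights
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
```

The same factor applies to multiply-accumulate operations per output pixel, which is why the separable block dominates in efficiency-oriented architectures despite its poorer data reuse.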