The rapid growth of microcontroller-based IoT devices has opened up numerous applications, from smart manufacturing to personalized healthcare. Despite the widespread adoption of energy-efficient microcontroller units (MCUs) in the Tiny Machine Learning (TinyML) domain, they still face significant limitations in terms of performance and memory (RAM, Flash). In this work, we combine approximate computing and software kernel design to accelerate the inference of approximate CNN models on MCUs. Our kernel-based approximation framework first unpacks the operands of each convolution layer and then performs an offline calculation to determine the significance of each operand. Subsequently, through a design space exploration, it applies a computation-skipping approximation strategy based on the calculated significance. Our evaluation on an STM32-Nucleo board and two popular CNNs trained on the CIFAR-10 dataset shows that, compared to state-of-the-art exact inference, our Pareto-optimal solutions achieve an average latency reduction of 21% with no degradation in Top-1 classification accuracy, while for lower accuracy requirements the reduction becomes even more pronounced.
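The significance-driven computation skipping summarized above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a simple magnitude-based significance criterion and a 1-D convolution, whereas the framework computes operand significance offline per convolution layer and tunes the skip ratio via design space exploration. The function name `approx_conv1d` and the `skip_ratio` parameter are hypothetical.

```python
import numpy as np

def significance(w):
    # Hypothetical offline significance step: rank each weight (operand)
    # by magnitude, treating larger values as more significant.
    return np.abs(w)

def approx_conv1d(x, w, skip_ratio):
    """1-D convolution that skips the least-significant taps.

    skip_ratio is the fraction of multiply-accumulates dropped per
    output sample; skip_ratio = 0.0 reproduces the exact convolution.
    """
    k = int(round(len(w) * (1.0 - skip_ratio)))   # taps to keep
    keep = np.argsort(significance(w))[::-1][:k]  # most significant taps
    n = len(x) - len(w) + 1
    y = np.zeros(n)
    for i in range(n):
        for j in keep:            # skipped taps cost no cycles on the MCU
            y[i] += x[i + j] * w[j]
    return y
```

With `skip_ratio = 0.0` the result matches the exact convolution; raising the ratio trades accuracy for fewer multiply-accumulate operations, which is the latency/accuracy Pareto trade-off the abstract describes.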