In this paper, we propose a framework that enhances the robustness of neural models by mitigating the effects of process-induced and aging-related variations in analog computing components on the accuracy of analog neural networks. We model these variations as noise affecting the precision of the activations and introduce a denoising block inserted between selected layers of a pre-trained model. We demonstrate that training the denoising block significantly increases the model's robustness against various noise levels. To minimize the overhead of adding these blocks, we present an exploration algorithm that identifies optimal insertion points for the denoising blocks. Additionally, we propose a specialized architecture to efficiently execute the denoising blocks, which can be integrated into mixed-signal accelerators. We evaluate the effectiveness of our approach using Deep Neural Network (DNN) models trained on the ImageNet and CIFAR-10 datasets. The results show that, on average, at the cost of a 2.03% parameter-count overhead, the accuracy drop due to variations is reduced from 31.7% to 1.15%.
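The noise model and block insertion described above can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual architecture: the additive-Gaussian noise model, the `DenoisingBlock` name, and its residual-MLP form are hypothetical placeholders chosen only to show where such a block would sit in the activation path.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_activations(a, sigma=0.1):
    # Model process/aging-induced variation as additive Gaussian noise
    # perturbing the activations of an analog layer (an assumption for
    # illustration; the paper's exact noise model may differ).
    return a + rng.normal(0.0, sigma, size=a.shape)

class DenoisingBlock:
    # Hypothetical trainable residual denoiser, y = x + W2 @ relu(W1 @ x),
    # inserted between two layers of a pre-trained model. Weights here are
    # random stand-ins; in the framework they would be trained to cancel
    # the injected noise.
    def __init__(self, dim, hidden=16):
        self.W1 = rng.normal(0.0, 0.01, size=(hidden, dim))
        self.W2 = rng.normal(0.0, 0.01, size=(dim, hidden))

    def __call__(self, x):
        return x + self.W2 @ np.maximum(self.W1 @ x, 0.0)

a_clean = rng.normal(size=8)            # clean activations from some layer
a_noisy = noisy_activations(a_clean)    # variation-corrupted activations
block = DenoisingBlock(dim=8)
a_denoised = block(a_noisy)             # activations after the inserted block
```

In the full framework, one such block would be trained (with the backbone frozen) at each insertion point chosen by the exploration algorithm, trading a small parameter overhead for noise robustness.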