In recent years, the hardware implementation of neural networks leveraging physical coupling and analog neurons has substantially gained in relevance. Such nonlinear and complex physical networks offer significant advantages in speed and energy efficiency, but are potentially more susceptible to internal noise than digital emulations of such networks. In this work, we consider how additive and multiplicative Gaussian white noise at the neuronal level affects the accuracy of the network when it is applied to specific tasks and a softmax function is included in the readout layer. We adapt several noise-reduction techniques to the essential setting of classification tasks, which represent a large fraction of neural-network computing. We find that these adapted concepts are highly effective in mitigating the detrimental impact of noise.
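To make the noise model concrete, the following minimal sketch illustrates one common way additive and multiplicative Gaussian white noise can be imposed on neuron activations before a softmax readout. The noise form `a * (1 + xi_mul) + xi_add`, the noise strengths `sigma_add` and `sigma_mul`, and the use of `tanh` activations are illustrative assumptions, not the specific model of this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_activations(a, sigma_add=0.1, sigma_mul=0.1):
    """Apply one possible additive + multiplicative Gaussian noise model.

    a_noisy = a * (1 + xi_mul) + xi_add, with xi ~ N(0, sigma^2) drawn
    independently per neuron (white noise across neurons and time steps).
    """
    xi_mul = rng.normal(0.0, sigma_mul, size=a.shape)
    xi_add = rng.normal(0.0, sigma_add, size=a.shape)
    return a * (1.0 + xi_mul) + xi_add

def softmax(z):
    # Subtract the maximum for numerical stability before exponentiating.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical analog neuron activations feeding a softmax readout layer.
a = np.tanh(rng.normal(size=8))
p = softmax(noisy_activations(a))
print(p)  # class probabilities; the noise perturbs them from the clean output
```

Because softmax normalizes its input, the readout still yields a valid probability distribution under noise; the noise instead shifts which class receives the largest probability, which is what degrades classification accuracy.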