Bayesian Neural Networks (BayNNs) naturally provide uncertainty estimates with their predictions, making them a suitable choice for safety-critical applications. Additionally, realizing them with memristor-based in-memory computing (IMC) architectures makes them viable for resource-constrained edge applications. Beyond predictive uncertainty, however, inherent robustness to noise in computation is also essential to ensure functional safety. In particular, memristor-based IMC is susceptible to various non-idealities, such as manufacturing and runtime variations, conductance drift, and device failures, which can significantly reduce inference accuracy. In this paper, we propose a method to inherently enhance the robustness and inference accuracy of BayNNs deployed on IMC architectures. To achieve this, we introduce a novel normalization layer combined with stochastic affine transformations. Empirical results on several benchmark datasets show graceful degradation of inference accuracy under non-idealities, with an improvement of up to $58.11\%$.
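To make the idea concrete, the following is a minimal sketch of a normalization layer with stochastic affine parameters. All names, the Gaussian noise model on the scale and shift, and the `sigma` parameter are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def stochastic_affine_norm(x, gamma, beta, sigma=0.1, eps=1e-5, rng=None):
    """Illustrative sketch: normalize activations with batch statistics,
    then apply an affine transform whose scale/shift are perturbed
    stochastically on each forward pass (noise model is an assumption).

    x:     (batch, features) activations
    gamma: (features,) learned scale
    beta:  (features,) learned shift
    sigma: relative noise strength of the stochastic affine parameters
    """
    rng = np.random.default_rng() if rng is None else rng
    # Normalize each feature to zero mean, unit variance.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Sample stochastic affine parameters around the learned values,
    # so repeated forward passes yield a distribution of outputs.
    g = gamma * (1.0 + sigma * rng.standard_normal(gamma.shape))
    b = beta + sigma * rng.standard_normal(beta.shape)
    return g * x_hat + b
```

With `sigma=0` this reduces to a standard batch-style normalization with a deterministic affine transform; with `sigma>0`, each forward pass samples a slightly different scale and shift, which is one plausible way to realize the stochastic affine transformations the abstract refers to.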