We develop new uncertainty propagation methods for feed-forward neural network architectures with leaky ReLU activation functions subject to random perturbations in the input vectors. In particular, we derive analytical expressions for the probability density function (PDF) of the neural network output and for its statistical moments as functions of the input uncertainty and of the network parameters, i.e., the weights and biases. A key finding is that an appropriate linearization of the leaky ReLU activation function yields accurate statistical results even for large perturbations in the input vectors; this can be attributed to the way information propagates through the network. We also propose new analytically tractable Gaussian copula surrogate models to approximate the full joint PDF of the neural network output. To validate our theoretical results, we conduct Monte Carlo simulations and a thorough error analysis on a multi-layer neural network representing a nonlinear integro-differential operator between two polynomial function spaces. Our findings demonstrate excellent agreement between the theoretical predictions and the Monte Carlo simulations.
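As a rough illustration of the linearization idea mentioned above, the following minimal sketch propagates the mean and covariance of a Gaussian input perturbation through a small feed-forward network by linearizing the leaky ReLU about each pre-activation mean, and compares the result against a Monte Carlo estimate. All names, layer sizes, and the negative slope `ALPHA` are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHA = 0.1  # leaky-ReLU negative slope (illustrative choice)

def leaky_relu(z):
    return np.where(z >= 0, z, ALPHA * z)

def leaky_relu_slope(z):
    return np.where(z >= 0, 1.0, ALPHA)

def propagate_layer_linearized(mu_x, Sigma_x, W, b):
    """Push mean/covariance through one layer by linearizing the
    leaky ReLU about the pre-activation mean (first-order delta method)."""
    mu_z = W @ mu_x + b
    Sigma_z = W @ Sigma_x @ W.T
    D = np.diag(leaky_relu_slope(mu_z))
    return leaky_relu(mu_z), D @ Sigma_z @ D.T

# Toy two-layer network with random weights (hypothetical parameters).
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)

mu0 = np.array([1.0, -0.5, 0.3])
Sigma0 = 0.2 * np.eye(3)  # covariance of the input perturbation

# Analytical (linearized) propagation through both layers.
mu1, Sigma1 = propagate_layer_linearized(mu0, Sigma0, W1, b1)
mu2, Sigma2 = propagate_layer_linearized(mu1, Sigma1, W2, b2)

# Monte Carlo reference.
x = rng.multivariate_normal(mu0, Sigma0, size=200_000)
y = leaky_relu(leaky_relu(x @ W1.T + b1) @ W2.T + b2)
print("linearized mean:", mu2, " MC mean:", y.mean(axis=0))
print("linearized var :", np.diag(Sigma2), " MC var :", y.var(axis=0))
```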
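The Gaussian copula surrogate for the joint output PDF can be sketched in a similar spirit: estimate a copula correlation matrix from normal scores of output samples, then resample through the copula with per-component marginals. The paper derives analytically tractable marginals; this sketch substitutes empirical quantiles, and all names and data below are illustrative stand-ins.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy stand-in for network-output samples of shape (N, d); in practice one
# would use the Monte Carlo samples `y` from the sketch above.
y = rng.multivariate_normal([0.0, 1.0], [[1.0, 0.6], [0.6, 0.5]], size=50_000)
y = np.where(y >= 0, y, 0.1 * y)  # mildly non-Gaussian marginals

N, d = y.shape
ranks = y.argsort(axis=0).argsort(axis=0) + 1   # 1..N per column
u = ranks / (N + 1.0)                           # pseudo-observations in (0,1)
z = norm.ppf(u)                                 # normal scores
R = np.corrcoef(z, rowvar=False)                # copula correlation matrix

# Draw new samples: Gaussian copula + (here) empirical marginal quantiles.
g = rng.multivariate_normal(np.zeros(d), R, size=100_000)
u_new = norm.cdf(g)
y_surrogate = np.column_stack(
    [np.quantile(y[:, j], u_new[:, j]) for j in range(d)]
)
```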