Modelling uncertainty in Machine Learning models is essential for achieving safe and reliable predictions. Most research on uncertainty focuses on output (predictive) uncertainty, while little attention is paid to uncertainty in the inputs. We propose a method for propagating input uncertainty through a Neural Network that simultaneously estimates input, data, and model uncertainty. Our results show that this propagation yields a decision boundary that remains more stable under large amounts of input noise than one obtained with comparatively simple Monte Carlo sampling. Additionally, we discuss and demonstrate that input uncertainty, when propagated through the model, manifests as model uncertainty at the outputs. Explicitly incorporating input uncertainty may be beneficial in situations where the amount of input uncertainty is known, though good datasets for evaluating this are still needed.
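As a point of reference for the Monte Carlo baseline mentioned above, the sketch below shows how Gaussian input noise can be pushed through a network by sampling: noisy copies of an input are drawn, each is passed through the model, and the spread of the outputs summarises the uncertainty the input noise induces. The tiny two-layer network, its random weights, and the `mc_propagate` helper are all hypothetical illustrations, not the method or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: one tanh hidden layer, sigmoid output.
# Weights are random stand-ins for a trained model.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def forward(x):
    """Deterministic forward pass of the toy network."""
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def mc_propagate(x, input_std, n_samples=1000):
    """Monte Carlo propagation of input uncertainty:
    draw noisy copies of x ~ N(x, input_std^2), run each through
    the network, and summarise the induced output distribution."""
    noisy = x + rng.normal(scale=input_std, size=(n_samples, *x.shape))
    preds = np.array([forward(xi) for xi in noisy])
    return preds.mean(axis=0), preds.std(axis=0)

x = np.array([0.5, -1.0])
mean, std = mc_propagate(x, input_std=0.3)
# `std` is nonzero even though the network is deterministic:
# the input noise has become uncertainty at the output.
```

This sampling scheme is simple but costly (one forward pass per sample), which is the baseline the proposed analytic propagation is compared against.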