Transformer models have achieved profound success in prediction tasks across a wide range of applications in natural language processing, speech recognition, and computer vision. Extending this success to safety-critical domains requires calibrated uncertainty estimation, which remains under-explored. To address this, we propose Sparse Gaussian Process attention (SGPA), which performs Bayesian inference directly in the output space of the multi-head attention blocks (MHAs) of a Transformer to calibrate its uncertainty. SGPA replaces the scaled dot-product operation with a valid symmetric kernel and uses sparse Gaussian process (SGP) techniques to approximate the posterior processes of MHA outputs. Empirically, on a suite of prediction tasks on text, images, and graphs, SGPA-based Transformers achieve competitive predictive accuracy while noticeably improving both in-distribution calibration and out-of-distribution robustness and detection.
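To make the mechanism concrete, below is a minimal single-head PyTorch sketch of kernel attention with an SGP-style posterior mean, not the paper's exact parameterization: the class name `SGPAttentionSketch`, the exponentiated scaled dot-product kernel choice, and the simplification of treating key locations as inducing inputs (and value projections as inducing outputs) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SGPAttentionSketch(nn.Module):
    """Single-head self-attention whose output is a sparse-GP posterior mean.

    Illustrative sketch: the query/key projection is shared so the
    exponentiated scaled dot product is a symmetric (valid) kernel; keys
    act as inducing inputs and value projections as inducing outputs.
    """

    def __init__(self, d_model: int, d_head: int, jitter: float = 1e-4):
        super().__init__()
        self.qk_proj = nn.Linear(d_model, d_head, bias=False)  # shared map => symmetric kernel
        self.v_proj = nn.Linear(d_model, d_head, bias=False)
        self.scale = d_head ** -0.5
        self.jitter = jitter  # stabilizes inversion of the inducing covariance

    def kernel(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # k(a, b) = exp(<P a, P b> / sqrt(d_head)); symmetric because P is shared
        return torch.exp(self.qk_proj(a) @ self.qk_proj(b).transpose(-2, -1) * self.scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, d_model); with queries = keys = x this is self-attention
        K = self.kernel(x, x)                                  # (batch, n, n) covariance
        K_mm = K + self.jitter * torch.eye(x.shape[-2], device=x.device)
        u = self.v_proj(x)                                     # "inducing outputs"
        # SGP posterior mean K_nm K_mm^{-1} u, replacing softmax(Q K^T / sqrt(d)) V
        return K @ torch.linalg.solve(K_mm, u)
```

The shared query/key projection is what makes the kernel symmetric and hence a valid covariance function, corresponding to the "valid symmetric kernel" mentioned above; the full method also propagates the SGP posterior variance to quantify uncertainty, which this deterministic sketch omits.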