Existing regression models tend to fall short in both accuracy and uncertainty estimation when the label distribution is imbalanced. In this paper, we propose a probabilistic deep learning model, dubbed variational imbalanced regression (VIR), which not only performs well in imbalanced regression but also naturally produces reasonable uncertainty estimates as a byproduct. Different from typical variational autoencoders, which assume i.i.d. representations (a data point's representation is not directly affected by other data points), our VIR borrows data with similar regression labels to compute the latent representation's variational distribution; furthermore, different from deterministic regression models producing point estimates, VIR predicts entire normal-inverse-gamma distributions and modulates the associated conjugate distributions to impose probabilistic reweighting on the imbalanced data, thereby providing better uncertainty estimation. Experiments on several real-world datasets show that our VIR can outperform state-of-the-art imbalanced regression models in terms of both accuracy and uncertainty estimation. Code will soon be available at https://github.com/Wang-ML-Lab/variational-imbalanced-regression.
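To make the "predicts entire normal-inverse-gamma distributions" idea concrete, below is a minimal sketch (not the paper's actual implementation) of the standard way a network head can output valid normal-inverse-gamma (NIG) parameters and derive uncertainty from them, as in deep evidential regression; the function names and the four-element raw-output layout are illustrative assumptions.

```python
import numpy as np

def softplus(x):
    """Smooth positivity constraint: log(1 + exp(x))."""
    return np.log1p(np.exp(x))

def nig_head(raw):
    """Map 4 raw network outputs to valid NIG parameters (gamma, nu, alpha, beta).

    Layout of `raw` is an illustrative assumption, not the paper's exact head.
    """
    gamma = raw[0]                  # predicted mean, unconstrained
    nu = softplus(raw[1])           # nu > 0 (virtual observations for the mean)
    alpha = softplus(raw[2]) + 1.0  # alpha > 1 so the expected variance is finite
    beta = softplus(raw[3])         # beta > 0 (scale of the inverse-gamma)
    return gamma, nu, alpha, beta

def predictive_moments(gamma, nu, alpha, beta):
    """Prediction and the two standard uncertainty terms under an NIG prior."""
    mean = gamma
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]: noise in the data
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]: uncertainty in the mean
    return mean, aleatoric, epistemic
```

Because the head outputs a full distribution rather than a point estimate, both data noise (aleatoric) and model uncertainty (epistemic) fall out in closed form, which is what enables the uncertainty estimation the abstract refers to.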