Accurately modeling the correlation structure of errors is critical for reliable uncertainty quantification in probabilistic time series forecasting. While recent deep learning models for multivariate time series have developed efficient parameterizations for time-varying contemporaneous covariance, they often assume temporal independence of errors for simplicity. However, real-world data often exhibit significant error autocorrelation and cross-lag correlation due to factors such as missing covariates. In this paper, we introduce a plug-and-play method that learns the covariance structure of errors over multiple steps for autoregressive models with Gaussian-distributed errors. To ensure scalable inference and computational efficiency, we model the contemporaneous covariance using a low-rank-plus-diagonal parameterization and capture cross-covariance through a group of independent latent temporal processes. The learned covariance matrix is then used to calibrate predictions based on observed residuals. We evaluate our method on probabilistic models built on RNN and Transformer architectures, and the results confirm the effectiveness of our approach in improving predictive accuracy and uncertainty quantification without significantly increasing the parameter count.
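To illustrate why the low-rank-plus-diagonal parameterization mentioned above scales well, the following is a minimal NumPy sketch (not the paper's implementation; all names and dimensions are hypothetical). It evaluates the Gaussian log-density under Σ = FFᵀ + diag(d) in O(Nr²) time via the Woodbury identity and the matrix determinant lemma, avoiding the O(N³) factorization of the full N×N covariance, and checks the result against a dense reference computation.

```python
import numpy as np

rng = np.random.default_rng(0)

N, r = 8, 2                            # series dimension, rank of the factor
F = rng.normal(size=(N, r)) * 0.3      # low-rank factor (N x r)
d = np.exp(rng.normal(size=N) * 0.1)   # positive diagonal entries

# Contemporaneous covariance: Sigma = F F^T + diag(d)
Sigma = F @ F.T + np.diag(d)

def lowrank_logpdf(x, F, d):
    """Log-density of N(0, F F^T + diag(d)) at x in O(N r^2)."""
    r = F.shape[1]
    Dinv_x = x / d                     # D^{-1} x
    Dinv_F = F / d[:, None]            # D^{-1} F
    cap = np.eye(r) + F.T @ Dinv_F     # r x r capacitance matrix
    # Woodbury: x^T Sigma^{-1} x = x^T D^{-1} x
    #           - (F^T D^{-1} x)^T cap^{-1} (F^T D^{-1} x)
    FtDx = F.T @ Dinv_x
    quad = x @ Dinv_x - FtDx @ np.linalg.solve(cap, FtDx)
    # Determinant lemma: log|Sigma| = log|cap| + sum(log d)
    logdet = np.linalg.slogdet(cap)[1] + np.log(d).sum()
    n = x.shape[0]
    return -0.5 * (quad + logdet + n * np.log(2.0 * np.pi))

# Draw a sample and compare against the dense O(N^3) evaluation.
x = rng.multivariate_normal(np.zeros(N), Sigma)
ref = -0.5 * (x @ np.linalg.solve(Sigma, x)
              + np.linalg.slogdet(Sigma)[1]
              + N * np.log(2.0 * np.pi))
fast = lowrank_logpdf(x, F, d)
```

Because only the r×r capacitance matrix is ever factorized, the same pattern keeps per-step likelihood and sampling costs linear in the number of series when r ≪ N.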