Gaussian process (GP) models have received increasing attention in recent years due to their superb prediction accuracy and modeling flexibility. To address the computational burden of GP models on large-scale datasets, distributed learning for GPs is often adopted. However, current aggregation models for distributed GPs are not time-efficient when incorporating correlations between GP experts. In this work, we propose a novel approach for aggregated prediction in distributed GPs. The technique is suitable for both exact and sparse variational GPs. The proposed method incorporates correlations among experts, leading to better prediction accuracy with manageable computational requirements. As demonstrated by empirical studies, the proposed approach yields more stable predictions in less time than state-of-the-art consistent aggregation models.
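To make the distributed-GP setting concrete, the sketch below shows the standard baseline that this line of work builds on: the data is partitioned among independent GP experts, each expert produces a local predictive mean and variance, and the local predictions are combined by a precision-weighted product-of-experts (PoE) rule. This is a minimal illustration of the baseline aggregation scheme only, not the paper's correlation-aware method; the kernel hyperparameters and the `poe_aggregate` helper are assumptions chosen for the example.

```python
import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    # Squared-exponential (RBF) kernel matrix between two sets of inputs.
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_predict(Xtr, ytr, Xte, noise=1e-2):
    # Exact GP regression: predictive mean and variance at test inputs.
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xte, Xte)) - np.sum(v**2, 0) + noise
    return mu, var

def poe_aggregate(mus, vars_):
    # Product-of-experts: combine experts by precision (inverse-variance)
    # weighting; experts are treated as independent (no cross-correlations).
    prec = sum(1.0 / v for v in vars_)
    mu = sum(m / v for m, v in zip(mus, vars_)) / prec
    return mu, 1.0 / prec

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Xte = np.linspace(-3, 3, 50)[:, None]

# Partition the data among 4 experts, predict locally, then aggregate.
parts = np.array_split(rng.permutation(200), 4)
preds = [gp_predict(X[idx], y[idx], Xte) for idx in parts]
mu, var = poe_aggregate([p[0] for p in preds], [p[1] for p in preds])
```

Because the PoE rule ignores correlations between experts, its aggregated variance can be miscalibrated; incorporating those correlations, as the abstract describes, is what corrects this at extra computational cost.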