The Multi-Output Gaussian Process (MOGP) is a popular tool for modelling data from multiple sources. A typical choice for building a MOGP covariance function is the Linear Model of Coregionalization (LMC), which models the covariance between outputs parametrically. The Latent Variable MOGP (LV-MOGP) generalises this idea by modelling the covariance between outputs with a kernel applied to latent variables, one per output, yielding a flexible MOGP that generalises efficiently to new outputs with few data points. However, the computational complexity of LV-MOGP grows linearly with the number of outputs, which makes it unsuitable for problems with a large number of outputs. In this paper, we propose a stochastic variational inference approach for the LV-MOGP that allows mini-batches over both inputs and outputs, making the computational complexity per training iteration independent of the number of outputs.
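To make the contrast concrete, the following is a minimal sketch of the two covariance structures under standard LMC/LV-MOGP conventions; the symbols $B_q$, $k_q$, $k_H$, $k_X$, and $h_d$ are our own labels, not necessarily the paper's notation:

\begin{align*}
\text{LMC:} \quad &\operatorname{cov}\big(f_d(x),\, f_{d'}(x')\big) = \sum_{q=1}^{Q} B_q[d, d']\, k_q(x, x'),
&& B_q \in \mathbb{R}^{D \times D} \text{ parametric coregionalization matrices}, \\
\text{LV-MOGP:} \quad &\operatorname{cov}\big(f_d(x),\, f_{d'}(x')\big) = k_H(h_d, h_{d'})\, k_X(x, x'),
&& h_d \text{ a latent variable per output } d.
\end{align*}

In the LV-MOGP form, the between-output covariance is governed by a kernel $k_H$ on the latent space rather than a fixed parametric matrix, which is what allows a new output to be handled by inferring its latent variable $h_d$ from a few observations.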