Flexible and scalable decentralized learning solutions are fundamentally important for applications of multi-agent systems. While several recent approaches introduce (ensembles of) kernel machines in the distributed setting, Bayesian solutions remain far more limited. We introduce a fully decentralized, asymptotically exact solution for computing the random feature approximation of Gaussian processes. We further address the choice of hyperparameters by introducing an ensembling scheme for Bayesian multiple kernel learning based on online Bayesian model averaging. The resulting algorithm is tested against Bayesian and frequentist methods on simulated and real-world datasets.
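The random feature approximation of Gaussian processes referred to above can be illustrated with random Fourier features for the RBF kernel: frequencies are sampled from the kernel's spectral density so that an inner product of finite feature maps approximates the kernel. The sketch below is a generic, centralized illustration of that approximation only (it is not the paper's decentralized algorithm); all variable names and the choice of kernel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 2, 2000          # input dimension, number of random features (assumed values)
lengthscale = 1.0       # RBF lengthscale (assumed value)

# For the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)),
# the spectral density is Gaussian: sample frequencies W from it and phases b uniformly.
W = rng.normal(0.0, 1.0 / lengthscale, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(X):
    """Map inputs (n, d) to D random Fourier features; phi(X) @ phi(Y).T ≈ k(X, Y)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Compare the feature-map approximation with the exact RBF kernel on random points.
X = rng.normal(size=(5, d))
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (2.0 * lengthscale**2))
K_approx = phi(X) @ phi(X).T
print(np.max(np.abs(K_exact - K_approx)))  # approximation error shrinks as D grows
```

Because the Gaussian process posterior computed on these features involves only a D-dimensional weight vector, agents can in principle exchange finite-dimensional sufficient statistics rather than full training sets, which is what makes random features attractive in the decentralized setting the abstract describes.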