The PAC-Bayesian framework has significantly advanced our understanding of statistical learning, particularly in majority voting methods. However, its application to multi-view learning remains underexplored. In this paper, we extend PAC-Bayesian theory to the multi-view setting, introducing novel PAC-Bayesian bounds based on Rényi divergence. These bounds improve upon traditional bounds based on the Kullback-Leibler divergence and offer more refined complexity measures. We further propose first- and second-order oracle PAC-Bayesian bounds, along with an extension of the C-bound to multi-view learning. To ensure practical applicability, we develop efficient optimization algorithms with self-bounding properties.
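To make the divergence comparison concrete, the following sketch computes the Rényi divergence of order α between two discrete distributions and shows that the Kullback-Leibler divergence is recovered in the α → 1 limit. The distributions and function name are illustrative, not taken from the paper.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) between discrete distributions.

    For alpha != 1: D_alpha = log(sum_i p_i^alpha * q_i^(1-alpha)) / (alpha - 1).
    The alpha -> 1 limit is the Kullback-Leibler divergence.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):
        # KL divergence as the limiting case
        return float(np.sum(p * np.log(p / q)))
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

# Toy distributions (hypothetical, for illustration only)
p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]

# D_alpha is non-decreasing in alpha, so KL (alpha -> 1) sits between
# lower-order and higher-order Rényi divergences.
for a in (0.5, 1.0, 2.0):
    print(f"D_{a}(P||Q) = {renyi_divergence(p, q, a):.4f}")
```

The monotonicity in α is what lets Rényi-based bounds interpolate between looser and tighter complexity terms, with the KL-based bound as a special case.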