The PAC-Bayesian framework has significantly advanced the understanding of statistical learning, particularly for majority voting methods. Despite these successes, its application to multi-view learning, a setting with multiple complementary data representations, remains underexplored. In this work, we extend PAC-Bayesian theory to multi-view learning, introducing novel generalization bounds based on R\'enyi divergence. Leveraging the flexibility of R\'enyi divergence, these bounds provide an alternative to their traditional Kullback-Leibler divergence-based counterparts. Furthermore, we propose first- and second-order oracle PAC-Bayesian bounds and extend the C-bound to multi-view settings. To bridge theory and practice, we design efficient self-bounding optimization algorithms that align with our theoretical results.
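For reference, and using notation not fixed by the abstract itself ($\rho$ for a posterior and $\pi$ for a prior over the hypothesis space), the R\'enyi divergence of order $\alpha > 1$ that underlies such bounds is commonly defined as
\begin{equation*}
D_{\alpha}(\rho \,\|\, \pi) = \frac{1}{\alpha - 1} \log \mathbb{E}_{h \sim \pi}\!\left[\left(\frac{\mathrm{d}\rho}{\mathrm{d}\pi}(h)\right)^{\alpha}\right],
\end{equation*}
which recovers the Kullback-Leibler divergence in the limit $\alpha \to 1$; the multi-view bounds developed in the paper may instantiate this quantity differently.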