Hypergraphs provide a natural framework for modeling higher-order interactions, yet their theoretical underpinnings in semi-supervised learning remain limited. We provide an asymptotic consistency analysis of variational learning on random geometric hypergraphs, precisely characterizing the conditions that ensure the well-posedness of hypergraph learning and showing convergence to a weighted $p$-Laplacian equation. Motivated by this analysis, we propose Higher-Order Hypergraph Learning (HOHL), which regularizes via powers of Laplacians from skeleton graphs to enforce multiscale smoothness. HOHL converges to a higher-order Sobolev seminorm. Empirically, it performs strongly against standard baselines.
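A minimal sketch of the HOHL-style regularizer described above, assuming a clique-expansion construction of the skeleton graphs (one per hyperedge cardinality) and illustrative Laplacian powers; the function names and parameter choices here are hypothetical, not the paper's reference implementation.

```python
# Hedged sketch: multiscale smoothness penalty via powers of skeleton-graph
# Laplacians. The clique-expansion skeletons and the chosen powers are
# illustrative assumptions, not necessarily the authors' exact construction.
import numpy as np
from itertools import combinations

def skeleton_laplacians(n, hyperedges):
    """Unnormalized Laplacian of the clique expansion of all hyperedges
    of each cardinality k (the k-th 'skeleton' graph), keyed by k."""
    adj = {}
    for e in hyperedges:
        A = adj.setdefault(len(e), np.zeros((n, n)))
        for i, j in combinations(e, 2):
            A[i, j] = A[j, i] = 1.0
    return {k: np.diag(A.sum(axis=1)) - A for k, A in adj.items()}

def hohl_energy(u, laplacians, powers):
    """Sum of quadratic forms u^T L_k^{p_k} u over the skeleton Laplacians."""
    return sum(u @ np.linalg.matrix_power(L, powers[k]) @ u
               for k, L in laplacians.items())

# Toy usage: 5 nodes, one 3-node hyperedge and two pairwise edges.
L = skeleton_laplacians(5, [(0, 1, 2), (2, 3), (3, 4)])
u = np.array([0.0, 0.1, 0.2, 0.8, 1.0])  # candidate label function
print(hohl_energy(u, L, powers={k: 2 for k in L}))  # illustrative powers p_k = 2
```

Raising each skeleton Laplacian to a power $p_k$ penalizes higher-order differences of $u$ along that skeleton, mirroring the higher-order Sobolev seminorm limit stated above.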