Hypergraph learning with $p$-Laplacian regularization has attracted considerable attention due to its flexibility in modeling higher-order relationships in data. This paper focuses on its fast numerical implementation, which is challenging due to the non-differentiability of the objective function and the non-uniqueness of the minimizer. We derive a hypergraph $p$-Laplacian equation from the subdifferential of the $p$-Laplacian regularization, and propose as an alternative a simplified equation that is mathematically well-posed and computationally efficient. Numerical experiments verify that the simplified $p$-Laplacian equation suppresses spiky solutions in data interpolation and improves classification accuracy in semi-supervised learning. The remarkably low computational cost enables further applications.