Few-shot node classification on hypergraphs requires models that generalize from scarce labels while capturing high-order structures. Existing hypergraph neural networks (HNNs) effectively encode such structures but often suffer from overfitting and scalability issues due to complex, black-box architectures. In this work, we propose ZEN (Zero-Parameter Hypergraph Neural Network), a fully linear and parameter-free model that achieves both expressiveness and efficiency. Built upon a unified formulation of linearized HNNs, ZEN introduces a tractable closed-form solution for the weight matrix and a redundancy-aware propagation scheme to avoid iterative training and to eliminate redundant self-information. On 11 real-world hypergraph benchmarks, ZEN consistently outperforms eight baseline models in classification accuracy while achieving up to 696x speedups over the fastest competitor. Moreover, the decision process of ZEN is fully interpretable, providing insights into the characteristics of a dataset. Our code and datasets are publicly available at https://github.com/chaewoonbae/ZEN.
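To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a linearized, parameter-free hypergraph classifier in the spirit described above: features are propagated with a normalized incidence-based operator whose diagonal is zeroed so a node's own signal is not redundantly re-injected, and the classifier weights are then obtained in closed form on the few labeled nodes instead of by iterative training. The specific propagation operator, the diagonal-removal step, and the ridge regularizer `lam` are illustrative assumptions, not ZEN's exact design.

```python
import numpy as np

def propagate(H, X, steps=2, remove_self=True):
    """H: (n_nodes, n_edges) incidence matrix, X: (n_nodes, d) node features."""
    Dv = H.sum(axis=1)                                  # node degrees
    De = H.sum(axis=0)                                  # hyperedge sizes
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1)))
    De_inv = np.diag(1.0 / np.maximum(De, 1))
    # Normalized hypergraph propagation operator (illustrative choice).
    A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt
    if remove_self:
        np.fill_diagonal(A, 0.0)                        # drop redundant self-information
    Z = X.copy()
    for _ in range(steps):
        Z = A @ Z
    return Z

def closed_form_weights(Z_labeled, Y_labeled, lam=1e-2):
    """Closed-form ridge solution: W = (Z^T Z + lam * I)^{-1} Z^T Y."""
    d = Z_labeled.shape[1]
    return np.linalg.solve(Z_labeled.T @ Z_labeled + lam * np.eye(d),
                           Z_labeled.T @ Y_labeled)

# Toy usage: 4 nodes, 2 hyperedges, 3-dim features, 2 classes, 2 labeled nodes.
H = np.array([[1, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
X = np.random.rand(4, 3)
Y = np.eye(2)[[0, 1]]                                   # one-hot labels for nodes 0 and 1
Z = propagate(H, X)
W = closed_form_weights(Z[:2], Y)
pred = (Z @ W).argmax(axis=1)                           # predictions for all nodes
```

Because both the propagation and the weight computation are linear and training-free, the entire pipeline in this sketch reduces to a fixed sequence of matrix products plus one linear solve, which is where the efficiency and interpretability claims of such models come from.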