This paper explores sparsification methods as a form of regularization in Graph Neural Networks (GNNs) to address the high memory usage and computational cost of large-scale graph applications. Drawing on techniques from network science and machine learning, including Erdős–Rényi random-graph-based model sparsification, we improve the efficiency of GNNs for real-world applications. We demonstrate our approach on N-1 contingency assessment in electrical grids, a critical task for ensuring grid reliability. We apply our methods to three datasets of varying sizes, exploring Graph Convolutional Networks (GCN) and Graph Isomorphism Networks (GIN) under different degrees of sparsification and rewiring. Comparison across sparsification levels shows the potential of combining insights from both research fields to improve GNN performance and scalability. Our experiments highlight the importance of tuning sparsity parameters: while sparsity can improve generalization, excessive sparsity may hinder the learning of complex patterns. Our adaptive rewiring approach, particularly when combined with early stopping, proves promising by allowing the model to adapt its connectivity structure during training. This research contributes to understanding how sparsity can be effectively leveraged in GNNs for critical applications such as power grid reliability analysis.
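To make the Erdős–Rényi sparsification idea mentioned above concrete, the following is a minimal sketch (not the paper's actual implementation): each entry of a dense GNN layer weight matrix is kept independently with probability 1 − s, where s is the target sparsity, mirroring how edges are sampled in an Erdős–Rényi random graph. The function name `erdos_renyi_mask` and the specific shapes are illustrative assumptions.

```python
import numpy as np

def erdos_renyi_mask(shape, sparsity, seed=None):
    """Sample a binary mask in which each connection is kept
    independently with probability (1 - sparsity), analogous to
    edge sampling in an Erdős–Rényi random graph."""
    rng = np.random.default_rng(seed)
    return (rng.random(shape) >= sparsity).astype(np.float32)

# Illustrative use: sparsify a dense 64x32 layer weight matrix
# to roughly 20% active weights (s = 0.8).
W = np.random.default_rng(0).normal(size=(64, 32))
mask = erdos_renyi_mask(W.shape, sparsity=0.8, seed=0)
W_sparse = W * mask  # pruned weights are exactly zero
```

In an adaptive rewiring scheme of the kind the abstract describes, such a mask would not stay fixed: pruned connections can periodically be regrown elsewhere during training, letting the model adjust its connectivity structure while keeping the overall sparsity level constant.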