Graph Neural Networks (GNNs) have become fundamental to semi-supervised learning on graphs, owing to their ability to capture complex node relationships. A recent trend in GNN research focuses on adaptive multi-hop structure learning, moving beyond fixed-hop aggregation toward more flexible and dynamic neighborhood selection. While GAMLP \citep{Zhang_2022} employs separate MLP layers for each multi-hop domain and ImprovingTE \citep{Yao2023ImprovingTE} enhances this design by injecting contextualized substructure information, both methods still rely heavily on predefined sampling strategies, which can limit their ability to generalize and to maintain stable accuracy. To address these limitations, we propose an \textbf{adaptive reconstruction framework} that dynamically refines multi-hop structure learning. Inspired by ``coreset selection'' \citep{guo2022deepcore}, our approach adaptively \textbf{reconstructs} node neighborhoods to optimize message passing, ensuring more \textbf{effective and context-aware information flow} across the graph. To further enhance structural robustness, we introduce two key modules: the \textbf{Distance Recomputator} and the \textbf{Topology Reconstructor} (\textcolor{blue}{DRTR}). The Distance Recomputator \textbf{reassesses and recalibrates} node distances based on adaptive graph properties, yielding \textbf{improved node embeddings} that better reflect latent relationships. Meanwhile, the Topology Reconstructor \textbf{dynamically refines local graph structures}, enabling the model to \textbf{adapt to evolving graph topologies} and to mitigate the impact of noise and mislabeled data. Empirical evaluations demonstrate that our \textbf{adaptive reconstruction framework} achieves \textbf{significant improvements} over existing multi-hop-based models, delivering more \textbf{stable and accurate} performance across a range of graph learning benchmarks.
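To make the intended mechanics concrete, the sketch below illustrates how the two modules could compose: distances are first recalibrated from feature similarity, then edges with large recomputed distances are pruned. This is a minimal illustration under assumed interfaces (PyTorch tensors and a PyG-style \texttt{edge\_index}); the function names, the similarity-based recalibration rule, and the threshold \texttt{tau} are hypothetical choices for exposition, not the method's actual implementation.

\begin{verbatim}
# Minimal sketch of the two DRTR modules, assuming PyTorch and a COO
# edge_index as in PyTorch Geometric. The recalibration rule and the
# pruning threshold tau are illustrative assumptions.
import torch
import torch.nn.functional as F

def recompute_distances(x, edge_index, hop_dist):
    # Distance Recomputator (hypothetical): recalibrate discrete hop
    # distances with feature similarity, so that semantically close
    # node pairs receive a smaller effective distance.
    src, dst = edge_index
    sim = F.cosine_similarity(x[src], x[dst], dim=-1)  # in [-1, 1]
    scale = 1.5 - (sim + 1.0) / 2.0                    # in [0.5, 1.5]
    return hop_dist * scale

def reconstruct_topology(edge_index, dist, tau=1.2):
    # Topology Reconstructor (hypothetical): treat edges whose
    # recomputed distance exceeds tau as likely noise and drop them;
    # the surviving edges define the refined message-passing graph.
    keep = dist <= tau
    return edge_index[:, keep]

# Toy usage: 5 nodes with 16-dim features, four 1-hop edges.
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 4]])
d = recompute_distances(x, edge_index, torch.ones(4))
refined_edge_index = reconstruct_topology(edge_index, d)
\end{verbatim}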