Recent advances in search-augmented large reasoning models (LRMs) enable the retrieval of external knowledge to reduce hallucinations in multi-step reasoning. However, their ability to operate on graph-structured data, prevalent in domains such as e-commerce, social networks, and scientific citations, remains underexplored. Unlike plain-text corpora, graphs encode rich topological signals that connect related entities and can serve as valuable priors for retrieval, enabling more targeted search and improved reasoning efficiency. Yet effectively leveraging such structure poses unique challenges, including the difficulty of generating graph-expressive queries and of ensuring reliable retrieval that balances structural and semantic relevance. To address this gap, we introduce GraphSearch, the first framework to extend search-augmented reasoning to graphs, enabling zero-shot graph learning without task-specific fine-tuning. GraphSearch combines a Graph-aware Query Planner, which disentangles the search space (e.g., 1-hop, multi-hop, or global neighborhoods) from semantic queries, with a Graph-aware Retriever, which constructs candidate sets based on topology and ranks them using a hybrid scoring function. We further instantiate two traversal modes: GraphSearch-R, which recursively expands neighborhoods hop by hop, and GraphSearch-F, which flexibly retrieves across local and global neighborhoods without hop constraints. Extensive experiments across diverse benchmarks show that GraphSearch achieves competitive or even superior performance compared to supervised graph learning methods, setting state-of-the-art results in zero-shot node classification and link prediction. These findings position GraphSearch as a flexible and generalizable paradigm for agentic reasoning over graphs.
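To make the retrieval idea concrete, the following is a minimal, self-contained sketch of topology-constrained candidate construction with hybrid structural/semantic ranking. The toy graph, the bag-of-words similarity, the neighbor-overlap structural score, and the weight `alpha` are all illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of GraphSearch-style hybrid retrieval.
# The graph, similarity measure, and weighting below are illustrative
# assumptions, not the paper's actual scoring function.
from collections import Counter
import math

# Toy attributed graph: node -> (text attribute, neighbor list)
GRAPH = {
    "A": ("graph neural networks survey", ["B", "C"]),
    "B": ("node classification benchmark", ["A", "D"]),
    "C": ("link prediction methods", ["A"]),
    "D": ("social network analysis", ["B"]),
}

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def k_hop_candidates(start: str, k: int) -> set:
    """Candidate set from topology: all nodes within k hops of `start`."""
    frontier, seen = {start}, {start}
    for _ in range(k):
        frontier = {n for u in frontier for n in GRAPH[u][1]} - seen
        seen |= frontier
    return seen - {start}

def hybrid_search(start: str, query: str, k: int = 1, alpha: float = 0.7):
    """Rank candidates by alpha * semantic + (1 - alpha) * structural."""
    ranked = []
    for node in k_hop_candidates(start, k):
        semantic = cosine(query, GRAPH[node][0])
        # Structural score: fraction of the start node's neighbors shared.
        shared = set(GRAPH[start][1]) & set(GRAPH[node][1])
        structural = len(shared) / max(len(GRAPH[start][1]), 1)
        ranked.append((alpha * semantic + (1 - alpha) * structural, node))
    return [n for _, n in sorted(ranked, reverse=True)]

print(hybrid_search("A", "node classification", k=2))  # → ['B', 'D', 'C']
```

In this sketch, the hop budget `k` plays the role of the planner's search-space choice (1-hop vs. multi-hop), while `alpha` trades off semantic against structural relevance; the paper's actual components would replace both heuristics.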