Graph computational tasks are inherently challenging and often demand the development of advanced algorithms for effective solutions. With the emergence of large language models (LLMs), researchers have begun investigating their potential to address these tasks. However, existing approaches are constrained by LLMs' limited capability to comprehend complex graph structures and by their high inference costs, rendering them impractical for large-scale graphs. Inspired by how humans approach graph problems, we introduce a novel framework, PIE (Pseudocode-Injection-Enhanced LLM Reasoning for Graph Computational Tasks), which consists of three key steps: problem understanding, prompt design, and code generation. In this framework, LLMs are tasked with understanding the problem and extracting the relevant information needed to generate correct code, while the responsibility for analyzing the graph structure and executing the code is delegated to the interpreter. We inject task-related pseudocode into the prompts to further assist the LLMs in generating efficient code, and we employ cost-effective trial-and-error techniques to ensure that the LLM-generated code executes correctly. Unlike other methods that must invoke an LLM for each individual test case, PIE calls the LLM only during the code generation phase, allowing the generated code to be reused and significantly reducing inference costs. Extensive experiments demonstrate that PIE outperforms existing baselines in terms of both accuracy and computational efficiency.
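The "generate once, reuse many times" pattern described above can be sketched minimally as follows. The prompt text, the `call_llm` stub, and the generated solver below are hypothetical illustrations (the LLM call is mocked with a fixed BFS solution so the sketch is runnable); they are not the paper's actual prompts or generated code. The key point is structural: a single (mock) LLM invocation produces code, the interpreter executes it, and the compiled solver is then reused across test graphs with no further LLM calls.

```python
# Minimal sketch of PIE's generate-once, reuse-many pattern (hypothetical names).

# Hypothetical task-related pseudocode injected into the prompt.
PSEUDOCODE_HINT = """
BFS_SHORTEST_PATH(graph, source, target):
    initialize queue with source at distance 0
    repeatedly expand the frontier until target is reached
"""

def call_llm(prompt: str) -> str:
    # Placeholder for the single LLM call in the code generation phase.
    # Returns a fixed, correct solution here to keep the sketch runnable.
    return '''
from collections import deque

def solve(adj, source, target):
    """Shortest path length in an unweighted graph via BFS; -1 if unreachable."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        if u == target:
            return dist[u]
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return -1
'''

def generate_solver(task_description: str):
    # One LLM call: the prompt combines the task description with pseudocode.
    code = call_llm(task_description + PSEUDOCODE_HINT)
    namespace = {}
    exec(code, namespace)  # execution is delegated to the interpreter
    return namespace["solve"]

solver = generate_solver("Compute the shortest path length in an unweighted graph.")

# The compiled solver is reused across test cases -- no further LLM calls.
g1 = {0: [1, 2], 1: [3], 2: [3], 3: []}
g2 = {0: [1], 1: [2], 2: []}
print(solver(g1, 0, 3))  # 2
print(solver(g2, 2, 0))  # -1
```

A trial-and-error loop, as mentioned in the abstract, would wrap `generate_solver` so that a runtime error on a small validation case triggers regeneration before the code is reused at scale.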