Large language models (LLMs) have achieved impressive success across several fields, but their proficiency in understanding and resolving complex graph problems remains underexplored. To bridge this gap, we introduce GraphInstruct, a novel and comprehensive instruction-tuning dataset designed to equip language models with the ability to tackle a broad spectrum of graph problems using explicit reasoning paths. Utilizing GraphInstruct, we build GraphWiz, an open-source language model capable of resolving various graph problem types while generating clear reasoning processes. To enhance the model's capability and reliability, we incorporate the Direct Preference Optimization (DPO) framework into the graph problem-solving context. The enhanced model, GraphWiz-DPO, achieves an average accuracy of 65% across nine tasks with different complexity levels, surpassing GPT-4, which has an average accuracy of 43.8%. Moreover, our research delves into the delicate balance between training data volume and model performance, highlighting the potential for overfitting with increased data. We also explore the transferability of the model's reasoning ability across different graph tasks, indicating the model's adaptability and practical application potential. Our investigation offers a new blueprint and valuable insights for developing LLMs specialized in graph reasoning and problem-solving.