Large language models (LLMs) have achieved significant success in reasoning tasks, including mathematical reasoning and logical deduction. Among these reasoning tasks, graph problems stand out due to their complexity and unique structural characteristics, attracting considerable attention from researchers. Previous studies have explored LLMs' graph reasoning abilities through various techniques, such as different encoding methods for graph structures and the use of carefully designed prompts. However, one critical factor has been largely overlooked: the sequential order in which graph descriptions are presented to the models within the prompt. In this study, we present the first comprehensive analysis of how the order of graph descriptions impacts LLM performance. Specifically, we systematically evaluate four graph description orders across six graph problems using six mainstream LLMs. The results reveal that: (1) ordered graph descriptions significantly improve LLMs' comprehension of graph structures; (2) the robustness of LLMs to graph description order varies across tasks; and (3) the impact of graph order on performance is closely related to the inherent characteristics of each task. This study provides a critical advancement in the application of LLMs to graph-related problems, paving the way for future research on optimizing model performance through strategic graph description ordering.
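To make the notion of "graph description order" concrete, the sketch below contrasts a randomly shuffled edge-list description with one reordered by a breadth-first traversal. The natural-language template, the BFS ordering, and all function names here are illustrative assumptions, not the paper's actual prompt formats.

```python
import random
from collections import deque

def describe_edges(edges):
    """Render an edge list as a natural-language graph description."""
    return " ".join(f"Node {u} is connected to node {v}." for u, v in edges)

def bfs_ordered_edges(edges, start):
    """Reorder undirected edges to follow a breadth-first traversal from `start`."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, emitted, ordered = {start}, set(), []
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in sorted(adj.get(u, [])):
            e = tuple(sorted((u, v)))
            if e not in emitted:       # emit each edge once, in traversal order
                emitted.add(e)
                ordered.append(e)
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return ordered

edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
random.seed(0)
print(describe_edges(random.sample(edges, len(edges))))   # shuffled order
print(describe_edges(bfs_ordered_edges(edges, 0)))        # BFS order
```

Both descriptions encode the same graph; only the presentation order of the edges differs, which is precisely the variable whose effect on LLM performance the study measures.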