Large language models (LLMs) have exhibited remarkable few-shot learning capabilities and unified the paradigm of NLP tasks through the in-context learning (ICL) technique. Despite the success of ICL, the quality of exemplar demonstrations can significantly influence an LLM's performance. Existing exemplar selection methods focus mainly on the semantic similarity between queries and candidate exemplars; however, the logical connections between reasoning steps can also help depict the problem-solving process. In this paper, we propose a novel method named Reasoning Graph-enhanced Exemplar Retrieval (RGER). RGER first queries the LLM to generate an initial response, then expresses the intermediate problem-solving steps as a graph structure. After that, it employs a graph kernel to select exemplars with both semantic and structural similarity. Extensive experiments demonstrate that structural relationships help align queries with candidate exemplars. The efficacy of RGER on mathematical and logical reasoning tasks showcases its superiority over state-of-the-art retrieval-based approaches. Our code is released at https://github.com/Yukang-Lin/RGER.
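The retrieval step described above can be sketched as follows. This is a minimal illustration, not RGER's actual implementation: it assumes reasoning graphs are encoded as adjacency/label dictionaries, uses a simple Weisfeiler-Lehman-style kernel as the structural measure, and assumes semantic similarity scores are supplied externally (e.g., from a dense retriever); the `alpha` weighting is likewise an illustrative choice.

```python
import math
from collections import Counter

def wl_features(adj, labels, iterations=2):
    """Multiset of Weisfeiler-Lehman labels accumulated over refinement rounds.
    adj: node -> list of successor nodes; labels: node -> initial step label."""
    feats = Counter(labels.values())
    cur = dict(labels)
    for _ in range(iterations):
        # Refine each node's label with the sorted labels of its neighbors.
        cur = {v: cur[v] + "|" + ",".join(sorted(cur[u] for u in adj[v]))
               for v in adj}
        feats.update(cur.values())
    return feats

def wl_kernel(g1, g2, iterations=2):
    """Normalized dot product of WL feature vectors, in [0, 1]."""
    f1, f2 = wl_features(*g1, iterations), wl_features(*g2, iterations)
    dot = sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())
    norm = math.sqrt(sum(v * v for v in f1.values())
                     * sum(v * v for v in f2.values()))
    return dot / norm if norm else 0.0

def rank_exemplars(query_graph, candidate_graphs, semantic_scores, alpha=0.7):
    """Rank candidates by a convex combination of structural and semantic
    similarity; returns candidate indices, best first."""
    scores = [alpha * wl_kernel(query_graph, g) + (1 - alpha) * s
              for g, s in zip(candidate_graphs, semantic_scores)]
    return sorted(range(len(candidate_graphs)), key=lambda i: -scores[i])

# Toy reasoning graphs: nodes are solution steps, labels are operations.
query = ({"a": ["c"], "b": ["c"], "c": []},
         {"a": "num", "b": "num", "c": "add"})       # two numbers -> add
same  = ({"p": ["r"], "q": ["r"], "r": []},
         {"p": "num", "q": "num", "r": "add"})       # structurally identical
chain = ({"x": ["y"], "y": ["z"], "z": []},
         {"x": "num", "y": "add", "z": "mul"})       # different structure
order = rank_exemplars(query, [same, chain], semantic_scores=[0.5, 0.9])
```

Here the structurally identical exemplar outranks the semantically closer but structurally different one, illustrating how structural alignment can correct a purely semantic ranking.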