Large language models (LLMs) have exhibited remarkable few-shot learning capabilities and unified the paradigm of NLP tasks through the in-context learning (ICL) technique. Despite the success of ICL, the quality of the exemplar demonstrations can significantly influence the LLM's performance. Existing exemplar selection methods mainly focus on the semantic similarity between queries and candidate exemplars. On the other hand, the logical connections between reasoning steps can also help characterize the problem-solving process. In this paper, we propose a novel method named Reasoning Graph-enhanced Exemplar Retrieval (RGER). RGER first queries the LLM to generate an initial response, then converts the intermediate problem-solving steps into a graph structure. After that, it employs a graph kernel to select exemplars that are both semantically and structurally similar. Extensive experiments demonstrate that structural relationships help align queries with candidate exemplars. The efficacy of RGER on math and logic reasoning tasks showcases its superiority over state-of-the-art retrieval-based approaches. Our code is released at https://github.com/Yukang-Lin/RGER.
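To make the structural-similarity step concrete, the following is a minimal sketch of comparing two reasoning graphs with a graph kernel. This is not the authors' implementation: the choice of a Weisfeiler-Lehman subtree kernel, the dict-based graph representation, and the node labels (one operation per reasoning step) are all illustrative assumptions.

```python
from collections import Counter

def wl_kernel(adj_a, labels_a, adj_b, labels_b, iterations=2):
    """Weisfeiler-Lehman subtree kernel between two labeled graphs.

    Each graph is an adjacency dict (node -> list of successor nodes)
    plus a dict of node labels. This is an illustrative sketch, not
    the kernel used in the RGER paper.
    """
    def wl_histogram(adj, labels):
        labels = dict(labels)
        hist = Counter(labels.values())  # iteration-0 label counts
        for _ in range(iterations):
            new_labels = {}
            for node in adj:
                # Relabel each node by its own label plus its sorted
                # neighbor labels (the WL "compressed" label).
                neigh = sorted(labels[n] for n in adj[node])
                new_labels[node] = labels[node] + "|" + ",".join(neigh)
            labels = new_labels
            hist.update(labels.values())
        return hist

    ha, hb = wl_histogram(adj_a, labels_a), wl_histogram(adj_b, labels_b)
    # Kernel value = dot product of the two label histograms.
    return sum(ha[k] * hb[k] for k in ha if k in hb)

# Hypothetical reasoning graphs: three chained steps, labeled by operation.
g1 = {"s1": ["s2"], "s2": ["s3"], "s3": []}
l1 = {"s1": "add", "s2": "mul", "s3": "answer"}
g2 = {"s1": ["s2"], "s2": ["s3"], "s3": []}
l2 = {"s1": "sub", "s2": "mul", "s3": "answer"}  # differs in the first step

self_sim = wl_kernel(g1, l1, g1, l1)
cross = wl_kernel(g1, l1, g2, l2)
# Normalized similarity in [0, 1]; identical graphs score 1.0.
sim = cross / (self_sim ** 0.5 * wl_kernel(g2, l2, g2, l2) ** 0.5)
```

In a retrieval setting, one would rank candidate exemplars by such a normalized kernel score (possibly combined with embedding-based semantic similarity) and pick the top-k as ICL demonstrations.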