While in-context learning (ICL) has proven effective at improving the performance of large language models (LLMs) on a variety of complex tasks, notably translating natural-language questions into Structured Query Language (NL2SQL), how to select the most beneficial demonstration examples remains an open research problem. Prior work often adapts off-the-shelf encoders to retrieve examples dynamically, but an inherent discrepancy exists between the representational capacities of these external retrievers and the LLMs. Further, optimizing example selection is non-trivial, since there is no straightforward way to assess the relative benefit of an example without performing pairwise inference. To address these shortcomings, we propose DeTriever, a novel demonstration-retrieval framework that learns a weighted combination of LLM hidden states, in which rich semantic information is encoded. To train the model, we propose a proxy score that estimates the relative benefit of examples based on the similarities between output queries. Experiments on two popular NL2SQL benchmarks demonstrate that our method significantly outperforms state-of-the-art baselines on one-shot NL2SQL tasks.
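The two core ideas above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the softmax-weighted pooling over layers, and the use of cosine similarity as the proxy score are illustrative assumptions consistent with the description (a learned weighted combination of per-layer LLM hidden states, and a benefit estimate based on similarity between output queries), written with NumPy for self-containment.

```python
import numpy as np

def pool_hidden_states(layer_states: np.ndarray, layer_weights: np.ndarray) -> np.ndarray:
    """Combine per-layer hidden states into one embedding via learned layer
    weights (softmax-normalized here for illustration).

    layer_states: (num_layers, hidden_dim) array of hidden states for one input.
    layer_weights: (num_layers,) learnable scores, one per layer.
    """
    # Numerically stable softmax over the layer dimension.
    w = np.exp(layer_weights - layer_weights.max())
    w = w / w.sum()
    # Weighted sum over layers -> (hidden_dim,) embedding.
    return (w[:, None] * layer_states).sum(axis=0)

def proxy_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two output-query embeddings, standing in for
    the proxy estimate of a demonstration example's relative benefit."""
    denom = np.linalg.norm(emb_a) * np.linalg.norm(emb_b)
    return float(emb_a @ emb_b / denom)
```

At retrieval time, candidate demonstrations would be ranked by such a score against the test question's pooled embedding; training would adjust `layer_weights` (and any projection) so that the learned similarity tracks the proxy benefit.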