The performance of large language models (LLMs) is strongly influenced by the quality of the prompts they are given. In response, researchers have developed numerous prompt engineering strategies that modify the prompt text to improve task performance. In this paper, we introduce a novel technique, termed position engineering, which offers a more efficient way to guide large language models. Unlike prompt engineering, which requires substantial effort to modify the text provided to LLMs, position engineering merely alters the positional information in the prompt without modifying the text itself. We evaluate position engineering in two widely used LLM scenarios: retrieval-augmented generation (RAG) and in-context learning (ICL). Our findings show that position engineering substantially improves upon the baseline in both cases. Position engineering thus represents a promising new strategy for exploiting the capabilities of large language models.
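The core idea, changing positional indices rather than the prompt text, can be illustrated with a minimal sketch. The helper `make_position_ids`, the segment structure, and the gap values below are illustrative assumptions, not the paper's exact formulation: the sketch simply shows how position indices can be offset between prompt segments while leaving every token unchanged.

```python
def make_position_ids(segment_lengths, gaps):
    """Build position ids for concatenated prompt segments, inserting an
    extra positional offset (a "gap") before each segment.

    segment_lengths: token counts of each prompt segment.
    gaps: positional offset inserted before each segment (gaps[0] is
          typically 0). Tokens are untouched; only positions change.
    """
    position_ids = []
    pos = 0
    for length, gap in zip(segment_lengths, gaps):
        pos += gap  # skip `gap` positions: the "edit" position engineering makes
        position_ids.extend(range(pos, pos + length))
        pos += length
    return position_ids

# Two segments of 3 and 4 tokens, with a gap of 5 inserted before the second:
print(make_position_ids([3, 4], [0, 5]))  # [0, 1, 2, 8, 9, 10, 11]
```

In practice, such modified position ids would be passed to the model in place of the default consecutive indices (e.g., via the `position_ids` argument that many transformer implementations expose), so the model treats the segments as farther apart without any change to the prompt text.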