In-context learning has been extensively validated in large language models. However, the selection of in-context examples, a crucial ingredient of this approach, still lacks systematic and in-depth study of both its mechanism and its selection strategy. In this paper, we approach in-context example selection from a data compression perspective. We introduce a two-stage method that effectively chooses relevant examples while retaining sufficient information about the training dataset within the selected examples. Our method yields a significant average improvement of 5.90% across five real-world datasets and four language models.
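The abstract does not specify how the two stages are implemented. Purely as an illustration of a compression-based selection pipeline, the sketch below uses gzip compressed length as an information proxy: stage one filters the pool by normalized compression distance (NCD) to the query, and stage two greedily picks examples whose marginal compressed size is largest, i.e. which add the most new information about the data. All names (`clen`, `ncd`, `select_examples`) and the greedy coverage criterion are assumptions for this sketch, not the paper's actual algorithm.

```python
import gzip

def clen(text: str) -> int:
    """Compressed length of a string under gzip, a rough proxy for information content."""
    return len(gzip.compress(text.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: small when a and b share information."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def select_examples(query: str, pool: list[str], n_candidates: int = 20, k: int = 4) -> list[str]:
    """Two-stage selection sketch (illustrative, not the paper's method).

    Stage 1: keep the n_candidates pool items closest to the query under NCD.
    Stage 2: greedily add k of them, each time picking the candidate whose
    marginal compressed length is largest, i.e. which contributes the most
    information not already covered by the current selection.
    """
    # Stage 1: relevance filtering by compression distance to the query.
    candidates = sorted(pool, key=lambda ex: ncd(query, ex))[:n_candidates]

    # Stage 2: information coverage via greedy compression gain.
    selected: list[str] = []
    while candidates and len(selected) < k:
        base = clen(" ".join(selected)) if selected else 0
        # Marginal compressed size each remaining candidate would add.
        gains = {ex: clen(" ".join(selected + [ex])) - base for ex in candidates}
        best = max(candidates, key=gains.get)
        selected.append(best)
        candidates.remove(best)
    return selected

if __name__ == "__main__":
    pool = [
        "Review: great plot and acting. Sentiment: positive",
        "Review: dull and far too long. Sentiment: negative",
        "Review: a fun, well-acted film. Sentiment: positive",
        "Review: the plot made no sense. Sentiment: negative",
    ]
    print(select_examples("Review: I loved every minute.", pool, n_candidates=4, k=2))
```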