In-context learning (ICL) is an effective approach for adapting large language models (LLMs) to various tasks by providing demonstrations of the target task. Given the high cost of labeling demonstrations, many methods propose synthesizing demonstrations from scratch using LLMs. However, the quality of demonstrations synthesized from scratch is limited by the capabilities and knowledge of the LLMs themselves. To address this, inspired by transfer learning, we propose In-Context Transfer Learning (ICTL), which synthesizes target task demonstrations by transferring labeled demonstrations from similar source tasks. ICTL consists of two steps: source sampling and target transfer. First, we define an optimization objective that minimizes transfer error, and use it to sample source demonstrations similar to the target task. Then, we employ LLMs to transfer the sampled source demonstrations to the target task so that they match the target task's definition and format. Experiments on Super-NI show that ICTL outperforms synthesis from scratch by 2.0% on average, demonstrating the effectiveness of our method.
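The two-step pipeline above can be sketched in code. This is a minimal illustration, not the paper's implementation: it stands in for the minimize-transfer-error objective with a simple similarity ranking over task definitions (using bag-of-words cosine similarity in place of a real embedding model), and the `llm` callable in the transfer step is hypothetical.

```python
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity (a stand-in for a real task-embedding model)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def sample_source_demos(target_def: str, source_demos: list, k: int) -> list:
    """Step 1 (source sampling): keep the k source demonstrations whose task
    definitions are most similar to the target task, approximating the
    minimize-transfer-error objective with a similarity ranking."""
    return sorted(source_demos,
                  key=lambda d: cosine_sim(target_def, d["task_definition"]),
                  reverse=True)[:k]

def transfer_to_target(demo: dict, target_def: str, llm=None) -> dict:
    """Step 2 (target transfer): ask an LLM to rewrite the source demonstration
    to match the target task's definition and format. `llm` is a hypothetical
    callable (prompt -> completion); without one, the demo is passed through."""
    prompt = (f"Target task: {target_def}\n"
              f"Source demonstration: {demo['input']} -> {demo['output']}\n"
              f"Rewrite this demonstration for the target task.")
    if llm is None:  # no model available: pass the demo through unchanged
        return {"input": demo["input"], "output": demo["output"],
                "task_definition": target_def}
    return {"raw": llm(prompt), "task_definition": target_def}

# Toy usage: transfer a sentiment demo to a "review polarity" target task.
target = "classify the polarity of a product review as positive or negative"
sources = [
    {"task_definition": "classify the sentiment of a movie review",
     "input": "A delightful film.", "output": "positive"},
    {"task_definition": "translate English to French",
     "input": "Hello", "output": "Bonjour"},
]
picked = sample_source_demos(target, sources, k=1)
demos = [transfer_to_target(d, target) for d in picked]
```

Ranking by task-definition similarity keeps the transfer step cheap: the LLM only rewrites the few source demonstrations most likely to survive transfer, rather than the whole source pool.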