The mysterious in-context learning (ICL) ability exhibited by Transformer architectures, especially in large language models (LLMs), has recently sparked significant research interest. However, the resilience of Transformers' in-context learning in the presence of noisy samples, which are prevalent in both training corpora and prompt demonstrations, remains underexplored. In this paper, inspired by prior work that studies ICL through simple function classes, we take a closer look at this problem by investigating the robustness of Transformers against noisy labels. Specifically, we first conduct a thorough evaluation and analysis of Transformers' robustness against noisy demonstration labels during in-context learning and show that they exhibit notable resilience against diverse types of label noise. We then delve deeper into this problem by exploring whether introducing noise into the training set, akin to a form of data augmentation, enhances such robustness at inference time, and find that such noise can indeed improve the robustness of ICL. Overall, our analysis and findings provide a comprehensive understanding of the resilience of Transformer models against label noise during ICL and offer valuable insights for research on Transformers in natural language processing. Our code is available at https://github.com/InezYu0928/in-context-learning.
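To make the experimental setting concrete, the noisy-label ICL protocol described above can be sketched as follows. This is a minimal illustration, assuming the simple-function-class framing (linear regression demonstrations), with Gaussian label noise injected into the in-context demonstrations while the query target stays clean; all names, dimensions, and the noise model here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_icl_prompt(d=4, n_demos=8, noise_std=0.5):
    """Build one in-context prompt for linear regression with noisy labels.

    Assumed setup: each demonstration is a pair (x_i, y_i) with
    y_i = w . x_i plus Gaussian label noise; the Transformer is asked
    to predict y_query for a fresh x_query, evaluated against the
    clean (noise-free) target.
    """
    w = rng.standard_normal(d)                # ground-truth linear function
    xs = rng.standard_normal((n_demos, d))    # demonstration inputs
    clean = xs @ w                            # clean demonstration labels
    noisy = clean + noise_std * rng.standard_normal(n_demos)  # inject label noise
    x_query = rng.standard_normal(d)
    y_query = x_query @ w                     # evaluation target stays clean
    return xs, noisy, x_query, y_query

xs, ys, x_query, y_query = make_noisy_icl_prompt()
```

In this framing, robustness during inference corresponds to feeding the `(xs, ys)` pairs with noisy `ys` as the prompt, while "noise as augmentation" corresponds to sampling such noisy prompts during training as well.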