Few-Shot Relation Extraction (FSRE), a subtask of Relation Extraction (RE) that relies on only a limited number of training instances, has attracted increasing attention from researchers in Natural Language Processing (NLP) because it can extract textual information in extremely low-resource scenarios. The primary methodologies for FSRE have been fine-tuning or prompt-tuning techniques based on Pre-trained Language Models (PLMs). Recently, the emergence of Large Language Models (LLMs) has prompted numerous researchers to explore FSRE through In-Context Learning (ICL). However, methods based on either traditional RE models or LLMs face substantial limitations: traditional RE models are hampered by a lack of necessary prior knowledge, while LLMs fall short in task-specific capability for RE. To address these shortcomings, we propose a Dual-System Augmented Relation Extractor (DSARE), which synergistically combines traditional RE models with LLMs. Specifically, DSARE injects the prior knowledge of LLMs into traditional RE models, and conversely enhances LLMs' task-specific aptitude for RE through relation extraction augmentation. Moreover, an Integrated Prediction module jointly considers the two respective predictions and derives the final result. Extensive experiments demonstrate the efficacy of our proposed method.
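The following is a minimal, hypothetical sketch of the dual-system idea described above: two separate predictors (a PLM-based RE model and an LLM queried via ICL) whose outputs are reconciled by a joint decision step. The class and function names (TraditionalREModel, LLMExtractor, integrated_predict) and the confidence-based vote are illustrative assumptions, not the actual DSARE implementation or its augmentation procedures.

```python
# Hypothetical sketch of a dual-system RE pipeline; names and the voting rule
# are assumptions for illustration, not the DSARE paper's implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class Prediction:
    relation: str       # predicted relation label
    confidence: float   # score in [0, 1]


class TraditionalREModel:
    """Stand-in for a fine-tuned PLM-based relation extractor."""

    def predict(self, sentence: str, head: str, tail: str) -> Prediction:
        # In practice this would run an encoder plus a classification head.
        return Prediction(relation="no_relation", confidence=0.5)


class LLMExtractor:
    """Stand-in for an LLM queried via In-Context Learning (ICL)."""

    def __init__(self, demonstrations: List[str]):
        # Demonstrations selected or augmented for the RE task.
        self.demonstrations = demonstrations

    def predict(self, sentence: str, head: str, tail: str) -> Prediction:
        # In practice this would build an ICL prompt and parse the LLM answer.
        return Prediction(relation="no_relation", confidence=0.5)


def integrated_predict(p_re: Prediction, p_llm: Prediction) -> str:
    """Jointly consider both predictions; here, a simple confidence-based vote."""
    if p_re.relation == p_llm.relation:
        return p_re.relation
    return p_re.relation if p_re.confidence >= p_llm.confidence else p_llm.relation


if __name__ == "__main__":
    sentence = "Marie Curie was born in Warsaw."
    re_model, llm = TraditionalREModel(), LLMExtractor(demonstrations=[])
    final = integrated_predict(
        re_model.predict(sentence, "Marie Curie", "Warsaw"),
        llm.predict(sentence, "Marie Curie", "Warsaw"),
    )
    print(final)
```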