Recent research in zero-shot Relation Extraction (RE) has focused on Large Language Models (LLMs) because of their impressive zero-shot capabilities. However, current methods often perform suboptimally, mainly because they lack the detailed, context-specific prompts needed to handle diverse sentences and relations. To address this, we introduce the Self-Prompting framework, a novel method designed to fully harness the RE knowledge embedded within LLMs. Specifically, our framework employs a three-stage diversity approach to prompt LLMs, generating multiple synthetic samples that encapsulate specific relations from scratch. These generated samples then serve as in-context demonstrations, offering explicit and context-specific guidance that efficiently prompts LLMs for RE. Experimental evaluations on benchmark datasets show that our approach outperforms existing LLM-based zero-shot RE methods. Additionally, our experiments confirm that our generation pipeline produces high-quality synthetic data that enhances performance.
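The in-context prompting step described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the function names, the prompt template, and the demonstration tuples are all hypothetical assumptions about how synthetic samples might be formatted into an RE prompt.

```python
# Hypothetical sketch: synthetic samples generated for each relation are
# rendered as demonstrations and prepended to the test instance, so the
# LLM receives explicit, context-specific guidance for the RE task.

def format_demo(sentence, head, tail, relation):
    """Render one synthetic sample as an in-context demonstration."""
    return (f"Sentence: {sentence}\n"
            f"Head entity: {head}\n"
            f"Tail entity: {tail}\n"
            f"Relation: {relation}")

def build_re_prompt(demos, test_sentence, head, tail, relation_set):
    """Assemble a full RE prompt from synthetic demonstrations.

    demos: list of (sentence, head, tail, relation) tuples.
    The returned string ends at "Relation:" so the LLM completes
    the label for the test instance.
    """
    header = ("Classify the relation between the head and tail entities. "
              f"Choose from: {', '.join(sorted(relation_set))}.\n\n")
    body = "\n\n".join(format_demo(*d) for d in demos)
    query = (f"\n\nSentence: {test_sentence}\n"
             f"Head entity: {head}\n"
             f"Tail entity: {tail}\n"
             f"Relation:")
    return header + body + query

# Example with two illustrative synthetic samples:
demos = [
    ("Marie Curie was born in Warsaw.",
     "Marie Curie", "Warsaw", "place_of_birth"),
    ("Tim Cook is the CEO of Apple.",
     "Tim Cook", "Apple", "employer"),
]
prompt = build_re_prompt(demos, "Ada Lovelace was born in London.",
                         "Ada Lovelace", "London",
                         {"place_of_birth", "employer"})
print(prompt)
```

The assembled prompt would then be sent to the LLM, whose completion after the final "Relation:" is taken as the predicted label for the test sentence.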