This paper presents several strategies for automatically obtaining additional examples for in-context learning in one-shot relation extraction. Specifically, we introduce a novel example-selection strategy in which new examples are selected based on the similarity of their underlying syntactic-semantic structure to the provided one-shot example. We show that this method yields complementary word choices and sentence structures compared with LLM-generated examples. When these strategies are combined, the resulting hybrid system captures a more holistic picture of the relations of interest than either method alone. Our framework transfers well across datasets (FS-TACRED and FS-FewRel) and LLM families (Qwen and Gemma). Overall, our hybrid selection method consistently outperforms alternative strategies, achieving state-of-the-art performance on FS-TACRED and strong gains on a customized FewRel subset.
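The structure-based selection described above can be illustrated with a minimal sketch. The paper does not specify the similarity scorer, so the following is an assumption for illustration only: each sentence is reduced to a multiset of dependency labels along its subject-object path (a hypothetical "structural signature"), and candidates are ranked by multiset-Jaccard overlap with the one-shot example's signature.

```python
from collections import Counter

def structural_similarity(sig_a: Counter, sig_b: Counter) -> float:
    """Jaccard overlap between two multisets of syntactic features.
    A stand-in for the paper's syntactic-semantic similarity; the
    actual scorer used in the paper is not reproduced here."""
    inter = sum((sig_a & sig_b).values())
    union = sum((sig_a | sig_b).values())
    return inter / union if union else 0.0

def select_examples(one_shot_sig: Counter, candidate_sigs: dict, k: int = 3):
    """Rank candidate examples by structural similarity to the one-shot
    example and return the top-k names."""
    ranked = sorted(candidate_sigs.items(),
                    key=lambda kv: structural_similarity(one_shot_sig, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy signatures: dependency labels on the subject-object path
# (hypothetical values, not taken from the paper's datasets).
one_shot = Counter(["nsubj", "prep", "pobj"])
candidates = {
    "cand_a": Counter(["nsubj", "prep", "pobj"]),          # identical structure
    "cand_b": Counter(["nsubj", "dobj"]),                  # different structure
    "cand_c": Counter(["nsubj", "prep", "pobj", "amod"]),  # close structure
}
print(select_examples(one_shot, candidates, k=2))  # → ['cand_a', 'cand_c']
```

In a full pipeline, the selected examples would be concatenated with LLM-generated ones to form the hybrid in-context prompt.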