Large Language Models (LLMs) have demonstrated exceptional abilities in comprehending and generating text, motivating numerous researchers to apply them to Information Extraction (IE) tasks, including Relation Extraction (RE). Nonetheless, most existing methods are designed for Sentence-level Relation Extraction (SentRE), which typically involves a restricted set of relations and triplet facts within a single sentence. Furthermore, some approaches treat relations as candidate choices embedded in prompt templates, leading to inefficient processing and suboptimal performance on Document-Level Relation Extraction (DocRE), which poses the distinct challenge of handling multiple relations and triplet facts distributed across an entire document. To overcome these limitations, we introduce AutoRE, an end-to-end DocRE model that adopts a novel extraction paradigm named RHF (Relation-Head-Facts). Unlike existing approaches, AutoRE does not rely on the assumption of known relation options, making it more reflective of real-world scenarios. Additionally, we develop an easily extensible RE framework using a Parameter-Efficient Fine-Tuning (PEFT) algorithm (QLoRA). Our experiments on the RE-DocRED dataset demonstrate that AutoRE achieves state-of-the-art performance, surpassing TAG by 10.03% and 9.03% on the dev and test sets, respectively.