Despite the rapid progress that existing automated feedback methods have made in correcting the outputs of large language models (LLMs), these methods are ill-suited to the relation extraction (RE) task because of their predefined feedback objectives and correction manner. To address this problem, we propose a novel automated feedback framework for RE, which introduces a rationale supervisor to verify the rationale and provides re-selected demonstrations as feedback to correct the initial prediction. Specifically, we first design a causal intervention and observation method to collect biased/unbiased rationales for contrastively training the rationale supervisor. We then present a verification-feedback-correction procedure that iteratively enhances LLMs' capability to handle the RE task. Extensive experiments demonstrate that our proposed framework significantly outperforms existing methods.
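The verification-feedback-correction procedure described above can be sketched as a simple loop. This is a minimal illustration under assumed interfaces: `llm_predict`, `rationale_supervisor`, and `reselect_demos` are hypothetical names standing in for the LLM call, the trained supervisor, and the demonstration re-selection step, none of which are specified at the code level in the abstract.

```python
# Hypothetical sketch of the verification-feedback-correction loop.
# llm_predict, rationale_supervisor, and reselect_demos are illustrative
# placeholders, not the authors' actual API.

def correct_with_feedback(llm_predict, rationale_supervisor,
                          reselect_demos, x, demos, max_rounds=3):
    """Re-prompt the LLM until its rationale passes verification
    or the round budget is exhausted."""
    label = None
    for _ in range(max_rounds):
        # Predict a relation label together with its rationale.
        label, rationale = llm_predict(x, demos)
        # Verification: the supervisor judges whether the rationale is biased.
        if rationale_supervisor(rationale):
            return label  # rationale accepted; keep this prediction
        # Feedback: re-select demonstrations to correct the next attempt.
        demos = reselect_demos(x, rationale)
    return label  # fall back to the last prediction
```

The loop terminates either when the supervisor accepts a rationale or after a fixed number of correction rounds, mirroring the iterative enhancement the abstract describes.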