In fact-checking, the structure and phrasing of a claim critically influence a model's ability to predict verdicts accurately. Social media content in particular rarely serves as optimal input for verification systems, which necessitates pre-processing to extract the claim from its noisy context before fact-checking. Prior work suggests extracting a claim representation that humans find checkworthy and verifiable. This has two limitations: (1) the format may not be optimal for a fact-checking model, and (2) it requires annotated data to learn the extraction task. We address both issues and propose a claim-extraction method that does not rely on labeled training data. Instead, our self-adaptive approach requires only a black-box fact-checking model and a generative language model (LM). Given a tweet, we iteratively optimize the LM to generate a claim paraphrase that increases the performance of the fact-checking model. By learning from preference pairs, we align the LM to the fact checker using direct preference optimization (DPO). We show that this novel setup extracts claim paraphrases that are more verifiable than their original social media formulations and on par with competitive baselines. For refuted claims, our method consistently outperforms all baselines.
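The core loop described above can be sketched minimally: sample candidate paraphrases, rank them by a black-box fact checker's score, form a (chosen, rejected) pair, and apply the standard DPO loss. The helper names (`fact_checker_score`, `build_preference_pair`) are hypothetical illustrations, not the paper's actual API; the loss itself is the generic DPO objective, shown here for a single pair.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def build_preference_pair(paraphrases, fact_checker_score):
    """Hypothetical pair construction: rank candidate claim paraphrases by the
    black-box fact checker's score (e.g. probability of the gold verdict) and
    treat the best as 'chosen', the worst as 'rejected'."""
    ranked = sorted(paraphrases, key=fact_checker_score, reverse=True)
    return ranked[0], ranked[-1]

# Illustrative scores a fact checker might assign to two paraphrases of a tweet.
scores = {"Vaccine X reduces hospitalization by 90%.": 0.85,
          "so apparently vaccine x basically cures everything??": 0.20}
chosen, rejected = build_preference_pair(list(scores), scores.get)
```

When the policy assigns the chosen paraphrase relatively more log-probability than the reference model does (compared to the rejected one), the margin is positive and the loss drops below `log(2)`, which is its value at indifference; training on many such pairs is what aligns the LM to the fact checker.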