Amid widespread misinformation and disinformation on social media and the proliferation of AI-generated text, it has become increasingly difficult for people to validate and trust the information they encounter. Many fact-checking approaches and tools have been developed, but they often lack the explainability or granularity to be useful in varied contexts. A text validation method that is easy to use, accessible, and capable of fine-grained evidence attribution has therefore become crucial. More importantly, building user trust in such a method requires presenting the rationale behind each prediction, as research shows this significantly influences people's belief in automated systems. Rather than providing simple blanket labels, it is also paramount to localize problematic content and draw users' attention to it. In this paper, we present ClaimVer, a human-centric framework tailored to meet users' informational and verification needs by generating rich annotations and thereby reducing cognitive load. Designed to deliver comprehensive evaluations of texts, it highlights each claim, verifies it against a trusted knowledge graph (KG), presents the evidence, and provides succinct, clear explanations for each claim prediction. Finally, our framework introduces an attribution score, enhancing its applicability across a wide range of downstream tasks.