Learning multi-task models that jointly detect stance and verify rumors is challenging because it requires stance annotations at the post level and rumor veracity annotations at the claim level, both of which are difficult to obtain. To address this issue, we leverage large language models (LLMs) as foundation annotators for the joint stance detection (SD) and rumor verification (RV) tasks, dubbed JSDRV. We introduce a novel reinforcement tuning framework to enhance the joint predictive capabilities of the LLM-based SD and RV components. Specifically, we devise a policy for selecting LLM-annotated data at the two levels, employing a hybrid reward mechanism to choose high-quality labels for effective LLM fine-tuning on both tasks. Results demonstrate that JSDRV improves the capabilities of LLMs on the joint tasks, not only outperforming state-of-the-art methods but also generalizing to non-LLMs accommodated as task models.
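To make the selection mechanism concrete, below is a minimal, hypothetical sketch of how a policy might filter LLM-annotated posts and claims with a hybrid reward before fine-tuning. All names (`Annotated`, `hybrid_reward`, `select_for_finetuning`), the epsilon-greedy policy, and the mixing weight `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
import random
from dataclasses import dataclass


@dataclass
class Annotated:
    """One LLM-annotated example: a post (stance) or a claim (veracity)."""
    text: str
    llm_label: str       # stance label at post level, or veracity label at claim level
    confidence: float    # LLM's self-reported confidence in [0, 1] (assumed available)


def hybrid_reward(sample: Annotated, task_model_prob: float, alpha: float = 0.5) -> float:
    """Hypothetical hybrid reward: mix the LLM's annotation confidence with the
    current task model's probability of the same label."""
    return alpha * sample.confidence + (1 - alpha) * task_model_prob


def select_for_finetuning(pool, task_model_probs, threshold=0.6, epsilon=0.1):
    """Toy epsilon-greedy selection policy: keep high-reward annotations,
    occasionally keeping a low-reward one to retain exploration."""
    selected = []
    for sample, prob in zip(pool, task_model_probs):
        reward = hybrid_reward(sample, prob)
        if reward >= threshold or random.random() < epsilon:
            selected.append((sample, reward))
    return selected


if __name__ == "__main__":
    pool = [
        Annotated("Reply doubting the claim", "deny", 0.9),
        Annotated("Reply repeating the claim", "support", 0.4),
    ]
    # task_model_probs would come from the current SD/RV models' predictions
    for sample, reward in select_for_finetuning(pool, task_model_probs=[0.8, 0.3]):
        print(sample.llm_label, round(reward, 2))
```

In this sketch the selected, high-reward annotations would then be used to fine-tune the SD and RV components, and the updated models would in turn rescore the pool on the next round; the actual policy, reward design, and update rule follow the paper, not this toy loop.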