The rapid advancement of social media platforms has significantly reduced the cost of information dissemination, yet it has also led to a proliferation of fake news, posing a threat to societal trust and credibility. Most fake news detection research has focused on integrating text and image information to represent the consistency of multiple modalities in news content, while paying less attention to inconsistent information. Moreover, existing methods that leverage inconsistent information often let one modality overshadow another, leading to ineffective use of inconsistent clues. To address these issues, we propose an adaptive multi-modal feature fusion network (MFF-Net). Inspired by the human process of judging the truth or falsity of news, MFF-Net focuses on inconsistent parts when news content is generally consistent, and on consistent parts when it is generally inconsistent. Specifically, MFF-Net extracts semantic and global features from images and texts respectively, and learns consistency information between modalities through a multiple feature fusion module. To address the problem of modal information being easily masked, we design a single-modality feature filtering strategy that captures inconsistent information from each modality separately. Finally, similarity scores are computed from the global features and adaptively adjusted to achieve a weighted fusion of consistent and inconsistent features. Extensive experiments demonstrate that MFF-Net outperforms state-of-the-art methods on three public news datasets collected from real social media platforms.
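The adaptive weighting idea in the abstract can be illustrated with a minimal sketch: a similarity score between the global image and text features decides how strongly the inconsistent features (versus the consistent ones) contribute to the fused representation. All names and the exact weighting form below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def adaptive_fusion(img_global, txt_global, consistent_feat, inconsistent_feat):
    """Hypothetical sketch of similarity-guided weighted fusion.

    A cosine similarity between the global image/text features is mapped
    to [0, 1] and used as an adaptive weight: when the news content looks
    generally consistent (high similarity), the inconsistent features are
    emphasized; when it looks generally inconsistent, the consistent
    features are emphasized instead.
    """
    # Cosine similarity between the two global modality features
    cos = np.dot(img_global, txt_global) / (
        np.linalg.norm(img_global) * np.linalg.norm(txt_global)
    )
    s = (cos + 1.0) / 2.0  # map from [-1, 1] to [0, 1]
    # High s -> weight the inconsistent branch; low s -> the consistent branch
    fused = s * inconsistent_feat + (1.0 - s) * consistent_feat
    return fused, s

# Toy example with random feature vectors
rng = np.random.default_rng(0)
img_g, txt_g = rng.normal(size=64), rng.normal(size=64)
cons, incons = rng.normal(size=128), rng.normal(size=128)
fused, score = adaptive_fusion(img_g, txt_g, cons, incons)
print(fused.shape, 0.0 <= score <= 1.0)
```

In a real model the weight would be produced by learned layers over the global features rather than a raw cosine score, but the sketch captures the abstract's stated behavior: attention shifts toward whichever cue (consistent or inconsistent) is the minority signal.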