Social media platforms are hubs for multimodal information exchange, encompassing text, images, and videos, making it challenging for machines to comprehend the information or emotions associated with interactions in online spaces. Multimodal Large Language Models (MLLMs) have emerged as a promising solution to these challenges, yet they struggle to accurately interpret human emotions and complex content such as misinformation. This paper introduces MM-Soc, a comprehensive benchmark designed to evaluate MLLMs' understanding of multimodal social media content. MM-Soc compiles prominent multimodal datasets and incorporates a novel large-scale YouTube tagging dataset, targeting a range of tasks including misinformation detection, hate speech detection, and social context generation. Through an exhaustive evaluation of ten size variants of four open-source MLLMs, we identify significant performance disparities, highlighting the need for advancements in models' social understanding capabilities. Our analysis reveals that, in a zero-shot setting, various types of MLLMs generally exhibit difficulties in handling social media tasks. However, MLLMs demonstrate performance gains after fine-tuning, suggesting potential pathways for improvement. Our code and data are available at https://github.com/claws-lab/MMSoc.git.