Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines. While advances have been made in natural language processing, real-world humor often thrives in a multi-modal context, encapsulated distinctively by memes. This paper places particular emphasis on the impact of multiple images on meme captioning. We then introduce the \textsc{XMeCap} framework, a novel approach that combines supervised fine-tuning with reinforcement learning based on an innovative reward model, which accounts for both global and local similarities between visuals and text. Our results, benchmarked against contemporary models, show a marked improvement in caption generation for both single-image and multi-image memes, as well as across different meme categories. \textsc{XMeCap} achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71\% and 4.82\%, respectively. This research not only establishes a new frontier in meme-related studies but also underscores the potential of machines to understand and generate humor in a multi-modal setting.