Facial micro-expressions (MEs) are involuntary facial movements that occur spontaneously when a person experiences an emotion but attempts to suppress or repress its expression, typically in high-stakes environments. In recent years, substantial progress has been made in ME recognition, spotting, and generation. However, conventional approaches that treat spotting and recognition as separate tasks are suboptimal, particularly for analyzing long-duration videos in realistic settings. Concurrently, the emergence of multimodal large language models (MLLMs) and large vision-language models (LVLMs) offers promising new avenues for enhancing ME analysis through their powerful multimodal reasoning capabilities. The ME grand challenge (MEGC) 2025 introduces two tasks that reflect these evolving research directions: (1) ME spot-then-recognize (ME-STR), which integrates ME spotting and subsequent recognition in a unified sequential pipeline; and (2) ME visual question answering (ME-VQA), which explores ME understanding through visual question answering, leveraging MLLMs or LVLMs to address diverse question types related to MEs. All participating algorithms are required to run on the challenge test set and submit their results to a leaderboard. More details are available at https://megc2025.github.io.