Current facial expression recognition (FER) models are typically trained in a supervised manner and are thus constrained by the scarcity of large-scale facial expression images with high-quality annotations. Consequently, these models often generalize poorly to unseen images at inference time. Vision-language-based zero-shot models show promise for addressing this challenge; however, they lack task-specific knowledge and are therefore not optimized for the nuances of recognizing facial expressions. To bridge this gap, this work proposes a novel method, Exp-CLIP, which enhances zero-shot FER by transferring task knowledge from large language models (LLMs). Specifically, on top of the pre-trained vision-language encoders, we incorporate a projection head designed to map the initial joint vision-language space into a space that captures representations of facial actions. To train this projection head for subsequent zero-shot prediction, we align the projected visual representations with task-specific semantic meanings derived from the LLM encoder, employing a text instruction-based strategy to customize the LLM knowledge. Requiring only unlabelled facial data and efficient training of the projection head, Exp-CLIP achieves zero-shot results superior to those of CLIP models and several other large vision-language models (LVLMs) on seven in-the-wild FER datasets. The code and pre-trained models are available at https://github.com/zengqunzhao/Exp-CLIP.
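The core idea described above — training a lightweight projection head to align visual features with LLM-derived text features, then reusing it for zero-shot classification — can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and a CLIP-style symmetric contrastive objective; the function names, the linear form of the projection head, and the temperature value are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize rows to unit length for cosine-similarity comparisons."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def alignment_loss(visual_feats, text_feats, W, temperature=0.07):
    """Hypothetical training objective for the projection head W:
    project frozen-encoder visual features, then align them to
    LLM-derived text features with a symmetric InfoNCE-style loss
    (matched image/text pairs are the positives on the diagonal)."""
    z_v = l2_normalize(visual_feats @ W)   # projected visual embeddings
    z_t = l2_normalize(text_feats)         # LLM text embeddings
    logits = (z_v @ z_t.T) / temperature   # pairwise cosine similarities
    labels = np.arange(logits.shape[0])

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def zero_shot_predict(visual_feats, class_text_feats, W):
    """Hypothetical inference sketch: assign each image the expression
    class whose prompt embedding is most similar to the projected
    visual feature (no labelled FER data used in training)."""
    z_v = l2_normalize(visual_feats @ W)
    z_c = l2_normalize(class_text_feats)
    return (z_v @ z_c.T).argmax(axis=1)
```

In this sketch only `W` is trained; the vision and text encoders stay frozen, which is what makes the projection-head training efficient and applicable to unlabelled facial data.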