Facial expression recognition (FER) models are widely used in video-based affective computing applications such as human-computer interaction and healthcare monitoring. However, deep FER models often struggle with subtle expressions and high inter-subject variability, which limits their performance in real-world settings. Source-free domain adaptation (SFDA) has been proposed to personalize a pretrained source model using only unlabeled target data, thereby sidestepping privacy, storage, and transmission constraints. We address a particularly challenging setting in which source data is unavailable and the target data contains only neutral expressions. Existing SFDA methods are not designed to adapt from a single target class, and generating non-neutral facial images is often unstable and expensive. To address this, we propose Source-Free Domain Adaptation with Personalized Feature Translation (SFDA-PFT), a lightweight latent-space approach. A translator is first pretrained on source data to map subject-specific style features between subjects while preserving expression information, using expression-consistency and style-aware objectives. The translator is then adapted to neutral target data without access to source data or image synthesis. By operating in the latent space, SFDA-PFT avoids noisy facial image generation, reduces computation, and learns discriminative embeddings for classification. Experiments on BioVid, StressID, BAH, and Aff-Wild2 show that SFDA-PFT consistently outperforms state-of-the-art SFDA methods in privacy-sensitive FER scenarios. Our code is publicly available at: \href{https://github.com/MasoumehSharafi/SFDA-PFT}{GitHub}.
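The core idea of latent-space feature translation can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual architecture: we pretend a latent vector decomposes additively into an expression "content" part and a subject-specific "style" part, so that translation amounts to swapping styles, and expression consistency can be checked on the style-removed parts.

```python
import numpy as np

# Toy sketch of latent-space feature translation (illustrative only; the
# actual SFDA-PFT encoder, translator, and losses are not reproduced here).
# Assumption: latent = expression content + subject style, additively.

rng = np.random.default_rng(0)
DIM = 8

def make_latent(content, style):
    """Hypothetical latent vector: expression content plus subject style."""
    return content + style

def translate(latent, src_style, tgt_style):
    """Swap subject style while preserving expression content."""
    return latent - src_style + tgt_style

def expression_consistency(z_a, z_b, style_a, style_b):
    """Toy expression-consistency score: cosine similarity of the
    style-removed (content) parts of two latents."""
    c_a, c_b = z_a - style_a, z_b - style_b
    return float(c_a @ c_b / (np.linalg.norm(c_a) * np.linalg.norm(c_b)))

# An expression encoded under subject A's style, translated to subject B.
content_expr = rng.normal(size=DIM)
style_a = rng.normal(size=DIM)
style_b = rng.normal(size=DIM)

z_a = make_latent(content_expr, style_a)
z_b = translate(z_a, style_a, style_b)

# Style changes, but in this toy model the expression content is preserved
# exactly, so the consistency score is 1.0.
score = expression_consistency(z_a, z_b, style_a, style_b)
print(round(score, 6))  # → 1.0
```

In the real method the decomposition is learned rather than additive, and the expression-consistency and style-aware objectives are training losses on the translator; the sketch only conveys why operating on latent vectors avoids synthesizing facial images.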