Despite the considerable advances achieved by deep neural networks, their performance tends to degrade when the test environment diverges from the training one. Domain generalization (DG) addresses this issue by learning representations independent of domain-related information, thereby facilitating extrapolation to unseen environments. Existing approaches typically focus on formulating tailored training objectives to extract features shared across the source data. However, the disjoint training and testing procedures may compromise robustness, particularly in the face of unforeseen variations at deployment. In this paper, we propose a novel and holistic causality-based framework, named InPer, designed to enhance model generalization by incorporating causal intervention during training and causal perturbation during testing. Specifically, during the training phase, we employ entropy-based causal intervention (EnIn) to refine the selection of causal variables. To identify target-domain samples whose causal variables resist interference, we introduce a novel metric, the homeostatic score, computed via causal perturbation (HoPer), and use it to construct a prototype classifier at test time. Experimental results across multiple cross-domain tasks confirm the efficacy of InPer.
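The abstract does not specify how the homeostatic score or the prototype classifier is computed, so the following is only a minimal sketch of the general test-time idea it gestures at: score each test sample by prediction reliability, keep the most reliable ones, and average their features per pseudo-label to form class prototypes for nearest-prototype classification. Softmax entropy is used here as a hypothetical stand-in for the paper's homeostatic score; all function names and the `keep_ratio` parameter are illustrative assumptions, not the authors' method.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy(probs):
    # Shannon entropy per sample; lower entropy = more confident prediction
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def build_prototype_classifier(features, logits, keep_ratio=0.5):
    """Keep the lowest-entropy (most reliable) test samples and average
    their features per pseudo-label to form one prototype per class.
    Entropy here is a placeholder for a reliability score such as the
    paper's homeostatic score."""
    probs = softmax(logits)
    scores = entropy(probs)
    pseudo_labels = probs.argmax(axis=1)
    n_keep = max(1, int(len(scores) * keep_ratio))
    kept = np.argsort(scores)[:n_keep]          # most reliable subset
    n_classes = logits.shape[1]
    prototypes = np.zeros((n_classes, features.shape[1]))
    for c in range(n_classes):
        members = kept[pseudo_labels[kept] == c]
        if len(members):
            prototypes[c] = features[members].mean(axis=0)
    return prototypes

def classify(features, prototypes):
    # nearest-prototype assignment by squared Euclidean distance
    dists = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```

In practice such a classifier can complement (or replace) the trained linear head at test time, since prototypes adapt to the target-domain feature distribution without any gradient updates.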