Despite the considerable advances achieved by deep neural networks, their performance tends to degrade when the test environment diverges from the training environments. Domain generalization (DG) addresses this issue by learning representations independent of domain-related information, thus facilitating extrapolation to unseen environments. Existing approaches typically focus on formulating tailored training objectives to extract features shared across the source domains. However, the disjointed training and testing procedures may compromise robustness, particularly in the face of unforeseen variations during deployment. In this paper, we propose a novel and holistic causality-based framework, named InPer, designed to enhance model generalization by incorporating causal intervention during training and causal perturbation during testing. Specifically, during the training phase, we employ entropy-based causal intervention (EnIn) to refine the selection of causal variables. To identify target-domain samples whose causal variables resist interference, we propose a novel metric, the homeostatic score, computed through causal perturbation (HoPer), and use it to construct a prototype classifier at test time. Experimental results across multiple cross-domain tasks confirm the efficacy of InPer.