Domain generalization (DG) aims to develop robust models that generalize well while retaining strong discriminability. However, mainstream DG techniques improve feature generalizability by learning domain-invariant representations, inadvertently overlooking feature discriminability. On the one hand, attaining generalizability and discriminability simultaneously is challenging, as the two objectives can be inherently contradictory; the conflict becomes particularly pronounced when domain-invariant features lose discriminability because they absorb unstable factors, i.e., spurious correlations. On the other hand, prevailing domain-invariant methods typically perform category-level alignment, which risks discarding indispensable features with substantial generalizability and narrowing intra-class variations. To overcome these obstacles, we rethink DG from a new perspective that endows features with both strong discriminability and robust generalizability, and present a novel framework, Discriminative Microscopic Distribution Alignment~(DMDA). DMDA comprises two core components: Selective Channel Pruning~(SCP) and Micro-level Distribution Alignment~(MDA). Concretely, SCP curtails redundancy within neural networks by prioritizing stable attributes conducive to accurate classification, which alleviates the adverse effect of spurious domain invariance and enhances feature discriminability. In addition, MDA performs micro-level alignment within each class, going beyond mere category-level alignment. Extensive experiments on four benchmark datasets show that DMDA achieves results comparable to state-of-the-art DG methods, underscoring the efficacy of our approach.
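To make the two components concrete, the following is a minimal, purely illustrative sketch. It assumes a simple proxy for SCP's stability criterion (channel importance measured by mean absolute activation) and a crude stand-in for MDA's micro-level alignment (matching sub-group means within a class across two domains, rather than a single class centroid); neither choice is taken from the paper, and both functions (`select_channels`, `micro_alignment_loss`) are hypothetical names.

```python
import numpy as np

def select_channels(features, keep_ratio=0.5):
    """SCP sketch (hypothetical criterion): rank channels by mean
    absolute activation and keep the top fraction, as a simple proxy
    for retaining stable, classification-relevant attributes."""
    importance = np.abs(features).mean(axis=0)            # (C,)
    k = max(1, int(keep_ratio * features.shape[1]))
    keep = np.sort(np.argsort(importance)[::-1][:k])      # top-k channel indices
    return features[:, keep], keep

def micro_alignment_loss(feats_a, feats_b, n_parts=2):
    """MDA sketch (hypothetical scheme): split each domain's
    within-class features into sub-groups and match sub-group means
    across domains, instead of aligning only one class centroid."""
    def part_means(f):
        order = np.argsort(np.linalg.norm(f, axis=1))     # crude sub-grouping
        parts = np.array_split(f[order], n_parts)
        return np.stack([p.mean(axis=0) for p in parts])  # (n_parts, C)
    ma, mb = part_means(feats_a), part_means(feats_b)
    return float(np.mean((ma - mb) ** 2))
```

In this toy version, pruning and alignment operate on the same reduced channel set, so the micro-level loss only compares attributes that survived SCP.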