Imbalanced Domain Generalization (IDG) focuses on mitigating both domain and label shifts, which jointly shape the model's decision boundaries, particularly under heterogeneous long-tailed distributions across domains. Despite its practical significance, IDG remains underexplored, primarily due to the technical complexity of handling the entanglement of the two shifts and the paucity of theoretical foundations. In this paper, we begin by theoretically establishing a generalization bound for IDG, highlighting the roles of posterior discrepancy and decision margin. This bound motivates us to directly steer decision boundaries, marking a clear departure from existing methods. We then propose Negative-Dominant Contrastive Learning (NDCL), a novel method for IDG that enhances discriminability while enforcing posterior consistency across domains. Specifically, inter-class decision-boundary separation is strengthened by placing greater emphasis on negatives as the primary signal in our contrastive learning, which naturally amplifies gradient signals for minority classes and prevents decision boundaries from being biased toward majority classes. Meanwhile, intra-class compactness is encouraged through a re-weighted cross-entropy strategy, and posterior consistency across domains is enforced through a prediction-central alignment strategy. Finally, extensive experiments on challenging benchmarks validate the effectiveness of NDCL. The code is available at https://github.com/Alrash/NDCL.
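The abstract does not give the NDCL loss formula, so the following is only a rough illustration of the core idea of emphasizing negatives in a contrastive objective: a toy supervised contrastive loss in which negative pairs are up-weighted inside the softmax denominator, enlarging their gradient contribution. The function name, the `neg_weight` scheme, and all hyperparameters are assumptions for illustration, not the paper's actual NDCL formulation.

```python
import numpy as np

def negative_emphasized_contrastive_loss(feats, labels, temperature=0.1, neg_weight=2.0):
    """Toy negative-emphasized contrastive loss (a sketch, not the paper's NDCL).

    feats: (N, D) L2-normalized embeddings; labels: (N,) integer class ids.
    neg_weight > 1 inflates the negatives' similarities inside the partition
    function, so negatives dominate the gradient signal relative to positives.
    """
    n = len(labels)
    sim = feats @ feats.T / temperature                       # pairwise similarities
    eye = np.eye(n, dtype=bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~eye    # same class, not self
    neg_mask = labels[:, None] != labels[None, :]             # different class

    # Up-weight negatives: adding log(neg_weight) to a logit multiplies
    # its exp() term by neg_weight in the denominator below.
    weighted = sim + np.where(neg_mask, np.log(neg_weight), 0.0)
    weighted[eye] = -np.inf                                   # exclude self-similarity

    # log p(j | i) with negatives up-weighted in the partition function
    log_prob = sim - np.log(np.exp(weighted).sum(axis=1, keepdims=True))
    pos_count = np.maximum(pos_mask.sum(axis=1), 1)
    loss = -(log_prob * pos_mask).sum(axis=1) / pos_count     # average over positives
    return loss.mean()
```

Because every anchor's denominator grows with `neg_weight`, a larger weight strictly increases the loss and hence the repulsive force on negative pairs, which is one simple way to make negatives the dominant training signal for minority-class anchors.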