Fine-grained domain generalization (FGDG) is a more challenging task than traditional DG tasks due to its small inter-class variations and relatively large intra-class disparities. When the domain distribution shifts, the fragility of subtle features leads to severe deterioration in model performance. Nevertheless, humans inherently demonstrate the capacity to generalize to out-of-distribution data, leveraging structured multi-granularity knowledge that emerges from discerning the commonality and specificity within categories. Likewise, we propose a Feature Structuralized Domain Generalization (FSDG) model, wherein features are structuralized into common, specific, and confounding segments, harmoniously aligned with their relevant semantic concepts, to elevate performance in FGDG. Specifically, feature structuralization (FS) is accomplished through the joint optimization of five constraints: a decorrelation function applied to the disentangled segments, three constraints ensuring common-feature consistency and specific-feature distinctiveness, and a prediction calibration term. By imposing these constraints, FSDG is prompted to disentangle and align features based on multi-granularity knowledge, facilitating robust subtle distinctions among categories. Extensive experimentation on three benchmarks consistently validates the superiority of FSDG over state-of-the-art counterparts, with an average improvement of 6.2% in FGDG performance. Moreover, an explainability analysis of the explicit concept-matching intensity between the concepts shared among categories and the model channels, along with experiments on various mainstream model architectures, substantiates the validity of FS.
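The five-constraint objective can be sketched as below. This is a minimal illustrative sketch only: the segment split, the concrete forms chosen for decorrelation, consistency, distinctiveness, and calibration, and all function names are assumptions for exposition, not the paper's actual formulations (the three consistency/distinctiveness constraints are collapsed into two representative terms here).

```python
import numpy as np

def structuralize(features, n_common, n_specific):
    """Split each feature vector into common / specific / confounding segments."""
    common = features[:, :n_common]
    specific = features[:, n_common:n_common + n_specific]
    confound = features[:, n_common + n_specific:]
    return common, specific, confound

def decorrelation(a, b):
    """Mean squared cross-covariance between two centered segments."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    cov = a.T @ b / max(len(a) - 1, 1)
    return float((cov ** 2).mean())

def fs_loss(features, labels, logits, n_common=4, n_specific=4,
            weights=(1.0, 1.0, 1.0, 1.0)):
    """Illustrative FS-style objective: decorrelation + consistency
    + distinctiveness + prediction calibration (placeholder forms)."""
    common, specific, confound = structuralize(features, n_common, n_specific)
    classes = np.unique(labels)

    # (1) decorrelate the disentangled segments from one another
    l_dec = (decorrelation(common, specific)
             + decorrelation(common, confound)
             + decorrelation(specific, confound))

    # (2) common-feature consistency: same-class common segments should agree
    l_con = sum(float(((common[labels == c]
                        - common[labels == c].mean(axis=0)) ** 2).mean())
                for c in classes)

    # (3) specific-feature distinctiveness: class means should spread apart
    means = np.stack([specific[labels == c].mean(axis=0) for c in classes])
    dists = ((means[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    l_spec = float(np.exp(-dists[np.triu_indices(len(means), 1)]).mean())

    # (4) prediction calibration: standard cross-entropy as a stand-in
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    l_cal = float(-np.log(p[np.arange(len(labels)), labels] + 1e-12).mean())

    w = weights
    return w[0] * l_dec + w[1] * l_con + w[2] * l_spec + w[3] * l_cal
```

Each term is non-negative, so the combined loss can be minimized jointly; in the paper's setting the constraints are applied per granularity level, which this flat sketch omits.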