Domain Generalized Semantic Segmentation (DGSS) seeks to use source-domain data alone to improve the generalization of semantic segmentation to unseen target domains. Prevailing studies concentrate predominantly on feature normalization and domain randomization, but both approaches exhibit significant limitations. Feature normalization-based methods tend to confuse semantic features while constraining the feature-space distribution, leading to classification errors. Domain randomization-based methods frequently introduce domain-irrelevant noise because their style transformations are uncontrolled, leading to segmentation ambiguity. To address these challenges, we introduce a novel framework, named SCSD, for Semantic Consistency prediction and Style Diversity generalization. It comprises three pivotal components. First, a Semantic Query Booster is designed to enhance the semantic awareness and discrimination capabilities of object queries in the mask decoder, enabling cross-domain semantically consistent prediction. Second, we develop a Text-Driven Style Transform module that utilizes domain-difference text embeddings to controllably guide the style transformation of image features, thereby increasing inter-domain style diversity. Third, to prevent the collapse of similar domain feature spaces, we introduce a Style Synergy Optimization mechanism that strengthens the separation of inter-domain features and the aggregation of intra-domain features by synergistically weighting a style contrastive loss and a style aggregation loss. Extensive experiments demonstrate that the proposed SCSD significantly outperforms existing state-of-the-art methods. Notably, SCSD trained on GTAV achieves an average of 49.11 mIoU across four unseen domain datasets, surpassing the previous state-of-the-art method by +4.08 mIoU. Code is available at https://github.com/nhw649/SCSD.
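To make the Style Synergy Optimization idea concrete, the following is a minimal sketch of how a weighted combination of a style contrastive loss (separating inter-domain styles) and a style aggregation loss (pulling intra-domain styles together) could look. The function names (`style_stats`, `style_synergy_loss`), the use of channel-wise mean/std as style descriptors, and the InfoNCE-style contrastive term are assumptions for illustration; the paper's exact formulation is not given in the abstract.

```python
import numpy as np

def style_stats(feat):
    # feat: (C, H, W) feature map; style vector = channel-wise mean and std
    # (a common style descriptor; assumed here, not necessarily SCSD's choice)
    mu = feat.mean(axis=(1, 2))
    sigma = feat.std(axis=(1, 2))
    return np.concatenate([mu, sigma])

def style_synergy_loss(styles, domains, w_con=1.0, w_agg=1.0, tau=0.1):
    """Hypothetical synergy loss: w_con and w_agg weight the two terms.

    styles:  (N, D) array of style vectors
    domains: (N,) integer domain labels (e.g., original vs. stylized)
    """
    # L2-normalize styles and compute temperature-scaled cosine similarities
    s = styles / np.linalg.norm(styles, axis=1, keepdims=True)
    sim = s @ s.T / tau

    n = len(domains)
    con, agg, count = 0.0, 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and domains[j] == domains[i]]
        neg = [j for j in range(n) if domains[j] != domains[i]]
        if not pos or not neg:
            continue
        logits = np.exp(sim[i])
        # Contrastive term: same-domain pairs are positives,
        # other-domain samples are negatives (pushes domains apart)
        for j in pos:
            con += -np.log(logits[j] / (logits[j] + logits[neg].sum()))
        # Aggregation term: mean squared distance to same-domain styles
        # (pulls intra-domain features together)
        agg += np.mean([np.sum((s[i] - s[j]) ** 2) for j in pos])
        count += 1
    return (w_con * con + w_agg * agg) / max(count, 1)
```

In this sketch, increasing `w_con` emphasizes inter-domain separation while `w_agg` emphasizes intra-domain compactness; "synergistic weighting" in the paper presumably balances these two terms, though the actual weighting scheme may be learned or scheduled rather than fixed.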