Multi-domain learning (MDL) aims to train a single model with minimal average risk across multiple overlapping but non-identical domains. To tackle the challenges of dataset bias and domain domination, numerous MDL approaches have been proposed, either seeking commonalities by aligning distributions to reduce the domain gap or preserving differences through domain-specific towers, gates, and even experts. As a result, MDL models are becoming increasingly complex, with sophisticated network architectures or loss functions that introduce extra parameters and increase computation costs. In this paper, we propose a frustratingly easy and hyperparameter-free multi-domain learning method named Decoupled Training (D-Train). D-Train is a tri-phase general-to-specific training strategy that first pre-trains on all domains to warm up a root model, then post-trains on each domain by splitting into multiple heads, and finally fine-tunes the heads with the backbone frozen, enabling decoupled training that achieves domain independence. Despite its extraordinary simplicity and efficiency, D-Train performs remarkably well in extensive evaluations on various datasets, from standard benchmarks to applications in satellite imagery and recommender systems.
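The tri-phase schedule can be illustrated on a toy problem. The sketch below is purely illustrative and is not the paper's implementation: it assumes a one-parameter linear "backbone" shared across domains and a scalar "head" per domain, trained by plain SGD on squared error; all names (`sgd_step`, `params`, the toy data) are invented for this example.

```python
# Toy data: domain d maps x -> scales[d] * x (overlapping but non-identical domains).
scales = {0: 1.0, 1: 1.4}
data = {d: [(x, s * x) for x in (0.5, 1.0, 1.5)] for d, s in scales.items()}

lr = 0.05
# Prediction for domain d: params[head_key] * params["backbone"] * x.
params = {"backbone": 0.5, "shared": 0.5}

def sgd_step(head_key, x, y, update_backbone=True):
    """One SGD step on the squared error (head * backbone * x - y)^2."""
    b, h = params["backbone"], params[head_key]
    err = h * b * x - y
    if update_backbone:
        params["backbone"] -= lr * 2 * err * h * x
    params[head_key] -= lr * 2 * err * b * x

# Phase 1: pre-train on all domains with one shared head to warm up the root model.
for _ in range(300):
    for d, samples in data.items():
        for x, y in samples:
            sgd_step("shared", x, y)

# Phase 2: split into multiple heads (initialized from the shared head) and
# post-train on each domain; the backbone is still updated here.
for d in scales:
    params[d] = params["shared"]
for _ in range(300):
    for d, samples in data.items():
        for x, y in samples:
            sgd_step(d, x, y)

# Phase 3: freeze the backbone and fine-tune only the heads, decoupling
# the domains from one another.
for _ in range(300):
    for d, samples in data.items():
        for x, y in samples:
            sgd_step(d, x, y, update_backbone=False)

# Each head now recovers its own domain's mapping through the shared backbone.
fitted = {d: params[d] * params["backbone"] for d in scales}
```

In this toy setting the shared head of phase 1 settles between the two domains, while phases 2 and 3 let each head specialize: after training, `fitted[d]` is close to `scales[d]` for both domains, even though the backbone is shared.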