Robust Multi-Task Learning (MTL) is crucial for autonomous systems operating in real-world environments, where adverse weather conditions can severely degrade model performance and reliability. In this paper, we introduce RobuMTL, a novel architecture that adaptively addresses visual degradation by dynamically selecting among task-specific hierarchical Low-Rank Adaptation (LoRA) modules, organized as a squad of LoRA experts, in a mixture-of-experts fashion conditioned on the input perturbation. Our framework enables adaptive specialization based on input characteristics, improving robustness across diverse real-world conditions. To validate our approach, we evaluate it on the PASCAL and NYUD-v2 datasets and compare it against single-task models, standard MTL baselines, and state-of-the-art methods. On the PASCAL benchmark, RobuMTL delivers a +2.8% average relative improvement under single perturbations and up to +44.4% under mixed weather conditions compared to the MTL baseline. On NYUD-v2, RobuMTL achieves a +9.7% average relative improvement across tasks. The code is available on GitHub.
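The mixture-of-experts selection of LoRA modules described above can be sketched in miniature. This is a minimal illustrative sketch, not the paper's implementation: the class names (`LoRAExpert`, `MoELoRALayer`), the simple linear gating, and the top-k routing are all assumptions; the actual method uses hierarchical, task-specific modules and perturbation-aware selection.

```python
import numpy as np

class LoRAExpert:
    """One low-rank adapter: delta_W = B @ A, with rank r << min(d_in, d_out)."""
    def __init__(self, d_in, d_out, rank, rng):
        self.A = rng.normal(0, 0.02, (rank, d_in))  # down-projection
        self.B = np.zeros((d_out, rank))            # up-projection, zero-init so the
                                                    # adapter starts as a no-op
    def delta(self, x):
        # x: (d_in,) -> low-rank adaptation of the layer output, (d_out,)
        return self.B @ (self.A @ x)

class MoELoRALayer:
    """Frozen base weight plus a gated squad of LoRA experts (illustrative)."""
    def __init__(self, d_in, d_out, rank, n_experts, rng):
        self.W = rng.normal(0, 0.02, (d_out, d_in))  # frozen pretrained weight
        self.experts = [LoRAExpert(d_in, d_out, rank, rng) for _ in range(n_experts)]
        self.gate = rng.normal(0, 0.02, (n_experts, d_in))  # routing network

    def forward(self, x, top_k=1):
        logits = self.gate @ x
        top = np.argsort(logits)[-top_k:]            # indices of top-k experts
        w = np.exp(logits[top] - logits[top].max())
        w /= w.sum()                                 # softmax over selected experts
        adapt = sum(wi * self.experts[i].delta(x) for wi, i in zip(w, top))
        return self.W @ x + adapt
```

Because the up-projections are zero-initialized, the layer initially reproduces the frozen base model exactly; training then moves only the small expert and gate parameters, which is what makes per-perturbation specialization cheap.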