Multi-task learning (MTL) is critical in real-world applications such as autonomous driving and robotics, where diverse tasks must be handled simultaneously. However, obtaining fully annotated data for all tasks is impractical due to labeling costs. Existing methods for partially labeled MTL typically rely on predictions for unlabeled tasks, making it difficult to establish reliable task associations and potentially leading to negative transfer and suboptimal performance. To address these issues, we propose a prototype-based knowledge retrieval framework that achieves robust MTL by retrieving knowledge rather than relying on predictions for unlabeled tasks. Our framework consists of two key components: (1) a task prototype that embeds task-specific characteristics and quantifies task associations, and (2) a knowledge retrieval transformer that adaptively refines feature representations based on these associations. To support this, we introduce an association knowledge generating (AKG) loss that ensures the task prototype consistently captures task-specific characteristics. Extensive experiments demonstrate the effectiveness of our framework, highlighting its potential for robust multi-task learning even when only a subset of tasks is annotated.