Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly, benefiting from a shared representation space or parameter transfer. To provide sufficient learning signal, modern MTL relies on annotated data with full or sufficiently large overlap across tasks, i.e., each input sample is annotated for all or most of the tasks. However, collecting such annotations is prohibitive in many real applications and cannot exploit datasets that are available for individual tasks. In this work, we challenge this setup and show that MTL can succeed with classification tasks that have little or no annotation overlap, or when there is a large discrepancy in the amount of labeled data per task. We explore task-relatedness for co-annotation and co-training, and propose a novel approach in which knowledge exchange between tasks is enabled via distribution matching. To demonstrate the general applicability of our method, we conduct diverse case studies in the domains of affective computing, face recognition, species recognition, and shopping item classification using nine datasets. Our large-scale study of the affective tasks of basic expression recognition and facial action unit detection shows that our approach is network-agnostic and brings large performance improvements over the state of the art in both tasks and across all studied databases. In all case studies, we show that co-training via task-relatedness is advantageous and prevents negative transfer, which occurs when a multi-task model performs worse than at least one of its single-task counterparts.
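As a rough illustration of what knowledge exchange via distribution matching could look like, the sketch below couples two softmax classification heads on a shared backbone through a KL term: predictions for one task are mapped, through a task-relatedness prior, into an implied distribution over the other task's labels, so a sample annotated only for task A still supervises task B's head. This is a minimal, hypothetical PyTorch sketch, not the paper's implementation; the `relatedness` matrix, layer sizes, and all names are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): two task heads on a
# shared backbone, coupled by distribution matching through a hypothetical
# task-relatedness prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTaskModel(nn.Module):
    def __init__(self, in_dim=64, feat_dim=128, n_classes_a=7, n_classes_b=12):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head_a = nn.Linear(feat_dim, n_classes_a)  # task A classifier
        self.head_b = nn.Linear(feat_dim, n_classes_b)  # task B classifier

    def forward(self, x):
        z = self.backbone(x)
        return self.head_a(z), self.head_b(z)

def distribution_matching_loss(logits_a, logits_b, relatedness):
    """KL divergence between task-B predictions and the task-B distribution
    implied by task-A predictions through a (hypothetical) relatedness prior
    whose rows sum to 1."""
    p_a = F.softmax(logits_a, dim=1)        # (batch, n_a)
    implied_b = p_a @ relatedness           # (batch, n_b), rows sum to 1
    log_p_b = F.log_softmax(logits_b, dim=1)
    return F.kl_div(log_p_b, implied_b, reduction="batchmean")

# Usage: a batch annotated only for task A still yields a gradient for
# task B's head via the matching term.
model = TwoTaskModel()
x = torch.randn(8, 64)
y_a = torch.randint(0, 7, (8,))
relatedness = torch.rand(7, 12)
relatedness = relatedness / relatedness.sum(dim=1, keepdim=True)
logits_a, logits_b = model(x)
loss = F.cross_entropy(logits_a, y_a) \
     + distribution_matching_loss(logits_a, logits_b, relatedness)
loss.backward()
```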