While multi-task learning (MTL) has gained significant attention in recent years, its underlying mechanisms remain poorly understood. Recent methods have not yielded consistent performance improvements over single-task learning (STL) baselines, underscoring the importance of deeper insights into the challenges specific to MTL. In this study, we examine MTL paradigms in the context of STL. First, the impact of the choice of optimizer has received only limited attention in MTL. Across a range of experiments, we empirically demonstrate the pivotal role of common STL tools such as the Adam optimizer in MTL. To explain Adam's effectiveness, we theoretically derive a partial loss-scale invariance under mild assumptions. Second, gradient conflicts have often been framed as a problem specific to MTL. We examine the role of gradient conflicts in MTL and compare it to STL. For angular gradient alignment, we find no evidence that this is a problem unique to MTL; instead, we identify differences in gradient magnitude as the main distinguishing factor. Overall, we find surprising similarities between STL and MTL, suggesting that methods from both fields should be considered in a broader context.
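As a rough illustration of where such a partial loss-scale invariance can come from (the abstract does not spell out the paper's exact assumptions; the sketch below is a standard argument, not the paper's derivation), consider scaling the training loss by a constant $c > 0$ in the Adam update

$$\theta_{t+1} = \theta_t - \alpha\,\frac{\hat m_t}{\sqrt{\hat v_t} + \epsilon}, \qquad L \mapsto cL \;\Rightarrow\; g_t \mapsto c\,g_t,\;\; \hat m_t \mapsto c\,\hat m_t,\;\; \hat v_t \mapsto c^2\,\hat v_t,$$

so that the update direction becomes

$$\frac{c\,\hat m_t}{\sqrt{c^2\,\hat v_t} + \epsilon} = \frac{\hat m_t}{\sqrt{\hat v_t} + \epsilon/c} \approx \frac{\hat m_t}{\sqrt{\hat v_t} + \epsilon},$$

which is independent of $c$ whenever $\epsilon$ is negligible relative to $c\sqrt{\hat v_t}$; the $\epsilon$ term is what makes the invariance only partial. In MTL this matters because rescaling a task's loss weight then leaves Adam's per-parameter updates approximately unchanged.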
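The two quantities contrasted above, angular alignment and gradient magnitude, can be measured directly from per-task gradients on the shared parameters. Below is a minimal PyTorch sketch; the toy shared layer and placeholder task losses are illustrative assumptions, not the paper's model or code:

```python
import torch

# Hypothetical two-task setup: one shared layer and two stand-in task losses.
shared = torch.nn.Linear(16, 16)
x = torch.randn(8, 16)
loss_a = shared(x).pow(2).mean()   # placeholder for task A's loss
loss_b = shared(x).sin().mean()    # placeholder for task B's loss

def flat_grad(loss, params):
    # Gradient of one task's loss w.r.t. the shared parameters, flattened
    # into a single vector for easy comparison.
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

params = list(shared.parameters())
g_a = flat_grad(loss_a, params)
g_b = flat_grad(loss_b, params)

# Angular alignment: negative cosine similarity indicates a gradient conflict.
cos = torch.nn.functional.cosine_similarity(g_a, g_b, dim=0)
# Magnitude difference: the factor the abstract highlights as MTL-specific.
ratio = g_a.norm() / g_b.norm()

print(f"cosine similarity: {cos.item():.3f}  (negative => angular conflict)")
print(f"magnitude ratio ||g_a|| / ||g_b||: {ratio.item():.3f}")
```

The same diagnostic applies in STL by comparing gradients across mini-batches or output heads, which is what makes a direct MTL-versus-STL comparison of both statistics possible.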