While multi-task learning (MTL) has gained significant attention in recent years, its underlying mechanisms remain poorly understood. Recent methods have not yielded consistent performance improvements over single-task learning (STL) baselines, underscoring the need for deeper insight into the challenges specific to MTL. In this study, we examine MTL paradigms in the context of STL. First, the impact of the choice of optimizer has received little attention in MTL. We empirically demonstrate, across a range of experiments, the pivotal role of common STL tools such as the Adam optimizer in MTL. To further explain Adam's effectiveness, we theoretically derive a partial loss-scale invariance under mild assumptions. Second, gradient conflicts have often been framed as a problem specific to MTL. We examine the role of gradient conflicts in MTL and compare it to STL. For angular gradient alignment, we find no evidence that this is a problem unique to MTL; instead, we identify differences in gradient magnitude as the main distinguishing factor. Overall, we find surprising similarities between STL and MTL, suggesting that methods from both fields should be considered in a broader context.
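As a minimal illustration of the loss-scale-invariance intuition (a sketch, not the paper's derivation): Adam normalizes each coordinate by a running estimate of its gradient magnitude, so scaling a task loss by a constant c > 0 scales both the first-moment estimate m and the root of the second-moment estimate sqrt(v) by c, leaving the update approximately unchanged (up to the eps term). The snippet below implements textbook Adam updates and compares the update sequence for a gradient stream against the same stream scaled by a hypothetical factor c.

```python
import numpy as np

def adam_updates(grads, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Return the sequence of Adam parameter updates for a gradient stream."""
    m = np.zeros_like(grads[0])
    v = np.zeros_like(grads[0])
    updates = []
    for t, g in enumerate(grads, start=1):
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g**2
        m_hat = m / (1 - b1**t)          # bias-corrected first moment
        v_hat = v / (1 - b2**t)          # bias-corrected second moment
        updates.append(lr * m_hat / (np.sqrt(v_hat) + eps))
    return updates

rng = np.random.default_rng(0)
grads = [rng.normal(size=4) for _ in range(50)]
c = 1000.0  # hypothetical loss-scaling factor

u1 = adam_updates(grads)
u2 = adam_updates([c * g for g in grads])

# The scaling factor cancels in m_hat / sqrt(v_hat); the residual
# discrepancy is governed only by the eps term in the denominator.
max_diff = max(np.abs(a - b).max() for a, b in zip(u1, u2))
print(max_diff)
```

This invariance is only partial: it breaks down when gradients are small enough that eps dominates the denominator, which is one reason the assumptions in the derivation matter.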