The Learning-to-Defer approach has been explored separately for classification and, more recently, regression tasks. Many contemporary learning tasks, however, involve both classification and regression components. In this paper, we introduce a Learning-to-Defer approach for multi-task learning that encompasses both classification and regression tasks. Our two-stage approach uses a rejector that defers decisions to the most accurate agent among a pre-trained joint classifier-regressor model and one or more external experts. We show that our surrogate loss is $(\mathcal{H}, \mathcal{F}, \mathcal{R})$-consistent and Bayes-consistent, ensuring an effective approximation of the optimal solution. Additionally, we derive learning bounds that demonstrate the benefits of employing multiple confident experts alongside a rich model in a two-stage learning framework. Empirical experiments on electronic health record analysis tasks underscore the performance gains achieved by our method.
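To make the two-stage setup concrete, the following is a minimal sketch, not the paper's actual formulation: a rejector routes each input either to a pre-trained classifier-regressor or to an external expert, and the selected agent produces the joint (label, value) prediction. All names, the combined cost, and the hand-written rejector below are illustrative assumptions.

```python
# Illustrative sketch of two-stage Learning-to-Defer for a joint
# classification + regression task. Everything here is hypothetical,
# not the paper's notation or surrogate loss.

def agent_cost(cls_pred, reg_pred, cls_true, reg_true, alpha=0.5):
    """Assumed multi-task cost: weighted 0-1 classification loss
    plus squared regression error."""
    return alpha * float(cls_pred != cls_true) + (1 - alpha) * (reg_pred - reg_true) ** 2

def defer(x, agents, rejector):
    """Stage two: the rejector selects one agent, which then predicts."""
    return agents[rejector(x)](x)

# Toy agents on a scalar input, each returning a (label, value) pair.
model = lambda x: (int(x > 0.0), 2.0 * x)          # pre-trained classifier-regressor
expert = lambda x: (int(x > 0.1), 2.0 * x + 0.05)  # external expert
agents = [model, expert]

# A hand-written rejector standing in for the one learned via the
# surrogate loss: defer to the expert near the decision boundary.
rejector = lambda x: 1 if abs(x) < 0.1 else 0

print(defer(0.5, agents, rejector))   # confident region: model predicts
print(defer(0.05, agents, rejector))  # boundary region: deferred to the expert
```

In training, the rejector would instead be fit so that each input is routed to the agent with the lowest expected `agent_cost`, which is what the consistency guarantees ensure the surrogate loss approximates.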