The Two-Stage Learning-to-Defer framework has been extensively studied for classification and, more recently, for regression tasks. However, many contemporary applications involve classification and regression in an interdependent manner. In this work, we introduce a novel Two-Stage Learning-to-Defer framework for multi-task learning that addresses both tasks jointly. Our approach leverages a two-stage surrogate loss family, which we prove to be both ($\mathcal{G}, \mathcal{R}$)-consistent and Bayes-consistent, providing strong theoretical guarantees of convergence to the Bayes-optimal rejector. We establish consistency bounds explicitly linked to the cross-entropy surrogate family and the $L_1$-norm of the agents' costs, extending the theoretical minimizability gap analysis to the two-stage setting with multiple experts. We validate our framework on two challenging tasks: object detection, where classification and regression are tightly coupled and existing methods fail; and electronic health record analysis, where we highlight the suboptimality of current learning-to-defer approaches.