Most work in learning theory has focused on designing effective Probably Approximately Correct (PAC) learners. Recently, other models of learning, such as transductive error, have come under greater scrutiny. We move toward showing that these problems are equivalent by reducing agnostic learning with a PAC guarantee to agnostic learning with a transductive guarantee, at the cost of adding a small number of samples to the dataset. As background for our main positive result, we first rederive the result of Aden-Ali et al. (arXiv:2304.09167) reducing PAC learning to transductive learning in the realizable setting, using simpler techniques and in greater generality. Our agnostic transductive-to-PAC conversion technique extends this argument to the agnostic case, showing that an agnostic transductive learner can be efficiently converted into an agnostic PAC learner. Finally, we characterize the performance of the agnostic one-inclusion graph algorithm of Asilis et al. (arXiv:2309.13692) for binary classification, and show that plugging it into our reduction yields an agnostic PAC learner that is essentially optimal. Our results imply that transductive and PAC learning are essentially equivalent for supervised learning with pseudometric losses in the realizable setting, and for binary classification in the agnostic setting. We conjecture that this equivalence holds more generally in the agnostic setting.