Two seminal papers, by Alon, Livni, Malliaris, and Moran (STOC 2019) and by Bun, Livni, and Moran (FOCS 2020), established the equivalence between online learnability and globally stable PAC learnability in binary classification. However, Chase, Chornomaz, Moran, and Yehudayoff (STOC 2024) recently showed that this equivalence does not hold in the agnostic setting. Specifically, they proved that in the agnostic setting, only finite hypothesis classes are globally stable learnable. Therefore, agnostic global stability is too restrictive to capture interesting hypothesis classes. To address this limitation, Chase \emph{et al.} introduced two relaxations of agnostic global stability. In this paper, we characterize the classes that are learnable under their proposed relaxed conditions, resolving the two open problems raised in their work. First, we prove that in the setting where the stability parameter is allowed to depend on the excess error (the gap between the learner's error and the best error achievable by the hypothesis class), agnostic stability is fully characterized by the Littlestone dimension. Consequently, as in the realizable case, this form of learnability is equivalent to online learnability. As part of the proof of this theorem, we strengthen the celebrated result of Bun \emph{et al.} by showing that classes with infinite Littlestone dimension are not stably PAC learnable, even if the stability parameter is allowed to depend on the excess error. For the second relaxation proposed by Chase \emph{et al.}, we prove that only finite hypothesis classes are globally stable learnable, even if the agnostic setting is restricted to distributions with small population loss.
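For concreteness, the excess error referred to above is the standard quantity from statistical learning theory; in the notation sketched below (the symbols $\mathcal{H}$, $\mathcal{D}$, and $L_{\mathcal{D}}$ are our own choices, not fixed by the text), it measures how far the learner's output hypothesis falls short of the best predictor in the class:
\[
  \operatorname{err}_{\mathcal{D}}(h) \;=\; L_{\mathcal{D}}(h) \;-\; \inf_{h' \in \mathcal{H}} L_{\mathcal{D}}(h'),
  \qquad
  L_{\mathcal{D}}(h) \;=\; \Pr_{(x,y) \sim \mathcal{D}}\bigl[\, h(x) \neq y \,\bigr],
\]
where $L_{\mathcal{D}}(h)$ is the population (zero-one) loss of $h$ under the data distribution $\mathcal{D}$. In the realizable case the infimum is zero, so the excess error coincides with the learner's error; in the agnostic case it can be strictly smaller.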