Generalization error bounds are essential to understanding how well machine learning models generalize. In this work, we propose a novel approach, the Auxiliary Distribution Method, which yields new upper bounds on the expected generalization error in supervised learning scenarios. We show that our general upper bounds can be specialized, under suitable conditions, to new bounds involving the $\alpha$-Jensen-Shannon and $\alpha$-R\'enyi ($0 < \alpha < 1$) information between a random variable modeling the set of training samples and another random variable modeling the hypothesis. Our upper bounds based on $\alpha$-Jensen-Shannon information are, moreover, always finite, since the $\alpha$-Jensen-Shannon divergence itself is bounded. Additionally, we demonstrate how our auxiliary distribution method can be used to derive upper bounds on the excess risk of some learning algorithms in the supervised learning context, as well as on the generalization error under a distribution mismatch scenario in supervised learning, where the mismatch between the distributions of the test and training data samples is modeled by the $\alpha$-Jensen-Shannon or $\alpha$-R\'enyi divergence. We also outline the conditions under which our proposed upper bounds might be tighter than earlier upper bounds.
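For concreteness, the two information measures named above admit the following standard formulations from the literature (the precise conventions used in the body of the paper may differ), with $W$ the hypothesis, $S$ the training set, $P_{W,S}$ their joint distribution, and $P_W \otimes P_S$ the product of the marginals:
\begin{align*}
  I_{\alpha}(S;W) &= D_{\alpha}\bigl(P_{W,S} \,\big\|\, P_W \otimes P_S\bigr),
  &
  D_{\alpha}(P \,\|\, Q) &= \frac{1}{\alpha - 1} \log \mathbb{E}_{Q}\!\left[\Bigl(\tfrac{\mathrm{d}P}{\mathrm{d}Q}\Bigr)^{\!\alpha}\right],
  \\
  I_{\mathrm{JS}}^{\alpha}(S;W) &= \mathrm{JS}_{\alpha}\bigl(P_{W,S} \,\big\|\, P_W \otimes P_S\bigr),
  &
  \mathrm{JS}_{\alpha}(P \,\|\, Q) &= \alpha\, D_{\mathrm{KL}}\bigl(P \,\big\|\, \alpha P + (1-\alpha) Q\bigr)
  + (1-\alpha)\, D_{\mathrm{KL}}\bigl(Q \,\big\|\, \alpha P + (1-\alpha) Q\bigr),
\end{align*}
for $0 < \alpha < 1$. Since the mixture $\alpha P + (1-\alpha) Q$ dominates both $P$ and $Q$, the two Kullback-Leibler terms are bounded by $\log(1/\alpha)$ and $\log(1/(1-\alpha))$, respectively, so $\mathrm{JS}_{\alpha}$ never exceeds the binary entropy $h(\alpha)$; this boundedness is what makes the $\alpha$-Jensen-Shannon bounds always finite.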