Statistical learning theory is often associated with the principle of Occam's razor, which recommends a simplicity preference in inductive inference. This paper distills the core argument for simplicity obtainable from statistical learning theory, built on the theory's central learning guarantee for the method of empirical risk minimization. This core "means-ends" argument is that a simpler hypothesis class or inductive model is better because it has better learning guarantees; however, these guarantees are model-relative and so the theoretical push towards simplicity is checked by our prior knowledge.
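For concreteness, the central learning guarantee for empirical risk minimization alluded to above can be sketched in its standard agnostic-PAC form. This is a textbook statement, not necessarily the paper's exact formulation; the symbols $R$ (true risk), $\hat{h}_{\mathrm{ERM}}$ (the ERM output), $d$ (VC dimension), $n$ (sample size), $\delta$ (failure probability), and $C$ (a universal constant) are the usual ones.

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% ERM over a hypothesis class H with VC dimension d satisfies:
\[
  R\bigl(\hat{h}_{\mathrm{ERM}}\bigr)
  \;\le\;
  \min_{h \in \mathcal{H}} R(h)
  \;+\;
  C \sqrt{\frac{d + \log(1/\delta)}{n}} .
\]
% A smaller (simpler) class H lowers d and so tightens the second term,
% but may raise the first term if H excludes good hypotheses --
% which is why the guarantee is model-relative.
```

The trade-off visible in the bound is the "means-ends" argument in miniature: simplicity improves the guarantee only relative to a class we have prior reason to trust.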