The present work explores the theoretical limits of Machine Learning (ML) within the framework of Kolmogorov's theory of Algorithmic Probability, which clarifies the notion of entropy as expected Kolmogorov complexity and formalizes other fundamental concepts, such as Occam's razor, via Levin's Universal Distribution. As a fundamental application, we develop Maximum Entropy methods that allow us to derive the Erd\H{o}s--Kac Law and the Hardy--Ramanujan theorem in Probabilistic Number Theory, and, via the Prime Coding Theorem, we establish the impossibility of discovering a formula for the primes using Machine Learning.
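The Erd\H{o}s--Kac Law mentioned above states that $\omega(n)$, the number of distinct prime factors of a random integer $n \le N$, is approximately normally distributed with mean and variance $\log\log N$. The following is a minimal empirical sketch of that statement (not code from the paper); the function name \texttt{omega}, the sample size, and the cutoff $N$ are illustrative choices, and trial division stands in for a serious factorization routine.

```python
# Empirical sketch of the Erdos-Kac Law: omega(n), the number of distinct
# prime factors of n, is roughly Gaussian with mean and variance log log N.
# All parameter choices here are illustrative, not taken from the paper.
import math
import random

def omega(n):
    """Count the distinct prime factors of n by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:          # leftover factor is a prime
        count += 1
    return count

N = 10**6
random.seed(0)
samples = [omega(random.randrange(2, N)) for _ in range(2000)]

mu = math.log(math.log(N))      # predicted mean ~ log log N
sigma = math.sqrt(mu)           # predicted std dev ~ sqrt(log log N)
standardized = [(w - mu) / sigma for w in samples]

# Standardized values should look approximately standard normal;
# convergence in the Erdos-Kac Law is slow, so expect a rough match.
print(f"standardized sample mean: {sum(standardized)/len(standardized):.2f}")
```

Since the convergence in the Erd\H{o}s--Kac Law is notoriously slow (error terms decay like $1/\sqrt{\log\log N}$), the standardized sample mean only loosely approaches $0$ at this scale; the qualitative Gaussian shape is nonetheless visible in a histogram of \texttt{standardized}.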