The dominant narrative of artificial intelligence development assumes that progress is continuous and that capability scales monotonically with model size. We challenge both assumptions. Drawing on punctuated equilibrium theory from evolutionary biology, we show that AI development proceeds not through smooth advancement but through extended periods of stasis interrupted by rapid phase transitions that reorganize the competitive landscape. We identify five such eras since 1943 and four epochs within the current Generative AI Era, each initiated by a discontinuous event -- from the transformer architecture to the DeepSeek Moment -- that rendered the prior paradigm subordinate. To formalize the selection pressures driving these transitions, we develop the Institutional Fitness Manifold, a mathematical framework that evaluates AI systems along four dimensions: capability, institutional trust, affordability, and sovereign compliance. The central result is the Institutional Scaling Law, which establishes that institutional fitness is non-monotonic in model scale. Beyond an environment-specific optimum, further scaling degrades fitness as trust erosion and cost penalties outweigh marginal capability gains. This directly contradicts classical scaling laws and carries a strong implication: orchestrated systems of smaller, domain-adapted models can mathematically outperform frontier generalists in most institutional deployment environments. We derive formal conditions under which this inversion holds and present supporting empirical evidence spanning frontier laboratory dynamics, post-training alignment evolution, and the rise of sovereign AI as a geopolitical selection pressure.
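As a minimal illustrative sketch of the claimed non-monotonicity (the functional forms, the symbols $F_E$, $C$, $T$, $A$, $S$, the scale variable $n$, and the weights $w_{\cdot}$ below are expository assumptions, not the framework's actual definitions), one can write environment-specific institutional fitness as a weighted sum of the four dimensions,
\[
F_E(n) \;=\; w_C\, C(n) \;+\; w_T\, T(n) \;+\; w_A\, A(n) \;+\; w_S\, S(n),
\]
and suppose capability gains are concave in scale while trust erosion and cost penalties grow with scale, e.g. $C(n) = \log(1+n)$, $T(n) = -\beta n^{2}$, $A(n) = -\alpha n$, and $S(n)$ constant. Then
\[
F_E'(n) \;=\; \frac{w_C}{1+n} \;-\; 2\, w_T \beta\, n \;-\; w_A \alpha,
\]
which is positive near $n = 0$ whenever $w_C > w_A \alpha$ and negative for large $n$, so $F_E$ attains an interior, environment-specific maximum $n^{*}_{E}$ rather than increasing monotonically with scale. These toy forms only illustrate the mechanism; the formal conditions derived in the paper do not depend on these particular choices.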