Classical scaling laws model AI performance as monotonically improving with model size. We challenge this assumption by deriving the Institutional Scaling Law, showing that institutional fitness -- jointly measuring capability, trust, affordability, and sovereignty -- is non-monotonic in model scale, with an environment-dependent optimum N*(epsilon). Our framework extends the Sustainability Index of Han et al. (2025) from hardware-level to ecosystem-level analysis, proving that capability and trust formally diverge beyond a critical scale (Capability-Trust Divergence). We further derive a Symbiogenetic Scaling correction demonstrating that orchestrated systems of domain-specific models can outperform frontier generalists in their native deployment environments. These results are contextualized within a formal evolutionary taxonomy of generative AI spanning five eras (1943-present), with analysis of frontier-lab dynamics, the emergence of sovereign AI, and the evolution of post-training alignment from RLHF through GRPO. The Institutional Scaling Law predicts that the next phase transition will be driven not by larger models but by better-orchestrated systems of domain-specific models adapted to specific institutional niches.
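The central claim -- that institutional fitness is non-monotonic in model scale N, with an optimum N*(epsilon) that shifts with the deployment environment -- can be illustrated with a toy numerical sketch. The abstract specifies no functional forms, so everything below is a hypothetical assumption: capability is taken to grow logarithmically in N, while the combined trust/affordability/sovereignty penalty is taken to grow as epsilon * sqrt(N), with epsilon encoding how constrained the environment is.

```python
import math

# Hypothetical illustration only: the paper gives no functional forms.
# Assumed: capability ~ log(N); penalty ~ eps * sqrt(N), where eps
# encodes environmental constraints (cost, trust, sovereignty pressure).

def institutional_fitness(n, eps):
    """Toy fitness F(N, eps): rises at small N, falls at large N."""
    return math.log(n) - eps * math.sqrt(n)

def optimal_scale(eps, n_min=1.0, n_max=1e6, steps=5000):
    """Grid search on a log-spaced grid for N*(eps), the fitness maximizer."""
    best_n, best_f = n_min, float("-inf")
    for i in range(steps + 1):
        n = n_min * (n_max / n_min) ** (i / steps)
        f = institutional_fitness(n, eps)
        if f > best_f:
            best_n, best_f = n, f
    return best_n

# Fitness is non-monotonic in N, and a more constrained environment
# (larger eps) shifts the optimum toward smaller models:
assert optimal_scale(eps=0.1) > optimal_scale(eps=1.0)
```

Under these assumed forms the optimum is analytically N* = 4 / epsilon^2, so tightening the environment by 10x shrinks the optimal scale by 100x -- a concrete (if stylized) instance of the environment-dependent optimum the abstract describes.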