We study integer-valued multiplicative dynamics driven by i.i.d. prime multipliers and connect their macroscopic statistics to universal codelengths. We introduce the Multiplicative Turing Ensemble (MTE) and show how it arises naturally, though not uniquely, from ensembles of probabilistic Turing machines. Our modeling principle is variational: taking the Elias omega codelength as an energy and imposing maximum-entropy constraints yields a canonical Gibbs prior on the integers and, by restriction, on the primes. Under mild tail assumptions, this prior induces exponential tails for log-multipliers (up to slowly varying corrections), which in turn generate Pareto tails for additive gaps. We also prove time-average laws for the omega codelength along MTE trajectories. Empirically, on Debian and PyPI package-size datasets, a scaled omega prior achieves the lowest KL divergence against codelength histograms. Taken together, the theory-data comparison suggests a qualitative split: machine-adapted regimes (Gibbs-aligned, with finite first moment) exhibit clean averaging behavior, whereas human-generated complexity appears to sit beyond this regime, with tails heavy enough to make the first moment unbounded, and therefore admits no averaging of the same kind.
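The two ingredients named in the abstract, the Elias omega codelength and the Gibbs prior that takes it as an energy, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the inverse temperature `beta` and the truncation bound `N` are assumed parameters introduced here for demonstration.

```python
import math

def omega_length(n: int) -> int:
    """Length in bits of the Elias omega code for a positive integer n."""
    length = 1  # the terminating '0' bit
    while n > 1:
        b = n.bit_length()       # bits needed to write n in binary
        length += b              # emit that binary group
        n = b - 1                # recurse on the group length
    return length

def gibbs_prior(beta: float, N: int) -> list[float]:
    """Gibbs distribution P(n) ∝ exp(-beta * L_omega(n)) on {1, ..., N}.

    The true prior lives on all positive integers; we truncate at N
    purely for illustration.
    """
    weights = [math.exp(-beta * omega_length(n)) for n in range(1, N + 1)]
    Z = sum(weights)
    return [w / Z for w in weights]

print(omega_length(1), omega_length(2), omega_length(4), omega_length(16))
# → 1 3 6 11
```

Since the omega code is prefix-free, Kraft's inequality bounds the sum of `2**(-omega_length(n))`, so the untruncated normalizer converges whenever `beta >= ln 2`; smaller codelengths (smaller integers, up to the code's structure) receive exponentially more mass.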