We present Noise-to-Meaning Recursive Self-Improvement (N2M-RSI), a minimal formal model showing that once an AI agent feeds its own outputs back as inputs and crosses an explicit information-integration threshold, its internal complexity will grow without bound under our assumptions. The framework unifies earlier ideas on self-prompting large language models, Gödelian self-reference, and AutoML, yet remains implementation-agnostic. The model furthermore scales naturally to interacting swarms of agents, hinting at super-linear effects once communication among instances is permitted. For safety reasons, we omit system-specific implementation details and release only a brief, model-agnostic toy prototype in Appendix C.
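The threshold behaviour claimed above can be caricatured in a few lines. The sketch below is our own illustrative assumption, not the paper's Appendix C prototype: the names `simulate`, `alpha` (information-integration level), and `theta` (the integration threshold), and the multiplicative update rule, are all hypothetical stand-ins for the formal model.

```python
def simulate(alpha: float, theta: float, c0: float = 1.0, steps: int = 50) -> list[float]:
    """Return a toy complexity trajectory [C_0, ..., C_steps].

    Each step models one pass of the agent re-ingesting its own output.
    The per-pass factor 1 + (alpha - theta) exceeds 1 when the
    integration level alpha is above the threshold theta, so C_t grows
    without bound; below the threshold, C_t decays toward zero.
    """
    c, traj = c0, [c0]
    for _ in range(steps):
        c *= 1.0 + (alpha - theta)  # hypothetical update rule, not the paper's
        traj.append(c)
    return traj

above = simulate(alpha=0.6, theta=0.5)  # supercritical: trajectory diverges
below = simulate(alpha=0.4, theta=0.5)  # subcritical: trajectory decays
```

This one-dimensional caricature only demonstrates the qualitative dichotomy (divergence versus decay at an explicit threshold); the full N2M-RSI model is defined over an agent's internal state, not a scalar.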