Artificial general intelligence (AGI) is an established field of research. Yet Melanie Mitchell and others have questioned whether the term still has meaning. AGI has been subject to so much hype and speculation that it has become something of a Rorschach test. Mitchell points out that the debate will only be settled through long-term, scientific investigation. To that end, here is a short, accessible and provocative overview of AGI. I compare definitions of intelligence, settling on intelligence in terms of adaptation and AGI as an artificial scientist. Taking my cue from Sutton's Bitter Lesson, I describe two foundational tools used to build adaptive systems: search and approximation. I compare their pros, cons, hybrids, and architectures like o3, AlphaGo, AERA, NARS and Hyperon. I then discuss overall meta-approaches to making systems behave more intelligently. I divide them into scale-maxing, simp-maxing and w-maxing, based on the Bitter Lesson, Ockham's Razor and Bennett's Razor respectively. These maximise resources, simplicity of form, and the weakness of constraints on functionality. I discuss examples including AIXI, the free energy principle and The Embiggening of language models. I conclude that, though scale-maxed approximation dominates, AGI will be a fusion of tools and meta-approaches. The Embiggening was enabled by improvements in hardware. Now the bottlenecks are sample and energy efficiency.