Artificial intelligence (AI) faces a trifecta of grand challenges: the Energy Wall, the Alignment Problem, and the Leap from Narrow AI to AGI. We present SAGI, a Systematic Approach to AGI that applies system design principles to overcome the energy wall and alignment challenges. This paper asserts that AGI can be realized through a multiplicity of design-specific pathways and customized through system design rather than a singular overarching architecture. AGI systems may exhibit diverse architectural configurations and capabilities, contingent upon their intended use cases. Alignment, broadly recognized as AI's most formidable challenge, is the one that depends most critically on system design and, as a foundational criterion for AGI, serves as its primary driving force. Capturing the complexities of human morality for alignment requires architectural support for representing the intricacies of moral decision-making and for pervasive ethical processing at every level, with performance reliability exceeding that of human moral judgment. This calls for an architecture that is more robust with respect to safety and alignment goals, without replicating or resembling the human brain. We argue that system design (such as feedback loops and energy and performance optimization) on learning substrates capable of learning their own system architecture is more fundamental to achieving AGI goals and guarantees than classical symbolic, emergentist, and hybrid approaches. By learning the system architecture itself, the resulting AGI is a product not of spontaneous emergence but of systematic design and deliberate engineering, with core features, including an integrated moral architecture, deeply embedded within it. The approach aims to guarantee design goals such as alignment and efficiency through a self-learned system architecture.
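The abstract does not specify an implementation, but the core idea of a feedback loop that learns the system architecture while treating alignment and energy as first-class design objectives can be illustrated with a minimal, purely hypothetical sketch. All names here (Architecture, task_score, energy_cost, alignment_ok, mutate, search) are invented for illustration and are not part of SAGI.

```python
import random
from dataclasses import dataclass

# Illustrative sketch only: a feedback loop that searches over candidate system
# architectures and scores each on task performance and energy cost, while an
# embedded alignment check acts as a hard constraint rather than a post-hoc filter.

@dataclass
class Architecture:
    depth: int          # number of processing stages (hypothetical knob)
    width: int          # units per stage (hypothetical knob)
    moral_filter: bool  # whether an ethical-screening stage is wired in

def task_score(arch: Architecture) -> float:
    # Stand-in for an empirical task evaluation.
    return 1.0 - 1.0 / (arch.depth * arch.width)

def energy_cost(arch: Architecture) -> float:
    # Stand-in for measured energy consumption (the "energy wall" term).
    return 0.001 * arch.depth * arch.width

def alignment_ok(arch: Architecture) -> bool:
    # Hard constraint: candidates without the embedded moral stage are rejected.
    return arch.moral_filter

def objective(arch: Architecture) -> float:
    # Trade performance against energy; alignment is enforced separately.
    return task_score(arch) - energy_cost(arch)

def mutate(arch: Architecture) -> Architecture:
    # Propose a neighboring architecture; the moral stage is never mutated away.
    return Architecture(
        depth=max(1, arch.depth + random.choice((-1, 0, 1))),
        width=max(1, arch.width + random.choice((-8, 0, 8))),
        moral_filter=True,
    )

def search(steps: int = 200) -> Architecture:
    # Feedback loop: keep any aligned candidate that improves the trade-off.
    best = Architecture(depth=4, width=64, moral_filter=True)
    for _ in range(steps):
        candidate = mutate(best)
        if alignment_ok(candidate) and objective(candidate) > objective(best):
            best = candidate
    return best

if __name__ == "__main__":
    print(search())
```

The design choice the sketch highlights is that the alignment check gates every candidate inside the learning loop, so the architecture that emerges from the search cannot drop its moral component in pursuit of performance or efficiency.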