In recent years there has been a tremendous surge in the general capabilities of AI systems, mainly fuelled by training foundation models on internet-scale data. Nevertheless, the creation of open-ended, ever self-improving AI remains elusive. In this position paper, we argue that the ingredients are now in place to achieve open-endedness in AI systems with respect to a human observer. Furthermore, we claim that such open-endedness is an essential property of any artificial superhuman intelligence (ASI). We begin by providing a concrete formal definition of open-endedness through the lens of novelty and learnability. We then illustrate a path towards ASI via open-ended systems built on top of foundation models, capable of making novel, human-relevant discoveries. We conclude by examining the safety implications of generally capable, open-ended AI. We expect that open-ended foundation models will prove to be an increasingly fertile and safety-critical area of research in the near future.