Neuromorphic computing seeks to replicate the remarkable efficiency, flexibility, and adaptability of the human brain in artificial systems. Unlike conventional digital approaches, which suffer from the von Neumann bottleneck and depend on massive computational and energy resources, neuromorphic systems exploit brain-inspired principles of computation to achieve orders of magnitude greater energy efficiency. By drawing on insights from a wide range of disciplines, including artificial intelligence, physics, chemistry, biology, neuroscience, cognitive science, and materials science, neuromorphic computing promises to deliver intelligent systems that are sustainable, transparent, and widely accessible. A central challenge, however, is to identify a unifying theoretical framework capable of bridging these diverse disciplines. We argue that dynamical systems theory provides such a foundation. Rooted in differential calculus, it offers a principled language for modeling inference, learning, and control in both natural and artificial substrates. Within this framework, noise can be harnessed as a resource for learning, while differential genetic programming enables the discovery of dynamical systems that implement adaptive behaviors. Embracing this perspective paves the way toward emergent neuromorphic intelligence, in which intelligent behavior arises from the dynamics of physical substrates, advancing both the science and the sustainability of AI.
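To make the dynamical-systems perspective concrete, the following is a minimal toy sketch, not an implementation from this work: inference and learning are both expressed as ordinary differential equations and integrated with the Euler method. All names (`simulate`, `tau_v`, `tau_w`, the scalar leaky-integrator unit, the error-driven weight drift) are illustrative assumptions chosen for this example, not constructs defined in the abstract.

```python
# Toy illustration (assumed, not from the paper): computation as coupled ODEs.
# Fast dynamics: a leaky-integrator state v relaxes toward its drive w * x
# (inference as relaxation to a fixed point).
# Slow dynamics: the weight w drifts to reduce the error (v - y_target)
# (learning as a second, slower dynamical process).

def simulate(x=1.0, y_target=0.5, tau_v=0.1, tau_w=10.0, dt=0.01, steps=20000):
    v, w = 0.0, 0.0
    for _ in range(steps):
        dv = (-v + w * x) / tau_v          # fast inference dynamics
        dw = -(v - y_target) * x / tau_w   # slow learning dynamics
        v += dt * dv
        w += dt * dw
    return v, w

v, w = simulate()
print(round(v, 3), round(w, 3))  # both settle near the target 0.5
```

The two timescales (`tau_v` much smaller than `tau_w`) mirror the separation between rapid inference and gradual adaptation that the dynamical-systems framing emphasizes; a noise term could be added to the weight equation to sketch noise-assisted exploration, which the abstract highlights as a learning resource.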