Current AI governance frameworks, including regulatory benchmarks for accuracy, latency, and energy efficiency, were built for static, centrally trained artificial neural networks running on von Neumann hardware. NeuroAI systems, implemented on neuromorphic hardware via spiking neural networks, break these assumptions. This paper examines the limitations of current AI governance frameworks for NeuroAI and argues that assurance and audit methods must co-evolve with these architectures: traditional regulatory metrics should be aligned with the physics, learning dynamics, and embodied efficiency of brain-inspired computation to enable technically grounded assurance.