The NeuroEvolution of Augmenting Topologies (NEAT) algorithm has received considerable recognition in the field of neuroevolution. Its effectiveness derives from initializing with simple networks and incrementally evolving both their topologies and weights. Although its capability across various challenges is evident, the algorithm's computational efficiency remains a bottleneck, limiting its scalability. In response, this paper introduces a tensorization method for the NEAT algorithm, enabling the transformation of its diverse network topologies and associated operations into uniformly shaped tensors for computation. This advancement enables the NEAT algorithm to be executed in parallel across the entire population. Furthermore, we develop TensorNEAT, a library that implements the tensorized NEAT algorithm and its variants, such as CPPN and HyperNEAT. Built upon JAX, TensorNEAT delivers efficient parallel computation via automated function vectorization and hardware acceleration. Moreover, the TensorNEAT library supports various benchmark environments, including Gym, Brax, and gymnax. In evaluations across a range of robotics control environments in Brax, TensorNEAT achieves up to 500x speedups over existing implementations such as NEAT-Python. The source code is available at: https://github.com/EMI-Group/tensorneat.
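The core idea of tensorization can be illustrated with a minimal sketch: pad each variable-topology network into fixed-shape tensors (a dense weight matrix plus a node mask), then let `jax.vmap` map a single-network forward pass over the whole population at once. This is an assumed, simplified illustration of the general technique, not TensorNEAT's actual encoding or API; the names `forward`, `MAX_NODES`, and the one-step propagation are hypothetical.

```python
# Illustrative sketch (NOT TensorNEAT's real API): pad variable-size networks
# into uniformly shaped tensors so jax.vmap can evaluate the population in parallel.
import jax
import jax.numpy as jnp

MAX_NODES = 8  # assumed upper bound on network size; unused slots are zero-padded

def forward(weights, mask, x):
    # weights: (MAX_NODES, MAX_NODES) dense weight matrix, zero-padded
    # mask:    (MAX_NODES,) 1.0 for active nodes, 0.0 for padding slots
    # One propagation step; masked (padding) nodes always output zero.
    return jnp.tanh(weights @ x) * mask

pop_size = 4
key = jax.random.PRNGKey(0)
pop_weights = jax.random.normal(key, (pop_size, MAX_NODES, MAX_NODES))
pop_masks = jnp.ones((pop_size, MAX_NODES))
x = jnp.ones((MAX_NODES,))  # a shared input vector

# vmap vectorizes over the population axis; jit compiles for the accelerator.
batched_forward = jax.jit(jax.vmap(forward, in_axes=(0, 0, None)))
outputs = batched_forward(pop_weights, pop_masks, x)
print(outputs.shape)  # one output vector per individual
```

Because every individual shares the same tensor shapes, a single compiled kernel serves the entire population, which is what allows the hardware-accelerated speedups reported above.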