The NeuroEvolution of Augmenting Topologies (NEAT) algorithm has received considerable recognition in the field of neuroevolution. Its effectiveness derives from initiating with simple networks and incrementally evolving both their topologies and weights. Although its capability across various challenges is evident, the algorithm's computational efficiency remains an impediment, limiting its scalability potential. In response, this paper introduces a tensorization method for the NEAT algorithm, enabling the transformation of its diverse network topologies and associated operations into uniformly shaped tensors for computation. This advancement facilitates the execution of the NEAT algorithm in a parallelized manner across the entire population. Furthermore, we develop TensorNEAT, a library that implements the tensorized NEAT algorithm and its variants, such as CPPN and HyperNEAT. Building upon JAX, TensorNEAT promotes efficient parallel computations via automated function vectorization and hardware acceleration. Moreover, the TensorNEAT library supports various benchmark environments, including Gym, Brax, and gymnax. Through evaluations across a spectrum of robotics control environments in Brax, TensorNEAT achieves up to 500x speedups compared to existing implementations such as NEAT-Python. The source code is available at: https://github.com/EMI-Group/tensorneat.
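To illustrate the core idea of population-wide vectorization, the following is a minimal sketch (not TensorNEAT's actual API): each genome's variable topology is padded into fixed-shape tensors (a weight matrix plus a validity mask), so that a single forward function can be batched over the whole population with `jax.vmap`. The names `MAX_NODES`, `forward`, and the mask-based encoding are illustrative assumptions, not the library's internal representation.

```python
import jax
import jax.numpy as jnp

# Illustrative constants (assumed, not from TensorNEAT): every genome is
# padded to the same maximum node count so tensor shapes are uniform.
MAX_NODES = 8
POP_SIZE = 4

def forward(weights, mask, inputs):
    # weights, mask: (MAX_NODES, MAX_NODES); inputs: (MAX_NODES,)
    # The mask zeroes out connections absent from this genome's topology,
    # so one propagation step works for any padded network.
    return jnp.tanh((weights * mask).T @ inputs)

key = jax.random.PRNGKey(0)
weights = jax.random.normal(key, (POP_SIZE, MAX_NODES, MAX_NODES))
mask = jax.random.uniform(key, (POP_SIZE, MAX_NODES, MAX_NODES)) < 0.5
inputs = jnp.ones((MAX_NODES,))

# vmap vectorizes forward over the population axis (axis 0 of weights
# and mask); JAX compiles it into one batched, accelerator-friendly kernel.
batched_forward = jax.vmap(forward, in_axes=(0, 0, None))
outputs = batched_forward(weights, mask, inputs)
print(outputs.shape)  # (4, 8)
```

This padded-tensor-plus-mask encoding is what makes heterogeneous topologies amenable to uniform batched computation on GPUs/TPUs.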