Evolving neural network architectures is a computationally demanding process. Traditional methods often require an extensive search through large architectural spaces and offer limited understanding of how structural modifications influence model behavior. This paper introduces \gls{ngspt}, a novel neuroevolution algorithm based on two key innovations. First, we adapt geometric semantic operators~(GSOs) from genetic programming to neural network evolution, ensuring that architectural changes produce predictable effects on network semantics within a unimodal error surface. Second, we introduce a novel operator (DGSM) that enables controlled reduction of network size while preserving the semantic properties of~GSOs. Unlike traditional approaches, \gls{ngspt}'s evaluation mechanism only requires computing the semantics of newly added components, enabling efficient population-based training and a comprehensive exploration of the search space at a fraction of the computational cost. Experimental results on four regression benchmarks show that \gls{ngspt} consistently evolves compact neural networks that achieve performance comparable to or better than established methods in the literature, such as standard neural networks, SLIM-GSGP, TensorNEAT, and SLM.