Hyperparameter tuning is a critical yet computationally expensive step in training neural networks, particularly when the search space is high-dimensional and nonconvex. Metaheuristic optimization algorithms are often used for this purpose because they are derivative-free and robust against local optima. In this work, we propose Golden Eagle Genetic Optimization (GEGO), a hybrid metaheuristic that integrates the population movement strategy of Golden Eagle Optimization (GEO) with the genetic operators of selection, crossover, and mutation. The main novelty of GEGO lies in embedding the genetic operators directly into the iterative search process of GEO, rather than applying them as a separate evolutionary stage. This design improves population diversity during the search and reduces premature convergence while preserving the exploration behavior of GEO. GEGO is evaluated on standard unimodal, multimodal, and composition benchmark functions from the CEC2017 suite, where it consistently outperforms its constituent algorithms and several classical metaheuristics in terms of solution quality and robustness. The algorithm is further applied to hyperparameter tuning of artificial neural networks on the MNIST dataset, where GEGO achieves higher classification accuracy and more stable convergence than GEO and GA. These results indicate that GEGO provides a balanced exploration-exploitation tradeoff and is well suited for hyperparameter optimization under constrained computational budgets.
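The core idea of embedding genetic operators inside each GEO iteration, rather than running them as a separate stage, can be illustrated with a minimal sketch. The following is not the authors' implementation: the movement rule is a simplified stand-in for GEO's attack/cruise dynamics, and all function names, parameter values, and operator choices (tournament selection, uniform crossover, Gaussian mutation) are assumptions made for illustration only.

```python
import random

def sphere(x):
    # Unimodal benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin.
    return sum(v * v for v in x)

def gego_sketch(f, dim=5, pop_size=20, iters=200, bounds=(-5.0, 5.0),
                cx_rate=0.7, mut_rate=0.1, seed=0):
    """Hypothetical GEGO-style loop: GEO-like movement followed by
    genetic operators within the SAME iteration (the embedding idea)."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(hi, max(lo, v))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for t in range(iters):
        # Simplified GEO-style movement: each eagle blends an "attack"
        # component (toward another eagle's position, treated as prey)
        # with a "cruise" component (random drift). The attack weight
        # decays and the cruise weight grows over iterations.
        attack_w = 2.0 * (1 - t / iters)
        cruise_w = 0.5 * (t / iters)
        moved = []
        for eagle in pop:
            prey = pop[rng.randrange(pop_size)]
            cand = [clip(e + attack_w * rng.random() * (p - e)
                           + cruise_w * rng.uniform(-1, 1))
                    for e, p in zip(eagle, prey)]
            moved.append(cand if f(cand) < f(eagle) else eagle)
        # Genetic operators embedded in the same iteration:
        # tournament selection, uniform crossover, Gaussian mutation.
        offspring = []
        for _ in range(pop_size):
            a = min(rng.sample(moved, 2), key=f)   # tournament winner 1
            b = min(rng.sample(moved, 2), key=f)   # tournament winner 2
            if rng.random() < cx_rate:
                child = [x if rng.random() < 0.5 else y
                         for x, y in zip(a, b)]    # uniform crossover
            else:
                child = list(a)
            child = [clip(v + rng.gauss(0, 0.1))
                     if rng.random() < mut_rate else v
                     for v in child]               # Gaussian mutation
            offspring.append(child)
        # Elitist truncation keeps the best pop_size individuals.
        pop = sorted(moved + offspring, key=f)[:pop_size]
        best = min(best, pop[0], key=f)
    return best, f(best)

best, val = gego_sketch(sphere)
```

Running the sketch on the sphere function drives the objective toward zero; the interleaving of movement and genetic operators in every iteration is what distinguishes this structure from a two-stage GEO-then-GA pipeline.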