Addressing real-world optimization challenges requires not only advanced metaheuristics but also continuous refinement of their internal mechanisms. This paper explores the integration of machine learning, in the form of neural surrogate models, into metaheuristics through a novel lens: energy consumption. While surrogates are widely used to reduce the computational cost of expensive objective functions, their combined impact on energy efficiency, algorithmic performance, and solution accuracy remains largely unquantified. We provide a critical investigation of this intersection, aiming to advance the design of energy-aware, surrogate-assisted search algorithms. Our experiments reveal substantial benefits: employing a state-of-the-art pre-trained surrogate can reduce energy consumption by up to 98\%, execution time by approximately 98\%, and memory usage by around 99\%. Moreover, increasing the training dataset size further enhances these gains by lowering the per-use computational cost. Static pre-training and continuous (iterative) retraining offer complementary advantages: the former favors time and energy savings, whereas the latter favors accuracy and overall cost across problems. Surrogates can, however, also degrade cost and accuracy in some cases, so they cannot be adopted blindly. These findings support a more holistic approach to surrogate-assisted optimization, one that integrates energy alongside time and predictive accuracy in performance assessments.