Addressing real-world optimization challenges requires not only advanced metaheuristics but also continuous refinement of their internal mechanisms. This paper explores the integration of machine learning, in the form of neural surrogate models, into metaheuristics through a novel lens: energy consumption. While surrogates are widely used to reduce the computational cost of expensive objective functions, their combined impact on energy efficiency, algorithmic performance, and solution accuracy remains largely unquantified. We provide a critical investigation into this intersection, aiming to advance the design of energy-aware, surrogate-assisted search algorithms. Our experiments reveal substantial benefits: employing a state-of-the-art pre-trained surrogate can reduce energy consumption by up to 98\%, execution time by approximately 98\%, and memory usage by around 99\%. Moreover, increasing the training dataset size further enhances these gains by lowering the per-use computational cost. Static pre-training and continuous (iterative) retraining offer complementary advantages: the former favors time and energy savings, whereas the latter favors accuracy and overall cost across problems. Surrogates can also degrade cost and accuracy in some cases, and therefore cannot be blindly adopted. These findings support a more holistic approach to surrogate-assisted optimization, integrating energy alongside time and predictive accuracy in performance assessments.