Evolutionary algorithms have been successful in solving multi-objective optimization problems (MOPs). However, as a class of population-based search methods, evolutionary algorithms require a large number of evaluations of the objective functions, preventing their application to a wide range of expensive MOPs. To tackle this challenge, this work proposes, for the first time, a diffusion model that can learn to perform evolutionary multi-objective search, called EmoDM. This is achieved by treating the reversed convergence process of evolutionary search as the forward diffusion and learning the noise distributions from previously solved evolutionary optimization tasks. The pre-trained EmoDM can then generate a set of non-dominated solutions for a new MOP by means of its reverse diffusion, without further evolutionary search, thereby significantly reducing the required number of function evaluations. To enhance the scalability of EmoDM, a mutual-entropy-based attention mechanism is introduced to capture the decision variables that are most important to the objectives. Experimental results demonstrate the competitiveness of EmoDM, in terms of both search performance and computational efficiency, with state-of-the-art evolutionary algorithms on MOPs with up to 5000 decision variables. The pre-trained EmoDM also generalizes well to unseen problems, revealing its strong potential as a general and efficient MOP solver.
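To illustrate the reverse-diffusion idea underlying EmoDM, the toy sketch below runs DDPM-style ancestral sampling that maps a randomly initialized "population" of candidate solutions toward a target region step by step. Everything here is a hypothetical simplification, not the paper's implementation: the learned noise predictor is replaced by a closed-form stand-in (`toy_noise_predictor`) that assumes the clean solutions lie at the origin, which plays the role of the unknown Pareto set, and the schedule `betas` is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                               # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.05, T)   # forward noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_noise_predictor(x, t):
    """Stand-in for a learned denoiser. Since the forward process gives
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, assuming
    clean solutions x0 = 0 makes eps recoverable in closed form."""
    return x / np.sqrt(1.0 - alpha_bars[t])

# Start from pure Gaussian noise: a population of 8 two-variable solutions.
x = rng.standard_normal((8, 2))

# DDPM-style reverse diffusion (ancestral sampling), from t = T-1 down to 0.
for t in reversed(range(T)):
    eps = toy_noise_predictor(x, t)
    mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        x = mean + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    else:
        x = mean  # final step is deterministic

# The denoised population contracts toward the toy "Pareto set" at the origin.
print(np.abs(x).max() < 0.5)
```

In EmoDM, the closed-form predictor would be replaced by noise distributions learned from the population trajectories of previously solved evolutionary optimization tasks, so that reverse diffusion reproduces the convergence behavior of an evolutionary search without any new function evaluations during sampling.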