Large-scale sparse multi-objective optimization problems (LSMOPs) are prevalent in real-world applications such as adversarial attacks, critical node detection, and sparse signal reconstruction, where optimal solutions typically contain only a few nonzero decision variables. Because function evaluation in LSMOPs often relies on large-scale datasets involving numerous decision variables, the search space becomes extremely high-dimensional. The coexistence of sparsity and high dimensionality greatly intensifies the conflict between exploration and exploitation, making it difficult for existing multi-objective evolutionary algorithms (MOEAs) to identify the critical nonzero decision variables within a limited number of function evaluations. To address this challenge, this paper proposes an evolutionary algorithm with probabilistic annealing for large-scale sparse multi-objective optimization. The algorithm is driven by two probability vectors with distinct entropy characteristics: a convergence-oriented probability vector with relatively low entropy ensures stable exploitation, whereas an annealed probability vector with gradually decreasing entropy enables an adaptive transition from global exploration to local refinement. By combining these complementary search dynamics, the proposed algorithm maintains a dynamic equilibrium between exploration and exploitation. Experimental results on benchmark problems and real-world applications demonstrate that the proposed algorithm outperforms state-of-the-art evolutionary algorithms in terms of both convergence and diversity.
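The annealing idea can be illustrated with a minimal sketch. Here the annealed vector is linearly interpolated from a maximum-entropy state (every variable selected with probability 0.5) toward a hypothetical convergence-oriented vector `p_conv`; the interpolation schedule and the example values of `p_conv` are illustrative assumptions, not the paper's actual update rule.

```python
import numpy as np

def annealed_mask(p_conv, gen, max_gen, rng):
    """Sample a binary mask marking which decision variables are nonzero.

    p_conv : convergence-oriented probability vector (low entropy),
             assumed here to be estimated from elite solutions.
    Early in the run the annealed vector is near 0.5 everywhere
    (high entropy, global exploration); as generations progress it
    sharpens toward p_conv (low entropy, local refinement).
    """
    t = gen / max_gen                       # search progress in [0, 1]
    p_anneal = (1 - t) * 0.5 + t * p_conv   # entropy decreases with t
    return rng.random(p_anneal.shape) < p_anneal

rng = np.random.default_rng(0)
p_conv = np.array([0.95, 0.02, 0.03, 0.90, 0.01])   # hypothetical values
mask_early = annealed_mask(p_conv, gen=0, max_gen=100, rng=rng)
mask_late = annealed_mask(p_conv, gen=100, max_gen=100, rng=rng)
```

Late-stage masks concentrate on the few variables that `p_conv` rates as likely nonzero, which is what enforces sparsity in the offspring.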