Randomized search heuristics (RSHs) are known to exhibit a certain robustness to noise. Mathematical analyses that rigorously quantify how robust RSHs are to noisy access to the objective function typically assume that each solution is re-evaluated whenever it is compared to others. This aims to prevent a single noisy evaluation from having a lasting negative effect, but it is computationally expensive and requires the user to foresee that noise is present (in a noise-free setting, one would never re-evaluate solutions). In this work, we conduct the first mathematical runtime analysis of an evolutionary algorithm solving a single-objective noisy problem without re-evaluations. We prove that the $(1+1)$ evolutionary algorithm without re-evaluations can optimize the classic LeadingOnes benchmark under noise rates up to a constant, in sharp contrast to the version with re-evaluations, which tolerates only noise rates of $O(n^{-2} \log n)$. This result suggests that re-evaluations are much less needed than previously thought, and that they can in fact be highly detrimental. The insights from our mathematical proofs indicate that similar results are plausible for other classic benchmarks.
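To make the setting concrete, the following is a minimal sketch of a $(1+1)$ evolutionary algorithm without re-evaluations on a noisy LeadingOnes problem. The one-bit prior noise model (with probability $q$, a uniformly random bit is flipped before evaluation), the function names, and the parameter choices are illustrative assumptions, not taken from the paper; the key point it illustrates is that the parent's (possibly noisy) fitness value is stored once and reused in all later comparisons.

```python
import random

def leading_ones(x):
    """Number of leading one-bits of the bit string x."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def noisy_eval(x, q, rng):
    """One-bit prior noise (an assumed model): with probability q,
    a uniformly random bit is flipped before evaluation."""
    if rng.random() < q:
        y = list(x)
        i = rng.randrange(len(y))
        y[i] = 1 - y[i]
        return leading_ones(y)
    return leading_ones(x)

def one_plus_one_ea_no_reeval(n, q, rng, max_iters=10**6):
    """(1+1) EA without re-evaluations: the parent's possibly noisy
    fitness is evaluated once and never refreshed."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = noisy_eval(x, q, rng)  # stored once, reused in all comparisons
    for t in range(1, max_iters + 1):
        # standard bit mutation: flip each bit independently with prob 1/n
        y = [b ^ (rng.random() < 1 / n) for b in x]
        fy = noisy_eval(y, q, rng)
        if fy >= fx:  # compare against the stored parent value only
            x, fx = y, fy
        if leading_ones(x) == n:  # noise-free optimum check (experiment only)
            return t
    return None  # optimum not reached within the budget

rng = random.Random(42)
print(one_plus_one_ea_no_reeval(20, 0.3, rng))
```

With re-evaluations, one would instead call `noisy_eval(x, q, rng)` afresh in every comparison; omitting that call is exactly the algorithmic difference the analysis studies.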