Variants of the GSEMO algorithm using multi-objective formulations have been successfully analyzed and applied to optimize chance-constrained submodular functions. However, because the population size of the GSEMO algorithm considered in these studies grows during the run, the approach becomes ineffective if the number of trade-off solutions obtained increases quickly during the optimization process. In this paper, we apply the sliding-selection approach introduced in [21] to the optimization of chance-constrained monotone submodular functions. We theoretically analyze the resulting SW-GSEMO algorithm, which successfully limits the population size, a key factor impacting the runtime, and show that this allows it to obtain better runtime guarantees than the best ones currently known for the GSEMO. In our experimental study, we compare the performance of the SW-GSEMO to the GSEMO and NSGA-II on the maximum coverage problem under a chance constraint and show that the SW-GSEMO outperforms the other two approaches in most cases. To gain additional insights into the optimization behavior of the SW-GSEMO, we visualize its selection behavior during the optimization process and show that it obtains the highest-quality solutions across various instances.
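To make the sliding-selection idea concrete, the following is a minimal, illustrative sketch of a GSEMO-style loop with sliding-window parent selection on a toy monotone submodular objective (set coverage under a cardinality budget, used here as a stand-in for the chance constraint). All names are hypothetical, and the window (width 1 around a target cost that slides linearly from 0 to the budget over the run) is a simplification of the scheme in [21], not the paper's exact definition.

```python
import random

def coverage(sel, sets):
    # Monotone submodular objective: size of the union of the chosen sets.
    covered = set()
    for i, bit in enumerate(sel):
        if bit:
            covered |= sets[i]
    return len(covered)

def dominates(a, b):
    # Strict Pareto dominance for maximization on fitness tuples.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def weakly_dominates(a, b):
    return all(x >= y for x, y in zip(a, b))

def sw_gsemo(sets, budget, t_max, seed=0):
    # Illustrative SW-GSEMO sketch: GSEMO with parent selection restricted
    # to a cost window that slides from 0 to `budget` over t_max iterations.
    rng = random.Random(seed)
    n = len(sets)

    def fitness(sel):
        cost = sum(sel)
        # Two objectives: maximize coverage (infeasible solutions penalized),
        # minimize cost (stored negated so "larger is better" throughout).
        f = coverage(sel, sets) if cost <= budget else -1
        return (f, -cost)

    x = [0] * n
    pop = {tuple(x): fitness(x)}           # Pareto-front archive
    for t in range(1, t_max + 1):
        c_hat = t / t_max * budget         # sliding target cost
        window = [s for s, fit in pop.items()
                  if c_hat - 1 <= -fit[1] <= c_hat]
        # Pick the parent from the window; fall back to the whole
        # population if the window is empty.
        parent = list(rng.choice(window if window else list(pop)))
        # Standard bit-flip mutation with rate 1/n.
        child = [b ^ (rng.random() < 1 / n) for b in parent]
        cf = fitness(child)
        if not any(dominates(fit, cf) for fit in pop.values()):
            # Keep the child; discard archive members it weakly dominates.
            pop = {s: fit for s, fit in pop.items()
                   if not weakly_dominates(cf, fit)}
            pop[tuple(child)] = cf
    # Report the best feasible coverage found.
    return max(fit[0] for fit in pop.values() if -fit[1] <= budget)
```

The window keeps parent selection focused on the constraint-value region that matters at each stage of the run, which is what bounds the effective population size driving the improved runtime guarantees discussed above.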