Maximizing submodular objectives under constraints is a fundamental problem in machine learning and optimization. We study the maximization of a nonnegative, non-monotone $\gamma$-weakly DR-submodular function over a down-closed convex body. Our main result is an approximation algorithm whose guarantee depends smoothly on $\gamma$: when $\gamma = 1$ (the DR-submodular case) the bound recovers the $0.401$ approximation factor, and for $\gamma < 1$ the guarantee degrades gracefully while still improving upon previously reported bounds for $\gamma$-weakly DR-submodular maximization under the same constraints. Our approach combines a Frank-Wolfe-guided continuous-greedy framework with a $\gamma$-aware double-greedy step, yielding a simple yet effective procedure for handling non-monotonicity. This results in state-of-the-art guarantees for non-monotone $\gamma$-weakly DR-submodular maximization over down-closed convex bodies.
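The abstract gives no pseudocode; as a rough illustration of the Frank-Wolfe-guided continuous-greedy ingredient mentioned above, the sketch below implements a standard measured continuous-greedy loop for non-monotone DR-submodular maximization over a down-closed body. The names `grad_F` and `lmo` are hypothetical placeholders for a gradient oracle and a linear-maximization oracle over the feasible body; this is a generic sketch of the classical technique, not the paper's algorithm (in particular, it omits the $\gamma$-aware double-greedy step).

```python
import numpy as np

def measured_continuous_greedy(grad_F, lmo, n, T=100):
    """Generic measured continuous-greedy (Frank-Wolfe style) sketch for
    non-monotone DR-submodular maximization over a down-closed convex body.

    grad_F : callable returning (an estimate of) the gradient of F at x.
    lmo    : linear-maximization oracle; lmo(w) returns argmax_{v in P} <w, v>.
    n      : dimension of the decision vector.
    T      : number of Frank-Wolfe steps (step size 1/T).
    """
    x = np.zeros(n)
    dt = 1.0 / T
    for _ in range(T):
        g = grad_F(x)
        # Damp the linear objective coordinate-wise by (1 - x) so coordinates
        # already close to 1 receive less weight; this "measured" correction
        # is the standard device for handling non-monotonicity.
        direction = lmo(g * (1.0 - x))
        # Move along the damped direction; x remains feasible because the body
        # is down-closed and each coordinate is a convex combination of x and 1.
        x = x + dt * direction * (1.0 - x)
    return x

# Toy usage with a concave-quadratic surrogate objective and the box [0, 1]^n
# standing in for a down-closed body (purely illustrative).
n = 5
a = np.ones(n)
H = 0.2 * np.ones((n, n))
grad_F = lambda x: a - H @ x
lmo = lambda w: (w > 0).astype(float)   # box LMO: set to 1 where the weight is positive
x_hat = measured_continuous_greedy(grad_F, lmo, n)
```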