We investigate the continuous non-monotone DR-submodular maximization problem subject to a down-closed, solvable convex constraint. Our first contribution is an example demonstrating that (first-order) stationary points can have arbitrarily bad approximation ratios, and that they typically lie on the boundary of the feasible domain. These findings stand in contrast with the monotone case, where any stationary point yields a $1/2$-approximation (Hassani et al. (2017)). Moreover, this example offers insight into how to design improved algorithms by avoiding bad stationary points, such as the restricted continuous local search algorithm (Chekuri et al. (2014)) and the aided measured continuous greedy (Buchbinder and Feldman (2019)). However, the analyses of these two algorithms work only for the discrete domain, because both invoke the inequality that the multilinear extension of any submodular set function is bounded from below by its Lovász extension. Our second contribution, therefore, is to remove this restriction and show that both algorithms extend to the continuous domain while retaining the same approximation ratios, thereby improving on the approximation ratios of Bian et al. (2017a) for the same problem. Finally, we include numerical experiments demonstrating our algorithms on problems arising from machine learning and artificial intelligence.
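As a minimal illustration of the problem class (an assumed example, not one from the paper): a quadratic $f(x) = h^\top x + \tfrac{1}{2} x^\top H x$ with an entrywise non-positive matrix $H$ is continuous DR-submodular, since its gradient $h + Hx$ is antitone, i.e., $x \le y$ componentwise implies $\nabla f(x) \ge \nabla f(y)$; with $h \ge 0$ it is generally non-monotone on $[0,1]^n$. The sketch below spot-checks this diminishing-returns property numerically.

```python
import numpy as np

# Hypothetical non-monotone DR-submodular quadratic on [0, 1]^n:
#   f(x) = h^T x + (1/2) x^T H x,  with H entrywise non-positive.
# Continuous DR-submodularity <=> gradient is antitone:
#   x <= y (componentwise)  implies  grad f(x) >= grad f(y).

rng = np.random.default_rng(0)
n = 4
H = -rng.uniform(0.0, 1.0, size=(n, n))  # entrywise non-positive
H = (H + H.T) / 2                        # symmetrize
h = rng.uniform(0.0, 1.0, size=n)

def grad_f(x):
    return h + H @ x

def gradient_is_antitone(trials=1000):
    # Numerically spot-check diminishing returns on random pairs x <= y.
    for _ in range(trials):
        x = rng.uniform(0.0, 1.0, size=n)
        y = np.minimum(x + rng.uniform(0.0, 1.0, size=n), 1.0)  # y >= x
        if not np.all(grad_f(x) >= grad_f(y) - 1e-12):
            return False
    return True

print(gradient_is_antitone())  # True: H <= 0 entrywise makes grad f antitone
```

Here `grad_f(x) - grad_f(y) = H (x - y)` is entrywise non-negative because both `H` and `x - y` are entrywise non-positive, which is exactly the diminishing-returns condition.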