This paper introduces the notion of upper-linearizable/quadratizable functions, a class that generalizes concavity and DR-submodularity across several settings, including monotone and non-monotone cases over different convex sets. A general meta-algorithm is devised to convert algorithms for linear/quadratic maximization into algorithms that optimize upper-linearizable/quadratizable functions, yielding a unified approach to concave and DR-submodular optimization. The results are extended to multiple feedback settings, enabling conversions between semi-bandit/first-order feedback and bandit/zeroth-order feedback, as well as between first/zeroth-order feedback and semi-bandit/bandit feedback. Leveraging this framework, new algorithms are derived by using existing convex-optimization results as base algorithms, improving upon the state of the art in several cases. Dynamic and adaptive regret guarantees are obtained for DR-submodular maximization; these are the first algorithms to achieve such guarantees in these settings. Notably, these advancements are achieved under fewer assumptions than existing results require, underscoring the framework's broad applicability and its theoretical contributions to non-convex optimization.