The multi-objective bandit setting has traditionally been regarded as more complex than the single-objective case, since multiple objectives must be optimized simultaneously. In contrast to this prevailing view, we demonstrate that when multiple good arms exist across the objectives, they can induce a surprising benefit: implicit exploration. Under this condition, we show that simple algorithms that greedily select actions in most rounds can nonetheless achieve strong performance, both theoretically and empirically. To our knowledge, this is the first study to establish implicit exploration in both multi-objective and parametric bandit settings without any distributional assumptions on the contexts. We further introduce a framework for effective Pareto fairness, which provides a principled approach to rigorously analyzing the fairness of multi-objective bandit algorithms.
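To make the greedy selection idea concrete, the following is a minimal illustrative sketch (not the paper's algorithm): a multi-objective bandit in which each arm yields a reward vector, empirical mean vectors are maintained per arm, and after one initialization pull per arm the learner greedily plays an arm from the *estimated* Pareto front. All function names, the arm means, and the noise model are assumptions made for illustration.

```python
import random


def pareto_front(means):
    """Indices of arms whose mean vectors are not dominated by any other arm.

    Arm j dominates arm i if j is at least as good in every objective
    and strictly better in at least one.
    """
    front = []
    for i, m in enumerate(means):
        dominated = any(
            all(o[k] >= m[k] for k in range(len(m)))
            and any(o[k] > m[k] for k in range(len(m)))
            for j, o in enumerate(means)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front


def greedy_mo_bandit(true_means, horizon, seed=0):
    """Illustrative greedy play: pull each arm once, then always pick an arm
    from the Pareto front of the empirical mean estimates.

    true_means: list of per-arm mean reward vectors (hypothetical instance).
    Returns the sequence of pulled arm indices.
    """
    rng = random.Random(seed)
    K, d = len(true_means), len(true_means[0])
    counts = [0] * K
    sums = [[0.0] * d for _ in range(K)]
    pulls = []
    for t in range(horizon):
        if t < K:
            arm = t  # one initialization pull per arm
        else:
            est = [[s / counts[i] for s in sums[i]] for i in range(K)]
            arm = rng.choice(pareto_front(est))  # greedy among estimated front
        # Gaussian reward noise is an assumption for this toy example.
        reward = [true_means[arm][k] + rng.gauss(0.0, 0.1) for k in range(d)]
        counts[arm] += 1
        for k in range(d):
            sums[arm][k] += reward[k]
        pulls.append(arm)
    return pulls
```

In a two-objective toy instance with means (1, 0), (0, 1), and a dominated arm (0.05, -0.1), this greedy rule concentrates its pulls on the two non-dominated arms once their estimates separate; the strictly dominated arm is rarely revisited, which is the kind of behavior the implicit-exploration analysis makes precise under its own conditions.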