We consider the problem of learning and using predictions for warm-start algorithms with predictions. In this setting, an algorithm is given an instance of a problem together with a prediction of its solution, and its runtime is bounded by the distance from the predicted solution to the true solution of the instance. Previous work has shown that when instances are drawn i.i.d. from some distribution, it is possible to learn an approximately optimal fixed prediction (Dinitz et al., NeurIPS 2021), and that in the adversarial online case, it is possible to compete with the best fixed prediction in hindsight (Khodak et al., NeurIPS 2022). In this work we give competitive guarantees against stronger benchmarks that consider a set of $k$ predictions $\mathbf{P}$: the "optimal offline cost" to solve an instance with respect to $\mathbf{P}$ is the distance from the true solution to the closest member of $\mathbf{P}$, which is analogous to the $k$-medians objective. In the distributional setting, we show a simple strategy that incurs cost at most an $O(k)$ factor worse than the optimal offline cost. We then show how to leverage learnable coarse information, in the form of partitions of the instance space into groups of "similar" instances, to potentially avoid this $O(k)$ factor. Finally, we consider an online version of the problem, where we compete against offline strategies that may maintain a moving set of $k$ predictions, or "trajectories," and are charged for how much those predictions move. We give an algorithm that does at most $O(k^4 \ln^2 k)$ times as much work as any offline strategy of $k$ trajectories. This algorithm is deterministic (robust to an adaptive adversary) and oblivious to the setting of $k$, so the guarantee holds for all $k$ simultaneously.
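To make the benchmark concrete, here is a minimal sketch (not the paper's method) that models solutions as integers on a line, computes the offline benchmark cost as the distance to the nearest member of $\mathbf{P}$, and shows one natural way an $O(k)$ factor can arise: running the warm-start algorithm from all $k$ predictions in parallel does at most $k$ times the work of the single best prediction. All names and the 1-D distance model are illustrative assumptions.

```python
def warm_start_cost(prediction, true_solution):
    # Model of the setting: runtime is bounded by the distance from the
    # predicted solution to the true solution (here, 1-D absolute distance).
    return abs(prediction - true_solution)

def offline_benchmark_cost(predictions, true_solution):
    # "Optimal offline cost" w.r.t. a set P: distance from the true solution
    # to the closest member of P (analogous to the k-medians objective).
    return min(warm_start_cost(p, true_solution) for p in predictions)

def parallel_tries_cost(predictions, true_solution):
    # Naive strategy: run the algorithm from every prediction in parallel and
    # stop when the first run finishes. Total work is at most k times the
    # best prediction's cost -- one way an O(k) overhead factor arises.
    k = len(predictions)
    return k * offline_benchmark_cost(predictions, true_solution)

P = [3, 10, 42]   # hypothetical set of k = 3 predictions
s = 11            # true solution of the instance
print(offline_benchmark_cost(P, s))  # distance to the nearest prediction, 10
print(parallel_tries_cost(P, s))     # at most k times that benchmark
```

Avoiding this factor-of-$k$ overhead, via coarse side information about which group of "similar" instances the input belongs to, is exactly what the distributional results in the paper target.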