Algorithms with predictions is a recent framework for decision-making under uncertainty that leverages the power of machine-learned predictions without making any assumptions about their quality. The goal in this framework is for algorithms to achieve improved performance when the predictions are accurate while maintaining acceptable guarantees when the predictions are erroneous. A serious concern with algorithms that use predictions is that these predictions can be biased and, as a result, cause the algorithm to make decisions that are deemed unfair. We show that this concern manifests itself in the classical secretary problem in the learning-augmented setting -- the state-of-the-art algorithm can have zero probability of accepting the best candidate, which we deem unfair, despite promising to accept a candidate whose expected value is at least $\max\{\Omega(1), 1 - O(\epsilon)\}$ times the optimal value, where $\epsilon$ is the prediction error. We show how to preserve this promise while also guaranteeing to accept the best candidate with probability $\Omega(1)$. Our algorithm and analysis are based on a new "pegging" idea that diverges from existing works and simplifies and unifies some of their results. Finally, we extend our results to the $k$-secretary problem and complement our theoretical analysis with experiments.
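To make the setting concrete, here is a minimal Python sketch contrasting the classical $1/e$-threshold rule for the secretary problem with a naive prediction-based rule. This is an illustration of the framework only, not the paper's algorithm; the function names, the `slack` parameter, and the fallback-to-last-candidate behavior are assumptions made for the sketch.

```python
import math

def classical_secretary(values):
    """Classic 1/e rule: observe the first n/e candidates without
    accepting, then accept the first one who beats all of them."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]  # forced to accept the last candidate

def naive_augmented_secretary(values, predicted_best, slack=0.0):
    """Naive prediction-based rule (illustrative, NOT the paper's
    algorithm): accept the first candidate within `slack` of the
    predicted best value; otherwise accept the last candidate."""
    for v in values:
        if v >= predicted_best - slack:
            return v
    return values[-1]
```

When the prediction is accurate, the naive rule accepts the best candidate; when the prediction is badly off, it can be forced into a poor choice, which is exactly the consistency-robustness tension (and the attendant fairness question) that the abstract describes.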