The competitive auction framework was first proposed by Goldberg, Hartline, and Wright, who introduced the competitive analysis framework from online algorithm design into the traditional revenue-maximizing auction design problem. While competitive analysis concerns only worst-case bounds, a growing body of work in the online algorithms community studies the learning-augmented framework, in which designers may leverage imperfect machine-learned predictions of unknown information to pursue better theoretical guarantees when the prediction is accurate (consistency), while still maintaining a nearly optimal worst-case ratio (robustness). In this work, we revisit competitive auctions in the learning-augmented setting. Leveraging imperfect predictions of the bidders' private values, we design learning-augmented mechanisms for several competitive auctions with different constraints, including digital goods auctions, limited-supply auctions, and general downward-closed permutation environments. For all these auction environments, our mechanisms achieve $1$-consistency against the strongest benchmark $OPT$, against which no $O(1)$-competitive mechanism exists without predictions. At the same time, our mechanisms maintain $O(1)$-robustness against all benchmarks considered in traditional competitive analysis. To account for possible inaccuracy of the predictions, we provide a reduction that transforms our learning-augmented mechanisms into error-tolerant versions, which ensure satisfactory revenue in scenarios where the prediction error is moderate.
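To make the consistency/robustness trade-off concrete, here is a minimal illustrative sketch (not the paper's actual mechanism, and ignoring incentive constraints) of a digital goods auction that randomizes between a prediction-following posted-price rule and a prediction-free fallback; the function names and the `trust` parameter are hypothetical:

```python
import random

def posted_price_revenue(bids, prices):
    # Each bidder buys iff their value (bid) meets the price posted to them.
    return sum(p for b, p in zip(bids, prices) if b >= p)

def learning_augmented_digital_goods(bids, predictions, trust=0.5, rng=None):
    """Illustrative sketch: with probability `trust`, post each bidder their
    predicted value; otherwise fall back to a uniform price drawn from the
    bids, standing in for a worst-case O(1)-competitive mechanism."""
    rng = rng or random.Random(0)
    if rng.random() < trust:
        # Perfect predictions extract the full surplus sum(v_i) = OPT,
        # which is the source of 1-consistency.
        return posted_price_revenue(bids, predictions)
    # Prediction-free fallback preserves a constant-factor worst-case
    # guarantee (robustness) even when predictions are arbitrarily bad.
    price = rng.choice(bids)
    return posted_price_revenue(bids, [price] * len(bids))
```

With exact predictions, `learning_augmented_digital_goods(bids, bids, trust=1.0)` returns the full surplus `sum(bids)`; with wrong predictions, the fallback branch still yields revenue tied to the bids rather than the predictions.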