We examine the complexity of computing welfare- and revenue-maximizing equilibria in autobidding second-price auctions subject to return-on-spend (RoS) constraints. We show that computing an autobidding equilibrium that approximates the welfare-optimal one within a factor of $2 - \varepsilon$ is NP-hard for any constant $\varepsilon > 0$. Moreover, for any constant $\varepsilon > 0$, it is NP-hard to decide whether there exists an autobidding equilibrium attaining a $1/2 + \varepsilon$ fraction of the optimal welfare, where the optimum is taken over all outcomes, unfettered by equilibrium constraints. This hardness result is tight because the price of anarchy (PoA) is at most $2$, and it shows that deciding whether a non-trivial autobidding equilibrium exists -- one even marginally better than the worst-case guarantee -- is intractable. For revenue, we establish a stronger, logarithmic inapproximability bound, and under the projection games conjecture our reduction rules out even a polynomial approximation factor. These results significantly strengthen the APX-hardness of Li and Tang (AAAI '24). Furthermore, we refine our reduction to the setting with ML advice about the buyers' valuations, again revealing a close connection between the inapproximability threshold and PoA bounds. Finally, we examine relaxed notions of equilibrium attained by simple learning algorithms, establishing constant-factor inapproximability for both revenue and welfare.
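For context, the following is a minimal sketch of the RoS constraint in the standard autobidding model; the notation ($v_{ij}$, $p_{ij}$, $x_{ij}$, $\tau_i$) is assumed here for illustration and may differ from the paper's own formulation.

```latex
% A minimal sketch of the standard RoS-constrained autobidding objective
% (assumed formulation; notation is illustrative, not taken from the paper).
% Buyer $i$ has value $v_{ij}$ for query $j$; $x_{ij} \in \{0,1\}$ is the
% allocation and $p_{ij}$ the second-price payment. Each autobidder
% maximizes total value subject to its RoS constraint:
\[
  \max_{x_i} \; \sum_{j} v_{ij} x_{ij}
  \quad \text{s.t.} \quad
  \sum_{j} v_{ij} x_{ij} \;\ge\; \tau_i \sum_{j} p_{ij} x_{ij},
\]
% where $\tau_i > 0$ is buyer $i$'s target return-on-spend ratio, often
% normalized to $\tau_i = 1$, i.e., aggregate value must cover aggregate spend.
```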