Recent advances such as RegretNet, ALGnet, RegretFormer, and CITransNet use deep learning to approximate optimal multi-item auctions by relaxing incentive compatibility (IC) and measuring its violation via ex post regret. However, the true accuracy of these regret estimates remains unclear: computing exact regret is computationally intractable, and current models rely on gradient-based optimizers whose outcomes depend heavily on hyperparameter choices. Through extensive experiments, we show that existing methods systematically underestimate actual regret (in some models, the true regret is several hundred times larger than the reported value), leading to overstated claims of IC and revenue. To address this issue, we derive a lower bound on regret and introduce an efficient item-wise regret approximation. Building on this, we propose a guided refinement procedure that substantially improves regret estimation accuracy while reducing computational cost. Our method provides a more reliable foundation for evaluating incentive compatibility in deep-learning-based auction mechanisms and highlights the need to reassess prior performance claims in this area.
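To make the quantity at issue concrete: ex post regret for bidder i is the gain from the best possible misreport, max over b_i' of u_i(v; (b_i', b_-i)) - u_i(v; b). The sketch below is a minimal, hypothetical illustration (not the paper's method or any of the cited architectures): a toy differentiable "soft" allocation with a pay-your-bid rule, where regret is approximated by a search over misreports. A coarse search, like a gradient method stuck in a poor local optimum, can only lower-bound the true maximum, which is the underestimation failure mode described above. The functions `softmax_alloc`, `utility`, and `ex_post_regret` are illustrative names, not from the source.

```python
import numpy as np

def softmax_alloc(bids, temp=0.5):
    # Toy differentiable allocation: higher bids get a larger share of the item.
    e = np.exp(np.asarray(bids, dtype=float) / temp)
    return e / e.sum()

def utility(i, values, bids, temp=0.5):
    # Quasi-linear utility under a toy pay-your-bid rule:
    # u_i = allocation_i * value_i - allocation_i * bid_i
    a = softmax_alloc(bids, temp)
    return a[i] * values[i] - a[i] * bids[i]

def ex_post_regret(i, values, bids, grid=np.linspace(0.0, 1.0, 201)):
    """Approximate bidder i's ex post regret by searching over misreports.

    Regret_i = max_{b_i'} u_i(v; (b_i', b_-i)) - u_i(v; b).
    Any finite search (or local optimizer) only lower-bounds this maximum,
    so the reported regret can understate the true IC violation.
    """
    truthful = utility(i, values, bids)
    best = truthful
    for b in grid:
        deviation = list(bids)
        deviation[i] = b  # bidder i misreports; others stay fixed
        best = max(best, utility(i, values, deviation))
    return best - truthful

values = [0.8, 0.6]
bids = list(values)  # truthful reports
r = ex_post_regret(0, values, bids)
print(round(r, 4))
```

Under this toy rule, truthful bidding yields zero utility, so a profitable misreport exists and the measured regret is strictly positive; refining the search grid (or the optimizer) can only increase the estimate toward the true regret.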