Designing strategyproof mechanisms for multi-facility location that optimize social cost based on agent preferences has long been challenging, owing to the extensive domain knowledge required and the poor worst-case guarantees achieved. Recently, deep learning models have been proposed as alternatives. However, these models still require some domain knowledge and extensive hyperparameter tuning, and they lack interpretability, which is crucial in practice when transparency of the learned mechanisms is mandatory. In this paper, we introduce a novel approach, named LLMMech, that addresses these limitations by incorporating large language models (LLMs) into an evolutionary framework to generate interpretable, hyperparameter-free, empirically strategyproof, and nearly optimal mechanisms. Our experimental results, evaluated on various problem settings where the social cost is arbitrarily weighted across agents and the agent preferences may not be uniformly distributed, demonstrate that the LLM-generated mechanisms generally outperform existing handcrafted baselines and deep learning models. Furthermore, the generated mechanisms exhibit impressive generalizability to out-of-distribution agent preferences and to larger instances with more agents.