We study a natural combinatorial single-principal multi-agent contract design problem, in which a principal motivates a team of agents to exert effort toward a given task. At the heart of our model is a reward function, which maps the agent efforts to an expected reward of the principal. We seek to design computationally efficient algorithms for finding optimal (or near-optimal) linear contracts for reward functions that belong to the complement-free hierarchy. Our first main result gives constant-factor approximation algorithms for submodular and XOS reward functions, with value oracles for submodular reward functions and value and demand oracles for XOS reward functions. It relies on an unconventional use of ``prices'' and (approximate) demand queries for selecting the set of agents that the principal should contract with, and exploits a novel scaling property of XOS functions and their marginals, which may be of independent interest. As our second main result, we show that a constant approximation is the best we can get for submodular reward functions, even with both value and demand oracles. For the larger class of subadditive reward functions, we establish an $\Omega(\sqrt{n})$ impossibility for settings with $n$ agents. A striking feature of this impossibility is that it applies to subadditive functions that are constant-factor close to submodular. This rapid degradation presents a surprising departure from previous literature, e.g., on combinatorial auctions, where approximation guarantees tend to deteriorate more gradually.
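To make the oracle model concrete, the following is a minimal, hypothetical sketch (not from the paper): an XOS function represented explicitly as a maximum over additive clauses, together with a brute-force demand oracle that, given item prices, returns a utility-maximizing set. The clause representation and the exhaustive search are illustrative assumptions; the paper's algorithms only assume oracle access, not an explicit representation.

```python
from itertools import combinations

def xos_value(clauses, S):
    """Value oracle for an XOS function f(S) = max over additive clauses
    of the clause's total weight on S. `clauses` is a list of weight lists."""
    if not S:
        return 0
    return max(sum(a[i] for i in S) for a in clauses)

def demand_query(clauses, prices):
    """Brute-force demand oracle: return a set S maximizing
    f(S) - sum of prices over S (exponential time; for illustration only)."""
    n = len(prices)
    best_set, best_util = frozenset(), 0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            util = xos_value(clauses, S) - sum(prices[i] for i in S)
            if util > best_util:
                best_set, best_util = frozenset(S), util
    return best_set, best_util

# Example: 3 items, two additive clauses, uniform prices of 1.
clauses = [[3, 0, 1], [0, 2, 2]]
print(demand_query(clauses, [1, 1, 1]))
```

For submodular functions, exact demand queries are NP-hard in general, which is why the value-oracle and demand-oracle results in the abstract are stated separately.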