Dynamic mechanism design studies how mechanism designers should allocate resources among agents in a time-varying environment. We consider the setting in which agents interact with the mechanism designer over an episodic Markov Decision Process (MDP) whose reward functions and transition kernels are both unknown. We focus on the online setting with linear function approximation and propose novel learning algorithms to recover the dynamic Vickrey-Clarke-Groves (VCG) mechanism over multiple rounds of interaction. A key contribution of our approach is incorporating reward-free online Reinforcement Learning (RL) to aid exploration over a rich policy space when estimating prices in the dynamic VCG mechanism. We show that the regret of our proposed method is upper bounded by $\tilde{\mathcal{O}}(T^{2/3})$, where $T$ is the total number of rounds, and we further establish a matching $\Omega(T^{2/3})$ lower bound, showing that our algorithm is order-optimal. Our work establishes a regret guarantee for online RL in solving dynamic mechanism design problems without prior knowledge of the underlying model.