The Counterfactual Regret Minimization (CFR) algorithm and its variants have enabled the development of poker bots capable of beating the best human players in heads-up (1v1) cash games and of competing with them in six-player formats. However, CFR's computational complexity grows exponentially with the number of players. Furthermore, in games with three or more players, following a Nash equilibrium no longer guarantees a non-losing outcome. These limitations, among others, significantly restrict the applicability of CFR to the most popular format: tournaments. Motivated by the recent success of Large Language Models (LLMs) in chess and Diplomacy, we present SpinGPT, the first LLM tailored to Spin & Go, a popular three-player online poker format. SpinGPT is trained in two stages: (1) Supervised Fine-Tuning on 320k high-stakes expert decisions; (2) Reinforcement Learning on 270k solver-generated hands. Our results show that SpinGPT matches the solver's actions in 78% of decisions (tolerant accuracy). Augmented with a simple deep-stack heuristic, it achieves 13.4 ± 12.9 BB/100 against Slumbot in heads-up play over 30,000 hands (95% CI). These results suggest that LLMs could offer a new way to tackle multi-player imperfect-information games such as poker.