Strategic decision-making in multi-agent settings is a key challenge for large language models (LLMs), particularly when coordination and negotiation must unfold over extended conversations. While recent work has explored the use of LLMs in isolated decision tasks, little attention has been given to optimizing long-term objectives through dialogue. We introduce \textbf{GameTalk}, a framework for training LLMs to make strategic decisions via multi-turn interactions. Unlike prior work that focuses on single-turn objectives or static action prediction, we train LLMs to optimize a global objective across full conversations, adapting fine-tuning methods such as GRPO, DPO, and STaR to incorporate reward signals that depend on the entire interaction. We evaluate this approach on a suite of increasingly complex games designed to stress different aspects of reasoning, coordination, and opponent modeling. Our results show that GameTalk significantly outperforms untrained models, especially under reward shaping, with DPO consistently yielding the strongest gains. These findings position conversational fine-tuning as a promising path for LLMs to reason, negotiate, and act in interactive environments.
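To make the training signal concrete, the following is a schematic formulation of a conversation-level objective (our illustration; the notation $\tau$, $R$, and $\pi_\theta$ is assumed rather than taken from the paper): a full dialogue is sampled from the policy and rewarded only as a whole,
\[
J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\big[R(\tau)\big],
\qquad
\tau = (x_1, y_1, \ldots, x_T, y_T),
\]
where $y_t \sim \pi_\theta(\cdot \mid x_{\le t}, y_{<t})$ is the model's utterance at turn $t$ and the return $R(\tau)$ depends on the entire interaction, in contrast to per-turn objectives that score each $y_t$ in isolation.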