We propose a novel LLM-based framework for reasoning in discrete, game-theoretic tasks, illustrated with \emph{Tic-Tac-Toe}. The method integrates in-context learning with entropy-guided chain-of-thought (CoT) reasoning and adaptive context retrieval. The model dynamically adjusts both the number of retrieved examples and the number of reasoning paths according to token-level uncertainty: when uncertainty is low, it uses concise reasoning with minimal context; when uncertainty is high, it triggers expanded multi-path CoT exploration. Experimental evaluation against a suboptimal algorithmic opponent shows that entropy-aware adaptive reasoning substantially improves decision quality: over 100 games scored as win = +1, tie = 0, loss = -1, the average outcome rises from \(-11.6\%\) for the baseline LLM to \(+9.5\%\) with entropy-guided adaptive reasoning, while the number of LLM queries per game remains relatively low. Statistical validation confirms that the improvement is significant, and correlation analysis reveals a negative association between token-level entropy and move optimality. These findings demonstrate that uncertainty-guided adaptive reasoning effectively enhances LLM performance in sequential decision-making environments.
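The core control loop described above — mapping token-level uncertainty to a reasoning budget — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the entropy thresholds (`low`, `high`) and the budget ranges for CoT paths and retrieved examples are hypothetical placeholders, and the actual hyperparameters are not given in the abstract.

```python
import math


def token_entropy(probs):
    """Shannon entropy (in nats) of one token's probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def adaptive_budget(token_dists, low=0.5, high=1.5,
                    min_paths=1, max_paths=5,
                    min_examples=1, max_examples=8):
    """Map mean token-level entropy to a reasoning budget.

    Low uncertainty  -> concise single-path CoT with minimal retrieved context.
    High uncertainty -> more retrieved examples and multi-path CoT exploration.
    Thresholds and ranges are illustrative assumptions, not the paper's values.
    """
    h = sum(token_entropy(d) for d in token_dists) / len(token_dists)
    if h <= low:
        return min_paths, min_examples
    if h >= high:
        return max_paths, max_examples
    frac = (h - low) / (high - low)  # linear interpolation between extremes
    paths = min_paths + round(frac * (max_paths - min_paths))
    examples = min_examples + round(frac * (max_examples - min_examples))
    return paths, examples
```

A confident next-token distribution (mass concentrated on one move) yields the minimal budget, while a near-uniform distribution over the nine Tic-Tac-Toe cells yields the maximal one.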