Iterative human engagement is a common and effective means of leveraging the advanced language processing power of large language models (LLMs). Using well-structured prompts in a conversational manner, human users can effectively influence an LLM to develop more thoughtful and accurate responses. Motivated by this insight, we propose the Iteration of Thought (IoT) framework for enhancing LLM responses by generating "thought"-provoking prompts vis-à-vis an input query and the current iteration of an LLM's response. Unlike static or semi-static approaches, e.g., Chain of Thought (CoT) or Tree of Thoughts (ToT), IoT adapts its reasoning path dynamically, based on evolving context, and without generating alternate explorative thoughts that are ultimately discarded. The three components of the IoT framework are (1) an Inner Dialogue Agent (IDA) responsible for generating instructive, context-specific prompts; (2) an LLM Agent (LLMA) that processes these prompts to refine its responses; and (3) an iterative prompting loop that implements a conversation between the former two components. We introduce two variants of our framework: Autonomous Iteration of Thought (AIoT), where an LLM decides when to stop iterating, and Guided Iteration of Thought (GIoT), which always enforces a fixed number of iterations. We investigate the performance of IoT across various datasets, spanning complex reasoning tasks from the GPQA dataset, explorative problem-solving in Game of 24, puzzle solving in Mini Crosswords, and multi-hop question answering from the HotpotQA dataset. Our results show that IoT represents a viable paradigm for autonomous response refinement in LLMs, showcasing significant improvements over CoT and thereby enabling more adaptive and efficient reasoning systems that minimize human intervention.
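The three-component loop described above can be sketched in pseudocode-like Python. This is a minimal conceptual sketch, not the paper's implementation: the function names (`inner_dialogue_agent`, `llm_agent`, `aiot`, `giot`) and the string-based stub agents are illustrative assumptions standing in for real LLM calls.

```python
# Conceptual sketch of the Iteration of Thought (IoT) loop. The two agents
# are stubbed with simple string operations; in practice each would be an
# LLM call. All names here are illustrative, not from the paper's code.

def inner_dialogue_agent(query: str, response: str) -> str:
    """IDA (stub): generate an instructive, context-specific prompt
    from the input query and the current iteration of the response."""
    return f"Given the query '{query}' and the draft '{response}', refine the answer."

def llm_agent(prompt: str, response: str) -> str:
    """LLMA (stub): process the IDA's prompt to refine the response."""
    return response + " [refined]"

def aiot(query: str, initial_response: str, is_final, max_iterations: int = 5) -> str:
    """Autonomous IoT: the loop stops as soon as the (stubbed) model-side
    judgment `is_final` deems the current response complete."""
    response = initial_response
    for _ in range(max_iterations):
        if is_final(response):  # LLM-decided stopping criterion
            break
        prompt = inner_dialogue_agent(query, response)
        response = llm_agent(prompt, response)
    return response

def giot(query: str, initial_response: str, num_iterations: int) -> str:
    """Guided IoT: always runs a fixed number of refinement iterations."""
    response = initial_response
    for _ in range(num_iterations):
        prompt = inner_dialogue_agent(query, response)
        response = llm_agent(prompt, response)
    return response
```

The only structural difference between the two variants is the stopping rule: AIoT delegates termination to the model's own judgment, while GIoT trades adaptivity for a predictable, fixed iteration budget.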