Large language models (LLMs) have shown limitations in tasks requiring complex logical reasoning and multi-step problem-solving. To address these challenges, researchers have designed careful prompting strategies and flowcharts, such as the Chain of Thought approach, that simulate human cognitive processes to improve LLM performance. In this paper, we introduce MTMT (Multi-thinking Modes Tree), a novel method that interacts with LLMs to construct a thought tree, simulating various advanced cognitive processes, including but not limited to association, counterfactual thinking, task decomposition, and comparison. By breaking the original complex task down into simpler sub-questions, MTMT makes problems easier for LLMs to solve, enabling more effective use of the latent knowledge within LLMs. We evaluate the performance of MTMT under different parameter configurations, using GPT-4o mini as the base model. Our results demonstrate that integrating multiple modes of thinking significantly enhances the ability of LLMs to handle complex tasks.
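The tree construction described in the abstract can be sketched at a high level as follows. This is a minimal illustration only, with a stubbed stand-in for the LLM call; all names (`ThoughtNode`, `build_tree`, the mode list) are hypothetical and not taken from the paper:

```python
from dataclasses import dataclass, field

# Thinking modes named in the abstract (the paper notes the set is not limited to these).
MODES = ["association", "counterfactual", "decomposition", "comparison"]

@dataclass
class ThoughtNode:
    question: str
    mode: str = "root"
    children: list = field(default_factory=list)

def stub_llm_decompose(question: str, mode: str) -> str:
    """Stand-in for an LLM call that proposes a simpler sub-question
    under a given thinking mode (a real system would query the model)."""
    return f"[{mode}] sub-question of: {question}"

def build_tree(question: str, depth: int = 1) -> ThoughtNode:
    """Expand a question into a thought tree, one child per thinking mode."""
    node = ThoughtNode(question)
    if depth > 0:
        for mode in MODES:
            sub = stub_llm_decompose(question, mode)
            child = build_tree(sub, depth - 1)
            child.mode = mode
            node.children.append(child)
    return node

tree = build_tree("Why does ice float on water?")
print(len(tree.children))  # one child per thinking mode -> 4
```

In an actual MTMT-style system, the stub would be replaced by model queries, and the leaf answers would be aggregated back up the tree to answer the original question.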