Chain of Thought (CoT) prompting was introduced in recent research as a method for improving step-by-step reasoning in Large Language Models (LLMs). However, CoT has limitations, such as its reliance on hand-crafted few-shot exemplar prompts and its inability to adapt to different queries. In this work, we propose a system that automatically generates rationales using CoT. Our method improves multi-step implicit reasoning by decomposing an implicit query into several explicit questions. This adds interpretability to the model and improves reasoning in weaker LLMs. We evaluate our approach on two Q&A datasets, StrategyQA and HotpotQA, and observe an increase in accuracy on both, especially on StrategyQA. To facilitate further research in this field, the complete source code for this study has been made publicly available on GitHub: https://github.com/miralab-ai/autoreason.
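The decomposition step described above can be sketched as a minimal two-stage pipeline: prompt an LLM to rewrite an implicit query as explicit sub-questions, then answer those sub-questions to build the rationale. Everything here is a hypothetical illustration, not the paper's actual implementation: the prompt wording, the `call_llm` function (stubbed with a canned reply so the sketch is self-contained), and the example question are all assumptions.

```python
# Hypothetical sketch of an AutoReason-style decomposition stage.
# `call_llm` is a stand-in for any LLM API call; here it returns a
# canned response so the example runs without network access.

DECOMPOSE_PROMPT = (
    "Decompose the following question into explicit sub-questions, "
    "one per line, numbered:\n"
    "Question: {question}\n"
    "Sub-questions:"
)

def call_llm(prompt: str) -> str:
    # Stub: replace with a real LLM client call in practice.
    return (
        "1. Did Aristotle live before laptops were invented?\n"
        "2. Can a person use a device invented after their death?"
    )

def decompose(question: str) -> list[str]:
    """Turn one implicit query into a list of explicit sub-questions."""
    raw = call_llm(DECOMPOSE_PROMPT.format(question=question))
    # Strip the leading enumeration ("1.", "2.", ...) from each line.
    return [
        line.split(".", 1)[1].strip()
        for line in raw.splitlines()
        if "." in line
    ]

subquestions = decompose("Could Aristotle have used a laptop?")
# Each sub-question would then be answered in turn, and the chained
# answers form the rationale fed to the final answering model.
```

In a real system the stubbed `call_llm` would be replaced by an API call, and a second prompt would answer each sub-question before composing the final answer.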