Large Language Models (LLMs) are highly proficient in language-based tasks, and this proficiency has positioned them at the forefront of the race toward Artificial General Intelligence (AGI). On closer inspection, however, Valmeekam et al. (2024), Zecevic et al. (2023), and Wu et al. (2024) highlight a significant gap between their language proficiency and their reasoning abilities. Reasoning in LLMs and Vision Language Models (VLMs) aims to bridge this gap by enabling these models to think through and re-evaluate their actions and responses. Reasoning is an essential capability for complex problem-solving and a necessary step toward establishing trust in Artificial Intelligence (AI), which would make AI suitable for deployment in sensitive domains such as healthcare, banking, law, defense, and security. Recently, with the advent of powerful reasoning models such as OpenAI O1 and DeepSeek R1, endowing LLMs with reasoning capabilities has become a critical research topic. In this paper, we provide a detailed overview and comparison of existing reasoning techniques and present a systematic survey of reasoning-imbued language models. We also examine current challenges and present our findings.