Reasoning has long been understood as a pathway between stages of understanding: proper reasoning leads to understanding of a given subject. This process was traditionally conceptualized as proceeding in a particular way, namely as "symbolic reasoning". Foundation Models (FMs) demonstrate that symbolic manipulation is not a necessary condition for many reasoning tasks: they can "reason" by imitating the process of "thinking out loud", testing the pathways they produce, and iterating on them autonomously. The result is a form of reasoning that can solve problems on its own or with few-shot learning, yet it appears fundamentally different from human reasoning: its lack of grounding and common sense renders the reasoning process brittle. These insights promise to substantially alter our assessment of reasoning and its necessary conditions, and they also inform approaches to safety and to robust defences against this brittleness of FMs. This paper offers and discusses several philosophical interpretations of the phenomenon, argues that the previously apt metaphor of the "stochastic parrot" has lost its relevance and should therefore be abandoned, and reflects on the normative elements of the safety and appropriateness considerations that arise from these reasoning models and their growing capabilities.