Recent advances in LLMs have reignited scientific debate over whether embodiment is necessary for intelligence. We present the argument that intelligence requires grounding, a phenomenon entailed by embodiment, but not embodiment itself. We define intelligence as the possession of four properties -- motivation, predictive ability, understanding of causality, and learning from experience -- and argue that each can be achieved by a non-embodied, grounded agent. From this we conclude that grounding, not embodiment, is necessary for intelligence. We then present a thought experiment of an intelligent LLM agent in a digital environment and address potential counterarguments.