We argue that language-only models do not learn the physical manifestation of language. We present an empirical investigation of the visual-auditory properties of language through a series of tasks, termed H-Test. These tasks highlight a fundamental gap between human linguistic understanding and the sensory-deprived linguistic understanding of LLMs. In support of our hypothesis, we show that (1) deliberate reasoning (Chain-of-Thought), (2) few-shot examples, and (3) a stronger LLM from the same model family (LLaMA 2 13B -> LLaMA 2 70B) have no significant effect on H-Test performance. We bring in the philosophical case of Mary, who learns about the world in a sensory-deprived environment, as a useful conceptual framework for understanding how language-only models learn about the world (Jackson, 1986). Our experiments show that some of the strongest proprietary LLMs remain near the random-chance baseline accuracy of 50%, highlighting the limitations of linguistic knowledge acquired in the absence of sensory experience. Our code and data are available at <github.com/brucewlee/h-test>.