Although state-of-the-art LLMs can solve math problems, we find that they make errors on numerical comparisons involving mixed notation: "Which is larger, $5.7 \times 10^2$ or $580$?" This raises a fundamental question: do LLMs even know how big these numbers are? We probe the hidden states of several smaller open-source LLMs and find that a single linear projection of an appropriate hidden layer encodes the log-magnitudes of both kinds of numerals, allowing us to recover the numbers with a relative error of about 2.3% (on restricted synthetic text) or 19.06% (on scientific papers). Furthermore, the hidden state after reading a pair of numerals encodes their ranking: a linear classifier achieves over 90% accuracy. Yet, surprisingly, when explicitly asked to rank the same pairs of numerals, these LLMs achieve only 50-70% accuracy, with worse performance for models whose probes are less effective. Finally, we show that incorporating the classifier probe's log-loss as an auxiliary objective during finetuning yields an additional 3.22% improvement in verbalized accuracy over the base models, demonstrating that improving models' internal magnitude representations can enhance their numerical reasoning capabilities. Our code is available at https://github.com/VCY019/Numeracy-Probing.