Recent work has shown that scaling large language models (LLMs) improves their alignment with human brain activity, yet it remains unclear what drives these gains and which representational properties are responsible. Although larger models often yield better task performance and brain alignment, they are increasingly difficult to analyze mechanistically. This raises a fundamental question: what is the minimal model capacity required to capture brain-relevant representations? To address this question, we systematically investigate how constraining model scale and numerical precision affects brain alignment. We compare full-precision LLMs, small language models (SLMs), and compressed variants (quantized and pruned) by predicting fMRI responses during naturalistic language comprehension. Across model families up to 14B parameters, we find that 3B SLMs achieve brain predictivity indistinguishable from larger LLMs, whereas 1B models degrade substantially, particularly in semantic language regions. Brain alignment is remarkably robust to compression: most quantization and pruning methods preserve neural predictivity, with GPTQ as a consistent exception. Linguistic probing reveals a dissociation between task performance and brain predictivity: compression degrades discourse, syntax, and morphology, yet brain predictivity remains largely unchanged. Overall, brain alignment saturates at modest model scales and is resilient to compression, challenging common assumptions about neural scaling and motivating compact models for brain-aligned language modeling.
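The brain-predictivity comparison described above is typically implemented as a voxelwise encoding model: a regularized linear map from a model's hidden states to fMRI responses, evaluated by held-out correlation per voxel. The sketch below illustrates that pipeline under stated assumptions; it uses synthetic arrays as stand-ins for real LLM activations and fMRI recordings, and closed-form ridge regression in place of whatever cross-validated estimator the study actually used.

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    # Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y,
    # mapping feature vectors (model activations) to voxel responses.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def voxelwise_correlation(Y_true, Y_pred):
    # Pearson r between observed and predicted responses, per voxel.
    yt = Y_true - Y_true.mean(axis=0)
    yp = Y_pred - Y_pred.mean(axis=0)
    num = (yt * yp).sum(axis=0)
    den = np.sqrt((yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0))
    return num / den

rng = np.random.default_rng(0)
n_train, n_test, d, n_voxels = 400, 100, 64, 10

# Synthetic stand-ins: X would be layer activations aligned to the
# stimulus timeline; Y would be preprocessed fMRI voxel time courses.
W_true = rng.normal(size=(d, n_voxels))
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
Y_train = X_train @ W_true + rng.normal(scale=2.0, size=(n_train, n_voxels))
Y_test = X_test @ W_true + rng.normal(scale=2.0, size=(n_test, n_voxels))

# Fit on training runs, score brain predictivity on held-out runs.
W = fit_ridge(X_train, Y_train, alpha=10.0)
r = voxelwise_correlation(Y_test, X_test @ W)
print(f"mean held-out voxel correlation: {r.mean():.3f}")
```

Comparing models of different scales or compression levels then amounts to swapping the feature matrix `X` (e.g., a 3B SLM's activations versus a 14B LLM's, or a quantized model's) and comparing the resulting held-out correlations.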