Large language models (LLMs) can be adapted to new tasks with parameter-efficient fine-tuning (PEFT) methods that modify only a small number of trainable parameters, often through low-rank updates. In this work, we adopt a quantum-information-inspired perspective to understand why such methods are effective. From this perspective, low-rank parameterizations naturally correspond to low-dimensional Matrix Product State (MPS) representations, which enable entanglement-based characterizations of parameter structure. We accordingly define and measure "Artificial Entanglement", the entanglement entropy of the parameters of artificial neural networks (here, LLMs). We first study the representative low-rank adaptation (LoRA) PEFT method alongside full fine-tuning (FFT), using LLaMA models at the 1B and 8B scales trained on the Tulu3 and OpenThoughts3 datasets, and find that: (i) the internal artificial entanglement in the updates of the query and value projection matrices under LoRA follows a volume law with a central suppression (termed the "Entanglement Valley"), is sensitive to hyperparameters, and is distinct from that under FFT; (ii) the external artificial entanglement in the attention matrices, corresponding to token-token correlations in representation space, follows an area law with logarithmic corrections and remains robust to LoRA hyperparameters and training steps. Drawing a parallel to the No-Hair Theorem in black hole physics, we propose that although LoRA and FFT induce distinct internal entanglement signatures, these differences do not manifest in the attention outputs, suggesting a "no-hair" property that underlies the effectiveness of low-rank updates. We further provide theoretical support based on random matrix theory and extend our analysis to an MPS Adaptation PEFT method, which exhibits qualitatively similar behavior.
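To make the notion of parameter entanglement entropy concrete, the following is a minimal sketch (not taken from the paper) of one common convention: treat a weight-update matrix as an unnormalized bipartite state, take its singular values as Schmidt coefficients, and compute the von Neumann entropy of the squared, normalized spectrum. The specific bipartition, normalization, and MPS reshaping used for the paper's internal/external entanglement measures may differ; the matrix dimensions, rank, and function name below are illustrative assumptions.

```python
import numpy as np

def artificial_entanglement(delta_w: np.ndarray) -> float:
    """Entanglement entropy of a weight (update) matrix, treating it as a
    bipartite state whose Schmidt coefficients are its singular values.
    NOTE: an assumed convention for illustration, not the paper's exact recipe."""
    s = np.linalg.svd(delta_w, compute_uv=False)
    p = s**2 / np.sum(s**2)      # squared, normalized singular values = Schmidt spectrum
    p = p[p > 0]                 # drop exact zeros to avoid log(0)
    return float(-np.sum(p * np.log(p)))

# Hypothetical rank-r LoRA-style update B @ A for a d x d projection matrix.
d, r = 512, 16
rng = np.random.default_rng(0)
B = rng.normal(size=(d, r))
A = rng.normal(size=(r, d))
print(artificial_entanglement(B @ A))   # bounded above by log(r) for a rank-r update
```

Under this convention, a rank-r update has at most r nonzero Schmidt coefficients, so its entanglement entropy is bounded by log(r), which is one way to see why low-rank parameterizations correspond to low-entanglement (low-dimensional MPS) representations.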