Large language models excel across diverse domains, yet their deployment in healthcare, legal systems, and autonomous decision-making remains limited by incomplete understanding of their internal mechanisms. As these models are integrated into high-stakes systems, understanding how they encode capabilities has become fundamental to interpretability research. Traditional approaches identify important modules through gradient attribution or activation analysis, assuming that specific capabilities map to specific components. However, this oversimplifies neural computation: modules may contribute to multiple capabilities simultaneously, while a single capability may be distributed across multiple modules. Such coarse-grained analyses fail to capture fine-grained, distributed capability encoding. We present SCALPEL (Selective Capability Ablation via Low-rank Parameter Editing for Large language models), a framework that represents capabilities as low-rank parameter subspaces rather than discrete modules. Our key insight is that a capability can be characterized by low-rank modifications distributed across layers and modules, enabling its precise removal without affecting others. By training LoRA adapters to reduce the model's ability to distinguish correct from incorrect answers while preserving general language modeling quality, SCALPEL identifies the low-rank representations responsible for a particular capability while keeping them disentangled from others. Experiments across diverse capability tasks and linguistic tasks from BLiMP demonstrate that SCALPEL successfully removes target capabilities while preserving general ones, providing fine-grained insight into how capabilities are distributed across parameter space. Our results reveal that capabilities exhibit low-rank structure and can be selectively ablated through targeted parameter-space interventions, offering a more nuanced understanding of capability encoding in LLMs.
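The training objective described above can be illustrated with a minimal sketch. This is an assumed form of the SCALPEL loss, not the paper's exact implementation: a hinge term drives the adapted model to stop preferring correct over incorrect answers, while a KL term anchors it to the frozen base model on general text. A toy linear head over fixed features stands in for a full LLM, and the hand-rolled `LoRALinear` wrapper, token ids, and loss weight are all illustrative choices.

```python
# Sketch of a SCALPEL-style ablation objective (assumed form; illustrative only).
# A trainable low-rank (LoRA) update is added to a frozen layer and optimized to
# remove one "capability" (preferring token `correct` over `incorrect`) while a
# KL penalty preserves behavior on general inputs.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

class LoRALinear(torch.nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, base: torch.nn.Linear, rank: int = 2, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = torch.nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: starts as identity edit
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Toy stand-in for an LLM output head: 16-dim hidden states, 10-token vocabulary.
base_head = torch.nn.Linear(16, 10)
head = LoRALinear(base_head, rank=2)
opt = torch.optim.Adam([head.A, head.B], lr=1e-2)

feats = torch.randn(32, 16)      # hidden states for capability-probing pairs (illustrative)
correct, incorrect = 3, 7        # token ids of the correct / incorrect answers (illustrative)
general = torch.randn(32, 16)    # hidden states for general text

with torch.no_grad():
    logp0 = F.log_softmax(head(feats), dim=-1)
    init_margin = F.relu(logp0[:, correct] - logp0[:, incorrect]).mean().item()

for _ in range(200):
    logp = F.log_softmax(head(feats), dim=-1)
    # Ablation term: hinge on the log-prob margin; zero once the capability is gone.
    ablate = F.relu(logp[:, correct] - logp[:, incorrect]).mean()
    # Preservation term: stay close to the frozen base model on general text.
    with torch.no_grad():
        base_logp = F.log_softmax(base_head(general), dim=-1)
    keep = F.kl_div(F.log_softmax(head(general), dim=-1), base_logp,
                    log_target=True, reduction="batchmean")
    loss = ablate + 0.5 * keep
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    logp = F.log_softmax(head(feats), dim=-1)
    final_margin = F.relu(logp[:, correct] - logp[:, incorrect]).mean().item()
```

After training, the low-rank factors `A` and `B` are the candidate parameter subspace for the ablated capability; in the full method such adapters would be attached across many layers and modules, so the recovered subspace is distributed rather than tied to one component.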