In recent years, multimodal large language models (MLLMs) have advanced significantly, integrating more modalities into diverse applications. However, their lack of explainability remains a major barrier to deployment in scenarios requiring decision transparency. Current neuron-level explanation paradigms mainly focus on knowledge localization or language- and domain-specific analyses, leaving multimodality largely unexplored. To tackle these challenges, we propose MINER, a transferable framework for mining modality-specific neurons (MSNs) in MLLMs, which comprises four stages: (1) modality separation, (2) importance score calculation, (3) importance score aggregation, and (4) modality-specific neuron selection. Extensive experiments across six benchmarks and two representative MLLMs show that (I) deactivating only 2% of MSNs significantly reduces MLLM performance (from 0.56 to 0.24 for Qwen2-VL, and from 0.69 to 0.31 for Qwen2-Audio), (II) different modalities converge mainly in the lower layers, (III) MSNs influence how key information from different modalities converges to the last token, and (IV) two intriguing phenomena emerge that warrant further investigation, which we term semantic probing and semantic telomeres. The source code is available at this URL.