We present a systematic study of medical-domain interpretability in Large Language Models (LLMs). We study how LLMs both represent and process medical knowledge through four interpretability techniques: (1) UMAP projections of intermediate activations, (2) gradient-based saliency with respect to the model weights, (3) layer lesioning/removal, and (4) activation patching. We present knowledge maps of five LLMs that show, at coarse resolution, where knowledge about patient ages, medical symptoms, diseases, and drugs is stored in the models. For Llama3.3-70B in particular, we find that most medical knowledge is processed in the first half of the model's layers. In addition, we find several interesting phenomena: (i) age is often encoded in a non-linear and sometimes discontinuous manner at intermediate layers, (ii) the representation of disease progression is non-monotonic and circular at certain layers, (iii) in Llama3.3-70B, drugs cluster by medical specialty rather than by mechanism of action, and (iv) Gemma3-27B and MedGemma-27B have activations that collapse at intermediate layers but recover by the final layers. These results can guide future research on fine-tuning, unlearning, or de-biasing LLMs for medical tasks by suggesting at which layers these techniques should be applied.
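Of the four techniques, activation patching is the least self-explanatory: cache activations from a "clean" run, then splice them into a run on a "corrupted" input and measure how much of the clean output is restored. The following is a minimal sketch of that idea on a toy two-layer numpy model (the model, shapes, and inputs are illustrative assumptions, not the paper's actual setup or models):

```python
import numpy as np

# Toy stand-in for a transformer: a two-layer MLP whose layer-1
# activations we can cache and overwrite. This is only a sketch of
# the activation-patching idea, not the paper's experimental setup.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x, patch_layer1=None):
    """Run the toy model; optionally overwrite the layer-1 activation."""
    h1 = np.tanh(x @ W1)
    if patch_layer1 is not None:
        h1 = patch_layer1  # splice in the cached activation
    out = h1 @ W2
    return h1, out

x_clean = np.ones(4)       # "clean" prompt stand-in
x_corrupt = -np.ones(4)    # "corrupted" prompt stand-in

h1_clean, out_clean = forward(x_clean)
_, out_corrupt = forward(x_corrupt)

# Patch the cached clean activation into the corrupted run.
_, out_patched = forward(x_corrupt, patch_layer1=h1_clean)

# Here patching layer 1 fully restores the clean output, because the
# output depends only on h1; in a real LLM the degree of restoration
# per layer is the signal used to localize where knowledge is processed.
assert np.allclose(out_patched, out_clean)
```

In a real model, the same loop runs once per layer (and often per token position), and the fraction of the clean behavior recovered at each layer yields the kind of coarse knowledge map described above.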