The practice of fine-tuning Large Language Models (LLMs) has achieved state-of-the-art performance on specialized tasks, yet diagnosing why these models become brittle and fail to generalize remains a critical open problem. To address this, we introduce a multi-layered diagnostic framework and apply it in a cross-architectural study. We fine-tune Llama 3.1 8B, Gemma 2 9B, and Mistral models on a high-stakes phishing-detection task and use SHAP analysis and mechanistic interpretability to uncover the root causes of their generalization failures. Our investigation yields three critical findings: (1) Generalization is driven by a powerful synergy between architecture and data diversity: the Gemma 2 9B model achieves state-of-the-art performance (>91\% F1), but only when trained on a stylistically diverse ``generalist'' dataset. (2) Generalization is highly architecture-dependent: we diagnose a specific failure mode in Llama 3.1 8B, which performs well on a narrow domain but cannot integrate diverse data, leading to a significant performance drop. (3) Some architectures are inherently more generalizable: the Mistral model is a consistent and resilient performer across multiple training paradigms. By pinpointing the flawed heuristics responsible for these failures, our work provides a concrete methodology for diagnosing and understanding generalization failures, underscoring that reliable AI requires deep validation of the interplay among architecture, data, and training strategy.