Deep learning has significantly advanced automated brain tumor diagnosis, yet clinical adoption remains limited by interpretability and computational constraints. Conventional models often act as opaque "black boxes" and fail to quantify the complex, irregular tumor boundaries that characterize malignant growth. To address these challenges, we present XMorph, an explainable and computationally efficient framework for fine-grained classification of three prominent brain tumor types: glioma, meningioma, and pituitary tumors. We propose an Information-Weighted Boundary Normalization (IWBN) mechanism that emphasizes diagnostically relevant boundary regions alongside nonlinear chaotic and clinically validated features, enabling a richer morphological representation of tumor growth. A dual-channel explainable AI module combines Grad-CAM++ visual cues with LLM-generated textual rationales, translating model reasoning into clinically interpretable insights. The proposed framework achieves a classification accuracy of 96.0%, demonstrating that explainability and high performance can coexist in AI-based medical imaging systems. The source code and materials for XMorph are publicly available at: https://github.com/ALSER-Lab/XMorph.