Mixture-of-Experts (MoE) architectures have significantly advanced scalable machine learning by enabling specialized subnetworks to tackle complex tasks efficiently. However, traditional MoE systems lack the domain-specific constraints essential for medical imaging, where anatomical structure and regional disease heterogeneity strongly influence pathological patterns. Here, we introduce \textit{Regional Expert Networks (REN)}, the first anatomically informed MoE framework tailored specifically for medical image classification. REN leverages anatomical priors to train seven specialized experts, each dedicated to a distinct lung lobe or bilateral lung combination, enabling precise modeling of region-specific pathological variation. A multi-modal gating mechanism dynamically integrates radiomics biomarkers and deep learning (DL) features (CNN, ViT, Mamba) to optimally weight expert contributions. Applied to interstitial lung disease (ILD) classification, REN achieves consistently superior performance: the radiomics-guided ensemble reached an average AUC of $0.8646 \pm 0.0467$, a 12.5\% improvement over the SwinUNETR baseline (AUC 0.7685, $p = 0.031$). Region-specific experts further revealed that lower-lobe models achieved AUCs of 0.88--0.90, surpassing their DL counterparts (CNN: 0.76--0.79) and aligning with known disease-progression patterns. Through rigorous patient-level cross-validation, REN demonstrates strong generalizability and clinical interpretability, offering a scalable, anatomically guided approach that extends readily to other structured medical imaging applications. Code is available at our GitHub repository: https://github.com/NUBagciLab/MoE-REN.
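To make the gating idea concrete, below is a minimal PyTorch sketch of how a soft gating network might combine fused radiomics and DL features to weight the outputs of seven regional experts. All names and dimensions here (\texttt{RegionalGate}, \texttt{FEATURE\_DIM}, \texttt{moe\_prediction}) are hypothetical illustrations under the assumptions stated in the comments, not the released MoE-REN implementation.

\begin{verbatim}
# Minimal sketch of multi-modal gated mixture of regional experts.
# Assumption: each expert emits a scalar ILD logit, and radiomics +
# CNN/ViT/Mamba features have already been fused into one vector.
# Names and sizes are hypothetical, not the MoE-REN codebase.
import torch
import torch.nn as nn

NUM_REGIONS = 7    # five lobes + two bilateral combinations (per the abstract)
FEATURE_DIM = 256  # assumed size of the fused radiomics + DL feature vector

class RegionalGate(nn.Module):
    """Soft gating network: maps fused features to per-expert weights."""
    def __init__(self, feature_dim: int, num_experts: int):
        super().__init__()
        self.proj = nn.Linear(feature_dim, num_experts)

    def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
        # Softmax yields a convex combination over the regional experts.
        return torch.softmax(self.proj(fused_features), dim=-1)

def moe_prediction(expert_logits: torch.Tensor,
                   gate_weights: torch.Tensor) -> torch.Tensor:
    # expert_logits, gate_weights: both shaped (batch, num_experts).
    # Weighted sum collapses the expert axis into one ensemble logit.
    return (gate_weights * expert_logits).sum(dim=-1)

gate = RegionalGate(FEATURE_DIM, NUM_REGIONS)
fused = torch.randn(4, FEATURE_DIM)      # toy batch of fused features
logits = torch.randn(4, NUM_REGIONS)     # toy per-expert ILD logits
print(moe_prediction(logits, gate(fused)).shape)  # torch.Size([4])
\end{verbatim}

A softmax gate of this form has the side benefit that the per-region weights are directly inspectable, which is one way the anatomical interpretability described above could surface in practice.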