Predicting clinical outcomes from brain networks in large-scale neuroimaging cohorts such as the Adolescent Brain Cognitive Development (ABCD) study requires effectively integrating functional connectivity (FC) and structural connectivity (SC) while incorporating expert neurobiological knowledge. However, existing multimodal fusion approaches are either shallow or over-homogenize the inherently heterogeneous characteristics of FC and SC, while expert-defined anatomical priors remain underutilized, typically incorporated only through static integration. To address these limitations, we propose Brain Transformer with Adaptive Mutual-Distill and Selective Prior Fusion (BrainTAP). We introduce Adaptive Mutual-Distill (AMD), which enables layer-wise information exchange between modalities through learnable distill-intact ratios, preserving modality-specific signals while capturing cross-modal synergies. We further develop Selective Prior Fusion (SPF), which integrates expert-defined anatomical priors in an adaptive manner. Evaluated on the ABCD dataset for predicting attention-related disorders, BrainTAP achieves superior performance over state-of-the-art baselines, demonstrating its effectiveness for brain disorder prediction.
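The distill-intact exchange described above can be illustrated with a minimal sketch. This is a hypothetical interpretation, not the paper's actual implementation: each modality's features at a given layer keep an "intact" share of themselves and distill the remainder from the other modality, with the mixing ratio learned per modality (here passed as a free parameter and squashed through a sigmoid). The function name, signature, and use of NumPy are all illustrative assumptions.

```python
import numpy as np

def adaptive_mutual_distill(fc, sc, ratio_fc, ratio_sc):
    """One hypothetical AMD-style exchange step between FC and SC features.

    fc, sc: feature arrays of matching shape (e.g. nodes x channels).
    ratio_fc, ratio_sc: learnable scalars; sigmoid maps them into (0, 1)
    so each modality keeps an 'intact' fraction of its own features and
    distills the complementary fraction from the other modality.
    """
    r_fc = 1.0 / (1.0 + np.exp(-ratio_fc))  # intact share for FC
    r_sc = 1.0 / (1.0 + np.exp(-ratio_sc))  # intact share for SC
    fc_out = r_fc * fc + (1.0 - r_fc) * sc  # FC keeps itself + distills SC
    sc_out = r_sc * sc + (1.0 - r_sc) * fc  # SC keeps itself + distills FC
    return fc_out, sc_out

# With a raw ratio of 0, sigmoid gives 0.5, i.e. an even blend of modalities.
fc_out, sc_out = adaptive_mutual_distill(
    np.ones((4, 8)), np.zeros((4, 8)), ratio_fc=0.0, ratio_sc=0.0
)
```

In a full model the ratios would be trained per layer, letting shallow layers stay modality-specific while deeper layers exchange more information, which matches the abstract's stated goal of preserving modality-specific signals while capturing cross-modal synergies.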