Despite advances in deep learning for estimating brain age from structural MRI, incorporating functional MRI remains challenging because of its complex structure and the noisy nature of functional connectivity measurements. To address this, we present the Multitask Adversarial Variational Autoencoder (M-AVAE), a deep learning framework designed to improve brain age prediction through multimodal MRI integration. The model factorizes the latent space into generic and unique codes, isolating features shared across modalities from modality-specific ones. By adding sex classification as an auxiliary task in a multitask learning setup, it also captures sex-specific aging patterns. Evaluated on OpenBHB, a large multisite brain MRI collection, the model achieves a mean absolute error of 2.77 years, outperforming traditional approaches. These results position M-AVAE as a powerful tool for metaverse-based healthcare applications in brain age estimation.
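The architecture described above can be sketched numerically: per-modality variational encoders each emit a generic code and a unique code, the codes are combined, and two task heads predict age and sex. The sketch below is a minimal NumPy toy, not the paper's implementation; all dimensions, linear encoders/heads, and the unweighted loss combination are assumptions for illustration (the adversarial term and decoder are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    # Toy linear VAE encoder: map features to latent mean / log-variance,
    # then sample via the reparameterization trick.
    mu = x @ w_mu
    logvar = x @ w_logvar
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    return z, mu, logvar

d_in, d_lat = 16, 4  # hypothetical feature / latent sizes

# Separate encoder weights per modality; each modality yields a
# generic (shared) code and a unique (modality-specific) code.
keys = ['smri_gen_mu', 'smri_gen_lv', 'smri_uni_mu', 'smri_uni_lv',
        'fmri_gen_mu', 'fmri_gen_lv', 'fmri_uni_mu', 'fmri_uni_lv']
w = {k: rng.standard_normal((d_in, d_lat)) * 0.1 for k in keys}

x_smri = rng.standard_normal((8, d_in))  # 8 subjects, toy sMRI features
x_fmri = rng.standard_normal((8, d_in))  # matching toy fMRI features

z_gen_s, mu_gs, lv_gs = encode(x_smri, w['smri_gen_mu'], w['smri_gen_lv'])
z_uni_s, _, _ = encode(x_smri, w['smri_uni_mu'], w['smri_uni_lv'])
z_gen_f, _, _ = encode(x_fmri, w['fmri_gen_mu'], w['fmri_gen_lv'])
z_uni_f, _, _ = encode(x_fmri, w['fmri_uni_mu'], w['fmri_uni_lv'])

# Combined representation: generic codes plus each modality's unique code.
z = np.concatenate([z_gen_s, z_gen_f, z_uni_s, z_uni_f], axis=1)

# Multitask heads (hypothetical linear heads): age regression + sex logits.
w_age = rng.standard_normal((z.shape[1], 1)) * 0.1
w_sex = rng.standard_normal((z.shape[1], 1)) * 0.1
age_pred = z @ w_age
sex_logit = z @ w_sex

age_true = rng.uniform(20, 80, size=(8, 1))
sex_true = rng.integers(0, 2, size=(8, 1)).astype(float)

mae = np.abs(age_pred - age_true).mean()                        # age task
bce = np.mean(np.logaddexp(0, sex_logit) - sex_true * sex_logit)  # sex task
kl = -0.5 * np.mean(1 + lv_gs - mu_gs**2 - np.exp(lv_gs))       # VAE term

# Illustrative unweighted multitask objective; real task weights would differ.
total = mae + bce + kl
print(float(total))
```

A trained model would jointly minimize such a combined objective, so gradients from the sex head shape the same latent codes used for age prediction, which is how the auxiliary task can inject sex-specific aging structure.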