Brain transcriptomics provides insights into the molecular mechanisms by which the brain coordinates its functions and processes. However, existing multimodal methods for predicting Alzheimer's disease (AD) rely primarily on imaging and sometimes genetic data, often neglecting the transcriptomic basis of the brain. Furthermore, while striving to integrate complementary information across modalities, most studies overlook the disparities in informativeness between them. Here, we propose TMM, a trusted multiview multimodal graph attention framework for AD diagnosis that uses extensive brain-wide transcriptomics and imaging data. First, we construct view-specific region-region interaction (RRI) networks, which capture brain regional co-function, from transcriptomics and multimodal radiomics data to incorporate interaction information from both biomolecular and imaging perspectives. Next, we apply a graph attention network (GAT) to each RRI network to produce graph embeddings and employ cross-modal attention to fuse the transcriptomics-derived embedding with each imaging-derived embedding. Finally, a novel true-false-harmonized class probability (TFCP) strategy is designed to assess and adaptively adjust the prediction confidence of each modality for AD diagnosis. We evaluate TMM using the Allen Human Brain Atlas (AHBA) database for brain-wide transcriptomics data and the ADNI database for three imaging modalities (AV45-PET, FDG-PET, and VBM-MRI). The results demonstrate the superiority of our method in identifying AD, early mild cognitive impairment (EMCI), and late mild cognitive impairment (LMCI) compared with state-of-the-art methods. Code and data are available at https://github.com/Yaolab-fantastic/TMM.
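Since the abstract compresses three technical steps (graph attention over each RRI network, cross-modal attention fusion, and TFCP confidence weighting), a minimal PyTorch sketch may help fix ideas. All class and function names here are illustrative assumptions, not the authors' released code, and the entropy-based confidence proxy in `confidence_weighted_fusion` merely stands in for the paper's TFCP strategy:

```python
# Minimal sketch of the attend-then-weight pipeline described above.
# Names (GraphAttentionLayer, CrossModalAttention, etc.) are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head GAT-style attention over one RRI network."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) RRI adjacency,
        # assumed to include self-loops so every row has an edge.
        h = self.W(x)                                    # (N, out_dim)
        N = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                           h.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))      # (N, N) raw scores
        e = e.masked_fill(adj == 0, float('-inf'))       # keep RRI edges only
        alpha = torch.softmax(e, dim=-1)                 # attention weights
        return alpha @ h                                 # region embeddings

class CrossModalAttention(nn.Module):
    """Fuse the transcriptomics embedding with one imaging embedding."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, img_emb, tx_emb):
        # Queries come from imaging, keys/values from transcriptomics,
        # so each imaging region embedding gains molecular context.
        fused, _ = self.attn(img_emb, tx_emb, tx_emb)
        return fused

def confidence_weighted_fusion(logits_per_modality):
    # TFCP-like step (assumption): weight each modality's class
    # probabilities by a per-sample confidence score; here we use
    # 1 minus the normalized softmax entropy as a simple proxy.
    probs = [F.softmax(l, dim=-1) for l in logits_per_modality]
    num_classes = probs[0].size(-1)
    conf = torch.stack([
        1.0 - (-(p * p.clamp_min(1e-8).log()).sum(-1)
               / torch.log(torch.tensor(float(num_classes))))
        for p in probs
    ])                                                   # (M, B)
    w = torch.softmax(conf, dim=0).unsqueeze(-1)         # modality weights
    return (w * torch.stack(probs)).sum(dim=0)           # fused (B, C) probs
```

The actual TFCP strategy derives its confidence estimates from true versus false class probabilities rather than from entropy; the sketch is only meant to convey the overall structure of attending across modalities and then down-weighting the less informative ones.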