Brain transcriptomics provides insights into the molecular mechanisms by which the brain coordinates its functions and processes. However, existing multimodal methods for predicting Alzheimer's disease (AD) rely primarily on imaging and sometimes genetic data, often neglecting the transcriptomic basis of the brain. Furthermore, while striving to integrate complementary information across modalities, most studies overlook the disparities in informativeness between modalities. Here, we propose TMM, a trusted multiview multimodal graph attention framework for AD diagnosis that uses extensive brain-wide transcriptomics and imaging data. First, we construct view-specific brain regional co-function networks (RRIs) from transcriptomics and multimodal radiomics data to incorporate interaction information from both biomolecular and imaging perspectives. Next, we apply graph attention (GAT) processing to each RRI network to produce graph embeddings and employ cross-modal attention to fuse the transcriptomics-derived embedding with each imaging-derived embedding. Finally, a novel true-false-harmonized class probability (TFCP) strategy is designed to assess and adaptively adjust the prediction confidence of each modality for AD diagnosis. We evaluate TMM using the AHBA database, which provides brain-wide transcriptomics data, and the ADNI database, which provides three imaging modalities (AV45-PET, FDG-PET, and VBM-MRI). The results demonstrate the superiority of our method over state-of-the-art approaches in identifying AD, EMCI, and LMCI. Code and data are available at https://github.com/Yaolab-fantastic/TMM.
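The cross-modal attention fusion step described above can be illustrated with a minimal sketch. This is not the authors' implementation (which is available at the repository linked above); it is a toy scaled dot-product cross-attention in pure Python, where a transcriptomics-derived embedding serves as the query and imaging-derived embeddings serve as keys and values. The function name `cross_modal_attention` and all vector shapes are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_modal_attention(query, keys, values):
    """Toy scaled dot-product cross-attention (illustrative only).

    query  : transcriptomics-derived embedding, a list of d floats
    keys   : imaging-derived embeddings, n lists of d floats
    values : imaging-derived embeddings to aggregate, n lists of d floats
    Returns the fused embedding and the attention weights.
    """
    d = len(query)
    # Similarity of the transcriptomics query to each imaging key,
    # scaled by sqrt(d) as in standard dot-product attention.
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Attention-weighted combination of the imaging value vectors.
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return fused, weights
```

In the full framework, one such fusion would be computed per imaging modality (AV45-PET, FDG-PET, VBM-MRI), and learned projection matrices would map each modality's embedding into a shared space before the dot products; those projections are omitted here for brevity.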