Multimodal neuroimaging provides complementary insights for Alzheimer's disease diagnosis, yet clinical datasets frequently suffer from missing modalities. We propose ACADiff, a framework that synthesizes missing brain imaging modalities through adaptive clinical-aware diffusion. ACADiff learns mappings from incomplete multimodal observations to target modalities by progressively denoising latent representations while attending to the available imaging data and clinical metadata. The framework employs an adaptive fusion mechanism that dynamically reconfigures itself based on input availability, coupled with semantic clinical guidance from GPT-4o-encoded prompts. Three specialized generators enable bidirectional synthesis among sMRI, FDG-PET, and AV45-PET. Evaluated on ADNI subjects, ACADiff achieves superior generation quality and maintains robust diagnostic performance even under an extreme scenario with 80\% of modalities missing, outperforming existing baselines. To promote reproducibility, code is available at https://github.com/rongzhou7/ACADiff.
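The two core ideas named above, fusing conditioning signals only over the modalities that are actually present, and using that fused condition inside a diffusion-style reverse step, can be illustrated with a minimal toy sketch. This is not the ACADiff implementation: the fusion weighting, the linear stand-in for the learned noise predictor, and all variable names are assumptions made for illustration.

```python
import numpy as np

def adaptive_fusion(embeddings: np.ndarray, available: np.ndarray) -> np.ndarray:
    """Fuse per-modality embeddings (M, D) into one condition vector (D,),
    renormalizing weights over only the modalities flagged as available.
    Uniform weighting is a simplifying assumption, not the paper's scheme."""
    w = available.astype(float)
    if w.sum() == 0:
        raise ValueError("at least one modality must be available")
    w /= w.sum()
    return w @ embeddings

def reverse_step(x_t: np.ndarray, eps_hat: np.ndarray, a_bar_t: float) -> np.ndarray:
    """DDPM-style estimate of the clean latent x0 from the noisy latent x_t
    and a predicted noise eps_hat, given the cumulative alpha at step t."""
    return (x_t - np.sqrt(1.0 - a_bar_t) * eps_hat) / np.sqrt(a_bar_t)

# Toy run: 3 modalities (e.g. sMRI, FDG-PET, AV45-PET), with AV45-PET missing.
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 8))                 # one 8-d embedding per modality
cond = adaptive_fusion(emb, np.array([True, True, False]))
x_t = rng.normal(size=8)                      # noisy latent at some step t
eps_hat = 0.05 * cond                         # stand-in for a learned eps_theta(x_t, t, cond)
x0_hat = reverse_step(x_t, eps_hat, a_bar_t=0.9)
print(cond.shape, x0_hat.shape)               # (8,) (8,)
```

Because the weights are renormalized over available inputs, the fused condition here is exactly the mean of the two present embeddings; the missing modality contributes nothing, which is the behavior the abstract's "dynamically reconfigures based on input availability" describes.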