Contrast medium plays a pivotal role in radiological imaging, as it amplifies lesion conspicuity and improves detection in the diagnosis of tumor-related diseases. However, depending on the patient's health condition or the medical resources available, the use of contrast medium is not always feasible. Recent work has explored AI-based image translation to synthesize contrast-enhanced images directly from non-contrast scans, aiming to reduce side effects and streamline clinical workflows. Progress in this direction has been constrained by data limitations: (1) existing public datasets focus almost exclusively on brain-related paired MR modalities; (2) other collections include partially paired data but suffer from missing modalities/timestamps and imperfect spatial alignment; (3) explicit labeling of CT vs. CTC or DCE phases is often absent; (4) substantial resources remain private. To bridge this gap, we introduce the first public, fully paired, pan-cancer medical imaging dataset spanning 11 human organs. The MR data include complete dynamic contrast-enhanced (DCE) sequences covering all three phases (DCE1-DCE3), while the CT data provide paired non-contrast and contrast-enhanced (CTC) acquisitions. The dataset is curated for anatomical correspondence, enabling rigorous evaluation of 1-to-1, N-to-1, and N-to-N translation settings (e.g., predicting DCE phases from non-contrast inputs). Built upon this resource, we establish a comprehensive benchmark and report results from representative baselines of contemporary image-to-image translation. We release the dataset and benchmark to catalyze research on safe, effective contrast synthesis, with direct relevance to multi-organ oncology imaging workflows. Our code and dataset are publicly available at https://github.com/YifanChen02/PMPBench.