Generating synthetic Computed Tomography (CT) images from Cone Beam Computed Tomography (CBCT) is desirable for improving the image quality of CBCT. Existing synthetic CT (sCT) generation methods using Convolutional Neural Networks (CNN) and Transformers often face difficulties in effectively capturing both global and local features and contrasts for high-quality sCT generation. In this work, we propose a Global-Local Feature and Contrast learning (GLFC) framework for sCT generation. First, a Mamba-Enhanced UNet (MEUNet) is introduced by integrating Mamba blocks into the skip connections of a high-resolution UNet for effective global and local feature learning. Second, we propose a Multiple Contrast Loss (MCL) that calculates synthetic loss at different intensity windows to improve quality for both soft tissues and bone regions. Experiments on the SynthRAD2023 dataset demonstrate that GLFC improved the SSIM of sCT from 77.91% to 91.50% compared with the original CBCT, and significantly outperformed several existing methods for sCT generation. The code is available at https://github.com/intelland/GLFC
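The Multiple Contrast Loss described above computes the synthesis loss after clipping both images to several intensity windows, so that errors in low-contrast soft tissue and high-contrast bone both contribute. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the window centers/widths and the use of an averaged L1 term are assumptions.

```python
# Hypothetical sketch of a multi-window (multiple contrast) synthesis loss.
# Each window clips intensities to [center - width/2, center + width/2],
# mimicking soft-tissue and bone display windows in Hounsfield units.

def apply_window(img, center, width):
    """Clip intensity values to the given window."""
    lo, hi = center - width / 2, center + width / 2
    return [min(max(v, lo), hi) for v in img]

def multi_window_l1(pred, target, windows):
    """Average per-window L1 loss between prediction and target."""
    total = 0.0
    for center, width in windows:
        p = apply_window(pred, center, width)
        t = apply_window(target, center, width)
        total += sum(abs(a - b) for a, b in zip(p, t)) / len(p)
    return total / len(windows)

# Illustrative HU windows (assumed values): full range, soft tissue, bone.
windows = [(0, 4000), (40, 400), (700, 1800)]
loss = multi_window_l1([-100.0, 50.0, 900.0], [-80.0, 45.0, 950.0], windows)
```

Because the bone window saturates soft-tissue values and vice versa, each window isolates the error within one contrast regime before the terms are averaged.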