Automated segmentation of cancerous lesions in PET/CT scans is a crucial first step in quantitative image analysis. However, training highly accurate deep learning segmentation models is particularly challenging due to variations in lesion size, shape, and radiotracer uptake. Lesions can appear in different parts of the body, often near healthy organs that also exhibit considerable uptake, making the task even more complex. As a result, creating an effective segmentation model for routine PET/CT image analysis remains difficult. In this study, we trained a 3D Residual UNet with the Generalized Dice Focal Loss on the AutoPET Challenge 2024 dataset. We performed 5-fold cross-validation and combined the five fold models via average ensembling. In the preliminary test phase for Task-1, the average ensemble achieved a mean Dice Similarity Coefficient (DSC) of 0.6687, a mean false negative volume (FNV) of 10.9522 ml, and a mean false positive volume (FPV) of 2.9684 ml. More details about the algorithm can be found in our GitHub repository: https://github.com/ahxmeds/autosegnet2024.git. The training code has been shared via the repository: https://github.com/ahxmeds/autopet2024.git.
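The average-ensembling and DSC evaluation steps described above can be sketched as follows. This is a minimal, illustrative NumPy sketch, not the authors' implementation: it assumes each fold model outputs a voxel-wise sigmoid probability map, and the function names and the 0.5 threshold are assumptions for illustration.

```python
import numpy as np

def average_ensemble(prob_maps, threshold=0.5):
    """Average the fold-wise probability maps voxel-wise, then
    threshold to obtain the final binary lesion mask.
    (Illustrative sketch; threshold value is an assumption.)"""
    mean_prob = np.mean(prob_maps, axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

def dice_coefficient(pred, target, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks."""
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: five folds' sigmoid outputs on a tiny 4x4x4 volume.
rng = np.random.default_rng(0)
fold_outputs = [rng.random((4, 4, 4)) for _ in range(5)]
final_mask = average_ensemble(fold_outputs)
```

In practice the probability maps would come from the five trained 3D Residual UNet models applied to the same PET/CT volume, and the DSC would be computed against the ground-truth lesion annotation.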