This research aims to develop a novel deep learning network, GBU-Net, built on a group-batch-normalized U-Net framework and designed for precise semantic segmentation of the left ventricle in short-axis cine MRI scans. The methodology combines a down-sampling pathway for feature extraction with an up-sampling pathway for detail restoration, adapted for medical imaging. Key modifications introduce techniques for the stronger contextual understanding that cardiac MRI segmentation requires. The dataset consists of 805 left-ventricular MRI scans from 45 patients, and comparative analysis uses established metrics such as the Dice coefficient and mean perpendicular distance. GBU-Net significantly improves the accuracy of left ventricle segmentation in cine MRI scans: in these comparisons it outperforms existing methods on standard metrics, including the Dice coefficient and mean perpendicular distance. The approach is distinctive in its ability to capture contextual information that traditional CNN-based segmentation often misses. An ensemble of GBU-Net models attains a 97% Dice score on the Sunnybrook testing dataset. GBU-Net offers enhanced precision and contextual understanding in left ventricle segmentation for surgical robotics and medical analysis.
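To make the description more concrete, the following is a minimal sketch, not the authors' released implementation, of the two ingredients named in the abstract: a U-Net-style convolutional block that applies both group and batch normalization, and the Dice coefficient used for evaluation. The class name `GNBNConvBlock`, the channel widths, and the group count are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): a U-Net-style block combining
# group and batch normalization, plus a Dice coefficient helper for evaluation.
import torch
import torch.nn as nn


class GNBNConvBlock(nn.Module):
    """Two 3x3 convolutions, each followed by group and batch normalization."""

    def __init__(self, in_ch: int, out_ch: int, groups: int = 8):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.GroupNorm(groups, out_ch),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.GroupNorm(groups, out_ch),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Dice overlap between a binary prediction and a ground-truth mask."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    block = GNBNConvBlock(1, 32)
    x = torch.randn(2, 1, 128, 128)            # batch of single-channel MRI slices
    features = block(x)                        # feature maps on the down-sampling path
    mask = (torch.rand(2, 1, 128, 128) > 0.5)  # placeholder prediction / label
    print(features.shape, dice_coefficient(mask, mask).item())
```

In a full encoder-decoder, blocks like this would be stacked along the down-sampling pathway and mirrored, with skip connections, along the up-sampling pathway; the mean perpendicular distance metric is computed on the resulting contours rather than on the masks directly.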