There has been exploding interest in embracing Transformer-based architectures for medical image segmentation. However, the lack of large-scale annotated medical datasets makes achieving performance on par with natural images challenging. Convolutional networks, in contrast, have stronger inductive biases and are consequently easier to train to high performance. Recently, the ConvNeXt architecture attempted to modernize the standard ConvNet by mirroring Transformer blocks. In this work, we improve upon this design to create a modernized and scalable convolutional architecture tailored to the challenges of data-scarce medical settings. We introduce MedNeXt, a Transformer-inspired large-kernel segmentation network featuring: 1) a fully ConvNeXt 3D encoder-decoder network for medical image segmentation; 2) residual ConvNeXt up- and downsampling blocks that preserve semantic richness across scales; 3) a novel technique for iteratively increasing kernel sizes by upsampling small-kernel networks, which prevents performance saturation on limited medical data; 4) compound scaling of MedNeXt at multiple levels (depth, width, kernel size). This leads to state-of-the-art performance on 4 tasks spanning CT and MRI modalities and varying dataset sizes, representing a modernized deep architecture for medical image segmentation. Our code is made publicly available at: https://github.com/MIC-DKFZ/MedNeXt.
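The kernel-upsampling idea in item 3 can be sketched as follows: a trained small-kernel convolution's weights are interpolated up to a larger spatial size and used to initialize the large-kernel network. This is a minimal NumPy illustration assuming per-axis linear interpolation of a single 3D kernel; the function name `upsample_kernel` and the interpolation scheme are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def upsample_kernel(kernel: np.ndarray, new_size: int) -> np.ndarray:
    """Resize a small conv kernel to `new_size` along every spatial axis
    by linearly interpolating its weights, axis by axis."""
    x_new = np.linspace(0.0, 1.0, new_size)
    for axis in range(kernel.ndim):
        x_old = np.linspace(0.0, 1.0, kernel.shape[axis])
        kernel = np.apply_along_axis(
            lambda v: np.interp(x_new, x_old, v), axis, kernel
        )
    return kernel

# Initialize a 5x5x5 kernel from a trained 3x3x3 kernel.
small = np.random.default_rng(0).standard_normal((3, 3, 3))
large = upsample_kernel(small, 5)
print(large.shape)  # (5, 5, 5)
```

Because the interpolation preserves the corner weights of the small kernel, the enlarged kernel starts close to the trained small-kernel solution rather than from random initialization, which is what lets kernel size grow iteratively without restarting training.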