We present SAM4EM, a novel approach for the 3D segmentation of complex neural structures in electron microscopy (EM) data that leverages the Segment Anything Model (SAM) together with advanced fine-tuning strategies. Our contributions include a prompt-free adapter for SAM that uses two-stage mask decoding to automatically generate prompt embeddings, a dual-stage fine-tuning method based on Low-Rank Adaptation (LoRA) for improving segmentation with limited annotated data, and a 3D memory attention mechanism that ensures segmentation consistency across 3D stacks. We further release a unique benchmark dataset for the segmentation of astrocytic processes and synapses. We evaluated our method on challenging neuroscience segmentation benchmarks targeting mitochondria, glia, and synapses, achieving significant accuracy improvements over state-of-the-art (SOTA) methods, including recent SAM-based adapters developed for the medical domain and other vision-transformer-based approaches. Experimental results indicate that our approach outperforms existing solutions in segmenting complex structures such as glial processes and post-synaptic densities. Our code and models are available at https://github.com/Uzshah/SAM4EM.