Cell image segmentation is usually performed with fully supervised deep learning methods, which rely heavily on large amounts of annotated training data. However, owing to the complexity of cell morphology and the specialized knowledge required, pixel-level annotation of cell images is highly labor-intensive. To address this problem, we propose an active learning framework for cell segmentation that uses only bounding-box annotations, greatly reducing the data annotation cost of cell segmentation algorithms. First, we develop a box-supervised segmentation method (denoted YOLO-SAM) by combining the YOLOv8 detector with the Segment Anything Model (SAM), which effectively reduces the complexity of data annotation. We then integrate YOLO-SAM into an active learning framework that employs MC DropBlock to train the segmentation model with fewer box-annotated samples. Extensive experiments demonstrate that our approach reduces data annotation time by more than 90% compared with mask-supervised deep learning methods.
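The box-supervised idea can be illustrated with a minimal sketch: a detector proposes bounding boxes, and each box is then fed as a prompt to a promptable segmenter, yielding instance masks without any pixel-level labels. The functions below are toy stand-ins for illustration only, not the actual YOLOv8 or SAM APIs.

```python
# Minimal sketch of box-prompted segmentation, assuming a detector that
# emits boxes and a segmenter that accepts a box prompt. Both components
# here are toy stand-ins (simple thresholding), not the real models.
import numpy as np

def toy_detector(image, thresh=0.5):
    """Stand-in for the detector: one box (x0, y0, x1, y1) around bright pixels."""
    ys, xs = np.where(image > thresh)
    if len(xs) == 0:
        return []
    return [(xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)]

def toy_box_prompted_segmenter(image, box, thresh=0.5):
    """Stand-in for a promptable segmenter: segment only inside the box prompt."""
    x0, y0, x1, y1 = box
    mask = np.zeros(image.shape, dtype=bool)
    mask[y0:y1, x0:x1] = image[y0:y1, x0:x1] > thresh
    return mask

def box_supervised_segment(image):
    """Compose detector boxes with box-prompted masks into one instance map."""
    instance_map = np.zeros(image.shape, dtype=np.int32)
    for label, box in enumerate(toy_detector(image), start=1):
        instance_map[toy_box_prompted_segmenter(image, box)] = label
    return instance_map

# A synthetic "cell": a bright 3x3 square on a dark background.
img = np.zeros((8, 8))
img[2:5, 3:6] = 1.0
seg = box_supervised_segment(img)
```

The key design point is that only the box prompt needs human annotation; the dense mask is produced automatically by the prompted segmenter.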
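The active learning component can likewise be sketched: with stochastic regularization (such as DropBlock) left on at inference, repeated forward passes yield a predictive distribution whose entropy ranks unlabeled images, and the most uncertain ones are sent for box annotation. The `predict` stand-in and all names below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of Monte Carlo uncertainty sampling for active learning,
# assuming a segmentation model whose stochastic (DropBlock-enabled)
# forward pass is exposed as `predict(image)` returning per-pixel
# foreground probabilities. A noisy toy model stands in for the network.
import numpy as np

rng = np.random.default_rng(0)

def mc_uncertainty(predict, image, n_passes=10):
    """Average T stochastic passes, then return mean predictive entropy."""
    probs = np.stack([predict(image) for _ in range(n_passes)])  # (T, H, W)
    p = probs.mean(axis=0)
    eps = 1e-8
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return float(entropy.mean())

def select_for_annotation(predict, pool, k=2, n_passes=10):
    """Rank the unlabeled pool by uncertainty; return indices of the top k."""
    scores = [mc_uncertainty(predict, img, n_passes) for img in pool]
    return sorted(range(len(pool)), key=lambda i: scores[i], reverse=True)[:k]

def noisy_predict(image):
    """Toy stochastic model: sigmoid of the image plus dropout-like noise."""
    logits = image + rng.normal(0.0, 0.5, size=image.shape)
    return 1.0 / (1.0 + np.exp(-logits))

# Low-contrast images (small logits) should look most uncertain.
pool = [rng.normal(0.0, s, size=(8, 8)) for s in (0.1, 3.0, 0.2, 2.5)]
picked = select_for_annotation(noisy_predict, pool, k=2)
```

Annotating only the selected images each round is what lets the framework reach strong segmentation accuracy with far fewer labeled samples.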