Understanding a surgical scene is crucial for computer-assisted surgery systems to provide any intelligent assistance functionality. One way of achieving this scene understanding is via scene segmentation, where every pixel of a frame is classified, thereby identifying the visible structures and tissues. Progress on fully segmenting surgical scenes has been made using machine learning. However, such models require large amounts of annotated training data, containing examples of all relevant object classes. Such fully annotated datasets are hard to create, as every pixel in every frame must be annotated by medical experts, and are therefore rarely available. In this work, we propose a method to combine multiple partially annotated datasets, which provide complementary annotations, into one model, enabling better scene segmentation and the use of multiple readily available datasets. Our method aims to combine available data with complementary labels by leveraging mutually exclusive properties to maximize information. Specifically, we propose to use positive annotations of other classes as negative samples and to exclude background pixels of binary annotations, as we cannot tell whether they contain a class that is predicted by the model but not annotated. We evaluate our method by training a DeepLabV3 model on the publicly available Dresden Surgical Anatomy Dataset, which provides multiple subsets with binary segmentations of anatomical structures. Our approach successfully combines 6 classes into one model, increasing the overall Dice Score by 4.4% compared to an ensemble of models trained on the classes individually. By including information on multiple classes, we were able to reduce confusion between stomach and colon by 24%. Our results demonstrate the feasibility of training a model on multiple datasets. This paves the way for future work further alleviating the need for one large, fully segmented dataset.
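The labeling rule described above can be sketched as follows (a minimal NumPy illustration under stated assumptions, not the authors' implementation; the `IGNORE` index, class ids, and `build_target` helper are hypothetical). Merging the binary masks into one multi-class label map means a pixel annotated positive for class A automatically acts as a negative sample for every other class under a softmax cross-entropy loss, while unannotated background pixels are excluded from the loss entirely:

```python
import numpy as np

IGNORE = -1  # label index for pixels excluded from the loss

def build_target(annotations, shape):
    """Merge per-class binary masks into one label map.

    annotations: dict {class_id: binary mask of shape (H, W)},
    containing only the classes annotated for this frame.
    Pixels positive for an annotated class receive that class id;
    under a softmax cross-entropy loss, such a pixel is then a
    negative sample for every other class. All remaining
    (background) pixels are set to IGNORE, since a binary
    annotation cannot tell us whether they contain one of the
    classes that was not annotated in this frame.
    """
    target = np.full(shape, IGNORE, dtype=np.int64)
    for cls, mask in annotations.items():
        target[mask.astype(bool)] = cls
    return target

# Toy 4x4 frame with two binary annotations (hypothetical ids:
# stomach = 0, colon = 1); everything else stays IGNORE.
stomach = np.zeros((4, 4)); stomach[0, :2] = 1
colon = np.zeros((4, 4)); colon[3, 2:] = 1
target = build_target({0: stomach, 1: colon}, (4, 4))
```

In a typical training setup, the resulting label map would be passed to a loss that supports an ignore index (e.g. `ignore_index=-1` in PyTorch's `CrossEntropyLoss`), so that only confidently labeled pixels contribute gradients.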