Understanding a surgical scene is crucial for computer-assisted surgery systems to provide any intelligent assistance functionality. One way of achieving this scene understanding is via scene segmentation, where every pixel of a frame is classified, thereby identifying the visible structures and tissues. Progress on fully segmenting surgical scenes has been made using machine learning. However, such models require large amounts of annotated training data, containing examples of all relevant object classes. Such fully annotated datasets are hard to create, as every pixel in a frame needs to be annotated by medical experts, and are therefore rarely available. In this work, we propose a method to combine multiple partially annotated datasets, which provide complementary annotations, into one model, enabling better scene segmentation and the use of multiple readily available datasets. Our method aims to combine available data with complementary labels by leveraging mutually exclusive properties to maximize information. Specifically, we propose to use positive annotations of other classes as negative samples and to exclude background pixels of binary annotations, as we cannot tell whether they contain a class that is predicted by the model but not annotated in that subset. We evaluate our method by training a DeepLabV3 on the publicly available Dresden Surgical Anatomy Dataset, which provides multiple subsets of binary segmented anatomical structures. Our approach successfully combines 6 classes into one model, increasing the overall Dice Score by 4.4% compared to an ensemble of models trained on the classes individually. By including information on multiple classes, we were able to reduce confusion between stomach and colon by 24%. Our results demonstrate the feasibility of training a model on multiple datasets. This paves the way for future work further alleviating the need for one large, fully segmented dataset.
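The labeling scheme described above can be sketched as a per-class target construction. The following is a minimal illustration (not the authors' code), assuming a one-vs-rest encoding in which each pixel of each class receives a positive, negative, or ignore label; the function name and the `IGNORE` sentinel are hypothetical choices for this sketch:

```python
import numpy as np

IGNORE = -1  # sentinel for pixels excluded from the loss


def build_targets(binary_mask, annotated_class, num_classes):
    """Expand one binary annotation (H, W) for `annotated_class` into
    per-class targets of shape (num_classes, H, W).

    Mutual exclusivity: a pixel that is positive for the annotated class
    is also a negative example for every other class. A background pixel
    of the binary mask is a negative only for the annotated class; the
    other classes stay IGNORE there, since an unannotated structure
    could still be present at that pixel.
    """
    h, w = binary_mask.shape
    targets = np.full((num_classes, h, w), IGNORE, dtype=np.int8)
    pos = binary_mask.astype(bool)
    # Positive pixels: 1 for the annotated class, 0 for all other classes.
    targets[:, pos] = 0
    targets[annotated_class, pos] = 1
    # Background pixels: 0 only for the annotated class; others remain IGNORE.
    targets[annotated_class, ~pos] = 0
    return targets
```

During training, a per-class binary loss would then simply skip every entry equal to `IGNORE`, so complementary subsets contribute supervision only where their annotations are informative.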