To enable context-aware computer assistance in the operating room of the future, cognitive systems need to understand automatically which surgical phase is being performed by the medical team. The primary source of information for surgical phase recognition is typically video, which presents two challenges: extracting meaningful features from the video stream and effectively modeling temporal information in the sequence of visual features. For temporal modeling, attention mechanisms have gained popularity due to their ability to capture long-range dependencies. In this paper, we explore design choices for attention in existing temporal models for surgical phase recognition and propose a novel approach that uses attention more effectively and does not require hand-crafted constraints: TUNeS, an efficient and simple temporal model that incorporates self-attention at the core of a convolutional U-Net structure. In addition, we propose to train the feature extractor, a standard CNN, together with an LSTM on preferably long video segments, i.e., with long temporal context. In our experiments, almost all temporal models performed better on top of feature extractors that were trained with longer temporal context. On these contextualized features, TUNeS achieves state-of-the-art results on the Cholec80 dataset. This study offers new insights into how to use attention mechanisms to build accurate and efficient temporal models for surgical phase recognition. Implementing automatic surgical phase recognition is essential to automate the analysis and optimization of surgical workflows and to enable context-aware computer assistance during surgery, thus ultimately improving patient care.
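To illustrate the architectural idea described above, the following is a minimal NumPy sketch (not the authors' implementation) of self-attention placed at the bottleneck of a 1D convolutional U-Net operating over a sequence of frame features. All layer counts, kernel sizes, and dimensions here are illustrative assumptions; the actual TUNeS model is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv1d_relu(x, w):
    # x: (T, C_in), w: (k, C_in, C_out); 'same' padding, stride 1, ReLU
    k, pad = w.shape[0], w.shape[0] // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.stack([
        np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
        for t in range(x.shape[0])
    ])
    return np.maximum(out, 0.0)

def downsample(x):
    # Halve the temporal resolution by average pooling (encoder path).
    T = x.shape[0] // 2 * 2
    return x[:T].reshape(-1, 2, x.shape[1]).mean(axis=1)

def upsample(x, T):
    # Nearest-neighbor upsampling back to length T (decoder path).
    return np.repeat(x, 2, axis=0)[:T]

def self_attention(x, wq, wk, wv):
    # Single-head scaled dot-product self-attention over time steps.
    q, k, v = x @ wq, x @ wk, x @ wv
    att = softmax(q @ k.T / np.sqrt(q.shape[1]))
    return att @ v

# Toy dimensions: T frames, C feature channels (illustrative values).
T, C = 16, 8
x = rng.normal(size=(T, C))                 # per-frame visual features
w_enc = rng.normal(size=(3, C, C)) * 0.1
w_dec = rng.normal(size=(3, 2 * C, C)) * 0.1
wq, wk, wv = (rng.normal(size=(C, C)) * 0.1 for _ in range(3))

e1 = conv1d_relu(x, w_enc)                  # encoder at full resolution
e2 = downsample(e1)                         # coarse temporal resolution
b = e2 + self_attention(e2, wq, wk, wv)     # attention at the U-Net core
d1 = upsample(b, T)                         # back to full resolution
out = conv1d_relu(np.concatenate([d1, e1], axis=1), w_dec)  # skip connection
print(out.shape)
```

Placing attention only at the coarsest resolution is what makes this arrangement efficient: the quadratic cost of attention is paid over a downsampled sequence, while the convolutional encoder and decoder handle local temporal structure at full resolution.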