Accurate segmentation of guidewires in interventional cardiac fluoroscopy videos is crucial for computer-aided navigation tasks. Although deep learning methods have demonstrated high accuracy and robustness in wire segmentation, their generalizability depends on substantial annotated datasets, which are costly and time-consuming to obtain. To address this challenge, we propose the Segmentation-guided Frame-consistency Video Diffusion Model (SF-VD), which generates large collections of labeled fluoroscopy videos to augment the training data for wire segmentation networks. SF-VD leverages videos with limited annotations by independently modeling the scene distribution and the motion distribution. It first samples the scene distribution by generating 2D fluoroscopy images with wires positioned according to a specified input mask, and then samples the motion distribution by progressively generating subsequent frames, ensuring frame-to-frame coherence through a frame-consistency strategy. A segmentation-guided mechanism further refines the process by adjusting wire contrast, ensuring a diverse range of wire visibility in the synthesized images. Evaluation on a fluoroscopy dataset confirms the superior quality of the generated videos and shows significant improvements in guidewire segmentation.
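The two-stage sampling procedure described above can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: the function names, the placeholder "denoising" steps, and the `contrast` parameter standing in for the segmentation-guided mechanism are all assumptions, with the real diffusion networks replaced by trivial stand-ins.

```python
# Hypothetical sketch of the SF-VD two-stage sampling pipeline.
# All names and internals are illustrative placeholders, not the paper's code.

def sample_scene(wire_mask, contrast=1.0):
    """Stage 1: sample the scene distribution -- generate a single 2D
    fluoroscopy frame whose wire follows `wire_mask`. The `contrast`
    factor stands in for the segmentation-guided contrast adjustment."""
    # Placeholder: render the mask at the given wire contrast.
    return [[contrast * m for m in row] for row in wire_mask]

def sample_motion(prev_frame):
    """Stage 2: sample the motion distribution -- generate the next frame
    conditioned on the previous one (the frame-consistency strategy)."""
    # Placeholder: copy the previous frame; a real model would denoise
    # a new frame conditioned on it.
    return [row[:] for row in prev_frame]

def generate_video(wire_mask, num_frames, contrast=1.0):
    """Autoregressively build a labeled clip: one scene sample, then
    progressive motion samples for frame-to-frame coherence. Each frame
    is paired with the known input mask as its segmentation label."""
    frames = [sample_scene(wire_mask, contrast)]
    for _ in range(num_frames - 1):
        frames.append(sample_motion(frames[-1]))
    return frames

mask = [[0, 1, 0],
        [0, 1, 0]]
clip = generate_video(mask, num_frames=4, contrast=0.5)
print(len(clip))  # → 4
```

Because the wire mask is an input rather than a prediction, every synthesized video comes with a pixel-accurate label for free, which is what makes the generated clips usable as segmentation training data.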