Conventional deep learning models process images individually and, in medical imaging, require costly and time-consuming expert labeling; in addition, domain-specific training limits model generalizability. Visual in-context learning (ICL) is a new and exciting area of research in computer vision. Unlike conventional deep learning, ICL emphasizes a model's ability to quickly adapt to new tasks from given examples. Inspired by MAE-VQGAN, we propose SimICL, a simple new visual ICL method that combines visual ICL image pairing with masked image modeling (MIM) designed for self-supervised learning. We validated our method on bony structure segmentation in a wrist ultrasound (US) dataset with limited annotations, where the clinical objective was to segment bony structures to support subsequent fracture detection. We used a test set of 3822 images from 18 patients for bony region segmentation. SimICL achieved a remarkably high Dice coefficient (DC) of 0.96 and Jaccard index (IoU) of 0.92, surpassing state-of-the-art segmentation and visual ICL models (maximum DC 0.86 and IoU 0.76), with improvements of up to 0.10 in DC and 0.16 in IoU. This high agreement obtained with limited manual annotations indicates that SimICL could be used to train AI models even on small US datasets. This could dramatically reduce the expert time required for image labeling compared with conventional approaches and enhance the real-world use of AI assistance in US image analysis.
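For reference, the Dice coefficient (DC) and Jaccard index (IoU) reported above are the standard region-overlap metrics between a predicted mask A and a ground-truth mask B:

$$\mathrm{DC}(A,B) = \frac{2\,|A \cap B|}{|A| + |B|}, \qquad \mathrm{IoU}(A,B) = \frac{|A \cap B|}{|A \cup B|}$$

Both range from 0 (no overlap) to 1 (perfect agreement), so a DC of 0.96 and an IoU of 0.92 indicate near-complete overlap between predicted and annotated bony regions.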