Clinically deployed deep learning-based segmentation models are known to fail on data outside of their training distributions. Although clinicians review the segmentations, these models perform well in most instances, which could exacerbate automation bias. Therefore, detecting out-of-distribution images at inference is critical for warning clinicians that the model likely failed. This work applied the Mahalanobis distance (MD) post hoc to the bottleneck features of four Swin UNETR and nnU-Net models that segmented the liver on T1-weighted magnetic resonance imaging and computed tomography. By reducing the dimensionality of the bottleneck features with either principal component analysis or uniform manifold approximation and projection, images the models failed on were detected with high performance and minimal computational load. In addition, this work explored a non-parametric alternative to the MD: the k-th nearest neighbor distance (KNN). KNN drastically improved scalability and performance over MD when both were applied to raw and average-pooled bottleneck features.
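The scoring pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature arrays are random stand-ins for bottleneck features extracted from a segmentation network, and the component count and k are hypothetical choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Hypothetical stand-ins for bottleneck features: rows are images,
# columns are (flattened or pooled) feature dimensions.
train_feats = rng.normal(size=(200, 768))  # in-distribution training features
test_feats = rng.normal(size=(50, 768))    # features of images seen at inference

# Reduce dimensionality with PCA before fitting the Mahalanobis distance
# (the work also considers UMAP as an alternative reducer).
pca = PCA(n_components=32).fit(train_feats)
z_train = pca.transform(train_feats)
z_test = pca.transform(test_feats)

# Mahalanobis distance to the training distribution (single Gaussian fit):
# larger scores suggest an out-of-distribution input.
mu = z_train.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(z_train, rowvar=False))
diff = z_test - mu
md_scores = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# Non-parametric alternative: distance to the k-th nearest training feature.
k = 5
knn = NearestNeighbors(n_neighbors=k).fit(z_train)
knn_scores = knn.kneighbors(z_test)[0][:, -1]  # k-th neighbor distance per image
```

Either score can then be thresholded (e.g. at a percentile of the training scores) to flag images for clinician review.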