Wound care is often challenged by the economic and logistical burdens that consistently afflict patients and hospitals worldwide. In recent decades, healthcare professionals have sought support from computer vision and machine learning algorithms. In particular, wound segmentation has gained interest due to its ability to provide professionals with fast, automatic tissue assessment from standard RGB images. Some approaches have extended segmentation to 3D, enabling more complete and precise tracking of healing progress. However, inferring multi-view consistent 3D structures from 2D images remains a challenge. In this paper, we evaluate WoundNeRF, an SDF-based NeRF method for estimating robust wound segmentations from automatically generated annotations. We demonstrate the potential of this paradigm in recovering accurate segmentations by comparing it against state-of-the-art Vision Transformer networks and conventional rasterisation-based algorithms. The code will be released to facilitate further development in this promising paradigm.