Automated patient positioning plays an important role in optimizing scanning procedures and improving patient throughput. Leveraging depth information captured by RGB-D cameras presents a promising approach for estimating internal organ positions, thereby enabling more accurate and efficient positioning. In this work, we propose a learning-based framework that directly predicts the 3D locations and shapes of multiple internal organs from a single 2D depth image of the body surface. Utilizing a large-scale dataset of full-body MRI scans, we synthesize depth images paired with corresponding anatomical segmentations to train a unified convolutional neural network architecture. Our method accurately localizes a diverse set of anatomical structures, including bones and soft tissues, without requiring explicit surface reconstruction. Experimental results demonstrate the potential of integrating depth sensors into radiology workflows to streamline scanning procedures and enhance patient experience through automated patient positioning.
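The abstract's core pipeline, a unified CNN that maps a single 2D depth image to 3D organ predictions, could in a much-simplified form look like the following forward-pass sketch. All layer sizes, the organ count, the random weights, and the coordinate-regression head are illustrative assumptions for exposition, not the paper's actual architecture:

```python
import numpy as np

def conv2d_relu(x, w, stride=2):
    # x: (C_in, H, W) feature map, w: (C_out, C_in, k, k) filters.
    # Valid (no-padding) strided convolution followed by ReLU.
    c_out, c_in, k, _ = w.shape
    ho = (x.shape[1] - k) // stride + 1
    wo = (x.shape[2] - k) // stride + 1
    out = np.zeros((c_out, ho, wo))
    for i in range(ho):
        for j in range(wo):
            patch = x[:, i * stride:i * stride + k, j * stride:j * stride + k]
            out[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
    return np.maximum(out, 0.0)

def predict_organ_centers(depth, weights, n_organs=4):
    # depth: (H, W) single-channel depth image of the body surface.
    # Returns an (n_organs, 3) array of predicted 3D organ centers.
    x = depth[None]  # add a channel dimension: (1, H, W)
    for w in weights["convs"]:
        x = conv2d_relu(x, w)
    feat = x.mean(axis=(1, 2))  # global average pooling -> feature vector
    return (feat @ weights["head"]).reshape(n_organs, 3)

# Hypothetical random weights and a synthetic depth image, for shape checking only.
rng = np.random.default_rng(0)
weights = {
    "convs": [rng.normal(0, 0.1, (8, 1, 3, 3)),
              rng.normal(0, 0.1, (16, 8, 3, 3))],
    "head": rng.normal(0, 0.1, (16, 4 * 3)),
}
depth = rng.random((32, 32))
centers = predict_organ_centers(depth, weights)
print(centers.shape)  # (4, 3): one 3D center per organ
```

A real system along these lines would also predict organ shapes (e.g. per-organ segmentation masks or mesh parameters) and be trained end-to-end on the synthesized depth/segmentation pairs the abstract describes; this sketch only illustrates the depth-image-in, 3D-coordinates-out structure of the task.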