Interactive photorealistic rendering of 3D anatomy is used in medical education to explain the structure of the human body. It is currently restricted to frontal teaching scenarios, where interactive demonstrations can hardly be achieved even with a powerful GPU and high-speed access to the large storage device hosting the data set. We present the use of novel view synthesis via compressed 3D Gaussian Splatting (3DGS) to overcome this restriction, and even to enable students to perform cinematic anatomy on lightweight, mobile devices. Our proposed pipeline first finds a set of camera poses that captures all potentially visible structures in the data. High-quality images are then generated with path tracing and converted into a compact 3DGS representation that consumes < 70 MB even for data sets of multiple GBs. This enables real-time photorealistic novel view synthesis that recovers structures up to the voxel resolution and is almost indistinguishable from the path-traced images.
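The first pipeline step, selecting camera poses that see all relevant structures, can be illustrated with a minimal sketch. This example assumes a simple Fibonacci-sphere heuristic that distributes viewpoints evenly on a sphere enclosing the volume; the paper's actual pose-selection algorithm may differ:

```python
import numpy as np

def fibonacci_sphere_poses(n: int, radius: float = 1.0) -> np.ndarray:
    """Place n camera positions quasi-uniformly on a sphere of the given
    radius around the volume's center (assumed at the origin)."""
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2          # golden ratio
    z = 1 - 2 * (i + 0.5) / n            # uniform spacing in z
    theta = 2 * np.pi * i / golden       # golden-angle spacing in azimuth
    r = np.sqrt(1 - z ** 2)              # ring radius at each z slice
    pts = np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
    return radius * pts

# Example: 200 candidate viewpoints at distance 2.5 from the volume center;
# each would then be rendered with path tracing to supervise 3DGS training.
poses = fibonacci_sphere_poses(200, radius=2.5)
```

In practice, the look-at direction for each pose would point toward the volume center, and poses that see no relevant anatomy could be discarded.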