We present a parallel compositing algorithm for Volumetric Depth Images (VDIs) of large three-dimensional volume data. Large distributed volume data are routinely produced in both numerical simulations and experiments, yet it remains challenging to visualize them at smooth, interactive frame rates. VDIs are view-dependent piecewise constant representations of volume data that offer a potential solution. They are more compact and less expensive to render than the original data. So far, however, there is no method for generating VDIs from distributed data. We propose an algorithm that enables this by sort-last parallel generation and compositing of VDIs with automatically chosen content-adaptive parameters. The resulting composited VDI can then be streamed for remote display, providing responsive visualization of large, distributed volume data.
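The core idea of sort-last compositing of depth-segment representations can be illustrated with a minimal sketch. This is not the paper's implementation: the segment format, function name, and early-termination threshold are illustrative assumptions. Each rank renders its sub-volume into per-pixel lists of depth segments; compositing merges the lists by depth and alpha-blends front to back.

```python
# Hedged sketch (assumed representation, not the paper's code): sort-last
# compositing of per-pixel depth segments. Each rank contributes a list of
# (depth, (r, g, b), alpha) tuples for a pixel; the compositor merges all
# lists by depth and blends front to back.

def composite_pixel(segment_lists):
    """Merge per-rank segment lists for one pixel and blend front to back.

    segment_lists: iterable of lists of (depth, (r, g, b), alpha) tuples,
    one list per rank. Returns the blended (r, g, b, a) for the pixel.
    """
    # Sort-last: the data is distributed by object space, so depth ordering
    # is resolved here, per pixel, when the partial results are merged.
    segments = sorted(
        (s for lst in segment_lists for s in lst), key=lambda s: s[0]
    )
    r = g = b = a = 0.0
    for _depth, (sr, sg, sb), sa in segments:
        w = (1.0 - a) * sa  # remaining transmittance times segment opacity
        r += w * sr
        g += w * sg
        b += w * sb
        a += w
        if a >= 0.999:      # early termination once the pixel is opaque
            break
    return (r, g, b, a)
```

For example, a half-transparent green segment in front of an opaque red one composites to an opaque yellow-ish pixel, regardless of which rank produced which segment.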