Visualizing the large-scale datasets output by HPC resources presents a difficult challenge, as the memory and compute power required become prohibitively expensive for end-user systems. Novel view synthesis techniques can address this by producing a small, interactive model of the data, requiring only a set of training images to learn from. While these models allow accessible visualization of large data and complex scenes, they do not provide the interactions needed for scientific volumes, as they do not support interactive selection of transfer functions and lighting parameters. To address this, we introduce Volume Encoding Gaussians (VEG), a 3D Gaussian-based representation for volume visualization that supports arbitrary color and opacity mappings. Unlike prior 3D Gaussian Splatting (3DGS) methods that store color and opacity for each Gaussian, VEG decouple the visual appearance from the data representation by encoding only scalar values, enabling transfer-function-agnostic rendering of 3DGS models. To ensure complete scalar field coverage, we introduce an opacity-guided training strategy that uses differentiable rendering with multiple transfer functions to optimize our data representation. This allows VEG to preserve fine features across the full scalar range of a dataset while remaining independent of any specific transfer function. Across a diverse set of volume datasets, we demonstrate that our method outperforms the state of the art on transfer functions unseen during training, while requiring a fraction of the memory and training time.
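The core idea above, storing a scalar value per Gaussian and deferring color and opacity to a transfer function applied at render time, can be illustrated with a minimal sketch. This is not the paper's implementation; the lookup-table transfer function, the array names, and the `apply_transfer_function` helper are all hypothetical, chosen only to show how one representation can serve arbitrary appearance mappings.

```python
import numpy as np

# Hypothetical per-Gaussian scalar field values in [0, 1]; in VEG these
# would be the learned scalars, not the color/opacity of classic 3DGS.
rng = np.random.default_rng(0)
scalars = rng.random(5)

# A toy 1D transfer function as a 256-entry RGBA lookup table:
# red ramps up, blue ramps down, opacity emphasizes high scalar values.
tf_lut = np.stack([
    np.linspace(0.0, 1.0, 256),       # R
    np.zeros(256),                    # G
    np.linspace(1.0, 0.0, 256),       # B
    np.linspace(0.0, 1.0, 256) ** 2,  # opacity
], axis=1)

def apply_transfer_function(s, lut):
    """Map scalar values to RGBA by indexing into the lookup table."""
    idx = np.clip((s * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    return lut[idx]

# The same scalars can be re-colored under any transfer function,
# without retraining the Gaussian representation.
rgba = apply_transfer_function(scalars, tf_lut)  # shape (5, 4)
```

Swapping `tf_lut` for a different lookup table changes the rendered appearance while the stored Gaussians stay fixed, which is the transfer-function-agnostic property the abstract describes.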