We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored to scenes that are procedurally constructed from a given set of building rules. Rendering large scenes normally requires either the entire scene to be available in-core or out-of-core management that loads data into the memory hierarchy as part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on demand, on the fly. The key idea is a position- and view-dependent procedural scene-construction strategy, in which only a fraction of the atomistic scene around the camera resides in GPU memory at any given time. The atomistic detail is populated into a uniform space partitioning, a grid that covers the entire scene. Most grid cells contain no geometry; only those potentially visible to the camera are populated. The atomistic detail is generated in a compute shader, and its representation is connected to the acceleration structures used for hardware ray tracing on modern GPUs. Distant objects, whose atomistic detail is not perceivable from the given viewpoint, are represented by a triangle mesh mapped with a seamless texture generated by rendering the geometry from the atomistic detail. The algorithm consists of two pipelines, a construction-compute pipeline and a rendering pipeline, which work together to render molecular scenes at an atomistic resolution far beyond the limit of GPU memory, containing trillions of atoms. We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.
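The core selection step described above, populating only grid cells that are near the camera or potentially visible, can be illustrated with a minimal CPU-side sketch. The abstract does not specify the visibility test, so the cone test, the `fov_cos` threshold, and the distance cutoffs below are illustrative assumptions; in the actual method this decision feeds a GPU compute shader that fills the selected cells with atoms.

```python
import math

def cells_to_populate(cam_pos, view_dir, cell_size, grid_dim,
                      near_radius, fov_cos=0.5, far_factor=4.0):
    """Sketch of position- and view-dependent cell selection.

    Returns indices of grid cells to fill with atomistic detail:
    cells within near_radius of the camera are always kept, and
    cells inside an approximate view cone are kept up to a far
    cutoff. All other cells would fall back to the textured-mesh
    representation. view_dir is assumed to be a unit vector.
    """
    populate = []
    for i in range(grid_dim[0]):
        for j in range(grid_dim[1]):
            for k in range(grid_dim[2]):
                # Cell center in world space (grid origin at 0,0,0).
                c = tuple((idx + 0.5) * cell_size for idx in (i, j, k))
                d = tuple(c[a] - cam_pos[a] for a in range(3))
                dist = math.sqrt(sum(x * x for x in d))
                if dist <= near_radius:
                    # Always keep cells immediately around the camera.
                    populate.append((i, j, k))
                    continue
                # Approximate view-cone test: cosine of the angle
                # between the view direction and the cell direction.
                cos_ang = sum(d[a] * view_dir[a] for a in range(3)) / dist
                if cos_ang >= fov_cos and dist <= far_factor * near_radius:
                    populate.append((i, j, k))
    return populate
```

For example, with the camera in cell (0, 0, 0) looking along +x in a 4×4×4 grid, the camera's own cell and cells ahead of it are selected, while off-axis distant cells are left for the mesh impostor.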