Implicit Neural Representations (INRs) have emerged as promising surrogates for large 3D scientific simulations because they can model spatial and conditional fields continuously, yet they face a critical fidelity-speed dilemma: deep MLPs incur high inference cost, while efficient embedding-based models lack sufficient expressiveness. To resolve this, we propose the Decoupled Representation Refinement (DRR) architectural paradigm. DRR uses a deep refiner network, together with non-parametric transformations, in a one-time offline process to encode rich representations into a compact and efficient embedding structure. This decouples the slow, high-capacity neural network from the fast inference path. We introduce DRR-Net, a simple network that validates this paradigm, and a novel data augmentation strategy, Variational Pairs (VP), that improves INRs on complex tasks such as high-dimensional surrogate modeling. Experiments on several ensemble simulation datasets demonstrate that our approach achieves state-of-the-art fidelity while being up to 27$\times$ faster at inference than high-fidelity baselines and remaining competitive with the fastest models. The DRR paradigm offers an effective strategy for building powerful and practical neural field surrogates and \rev{INRs in broader applications}, with minimal compromise between speed and quality.
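To make the decoupling concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea only, not the paper's actual DRR-Net: a deep refiner is run once offline to bake its representations into a compact feature grid, after which inference needs only a grid lookup and a tiny decoder. All class and method names (DeepRefiner, DRRSurrogate, refine_offline) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepRefiner(nn.Module):
    """Slow, high-capacity MLP used only during the one-time offline pass."""
    def __init__(self, dim=32, depth=8):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [nn.Linear(dim, dim), nn.GELU()]
        self.net = nn.Sequential(*layers)

    def forward(self, feats):
        return self.net(feats)

class DRRSurrogate(nn.Module):
    def __init__(self, res=64, dim=32):
        super().__init__()
        # Compact 3D feature grid: the efficient embedding structure.
        self.grid = nn.Parameter(torch.randn(1, dim, res, res, res) * 0.01)
        self.refiner = DeepRefiner(dim)            # excluded from inference
        self.decoder = nn.Sequential(              # fast inference path
            nn.Linear(dim, 64), nn.GELU(), nn.Linear(64, 1))
        self.refined_grid = None

    @torch.no_grad()
    def refine_offline(self):
        # One-time offline pass: push every grid feature through the deep
        # refiner (non-parametric transforms would also be applied here).
        # After this, the refiner can be discarded for deployment.
        b, c, *spatial = self.grid.shape
        flat = self.grid.permute(0, 2, 3, 4, 1).reshape(-1, c)
        refined = self.refiner(flat)
        self.refined_grid = refined.reshape(1, *spatial, c).permute(0, 4, 1, 2, 3)

    def forward(self, coords):
        # coords: (N, 3) in [-1, 1]; trilinear lookup into the refined grid,
        # then a small decoder -- no deep network on the inference path.
        grid = self.refined_grid if self.refined_grid is not None else self.grid
        sample = coords.view(1, -1, 1, 1, 3)
        feats = F.grid_sample(grid, sample, align_corners=True)  # (1, C, N, 1, 1)
        feats = feats.view(grid.shape[1], -1).t()                # (N, C)
        return self.decoder(feats)

# Usage (illustrative): refine once offline, then serve fast queries.
model = DRRSurrogate()
model.refine_offline()
values = model(torch.rand(4096, 3) * 2 - 1)  # query 4096 field samples
```

Under this reading, the inference cost is dominated by the grid interpolation and the two-layer decoder, which is what allows a DRR-style model to approach embedding-based speeds while retaining the representational capacity of the deep refiner.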