Implicit neural representation (INR) has emerged as a promising solution for encoding volumetric data, offering continuous representations and seamless compatibility with the volume rendering pipeline. However, optimizing an INR network from randomly initialized parameters for each new volume is computationally inefficient, especially for large-scale time-varying or ensemble volumetric datasets where volumes share similar structural patterns but require independent training. To close this gap, we propose Meta-INR, a pretraining strategy adapted from meta-learning algorithms to learn initial INR parameters from partial observation of a volumetric dataset. Compared to training an INR from scratch, the learned initial parameters provide a strong prior that enhances INR generalizability, allowing significantly faster convergence with just a few gradient updates when adapting to a new volume and offering better interpretability when analyzing the parameters of the adapted INRs. We demonstrate that Meta-INR can effectively extract high-quality generalizable features that help encode unseen similar volume data across diverse datasets. Furthermore, we highlight its utility in tasks such as simulation parameter analysis and representative timestep selection. The code is available at https://github.com/spacefarers/MetaINR.
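The core idea above (meta-learn an initialization so a new, structurally similar volume can be fitted with only a few gradient updates) can be illustrated with a minimal Reptile-style sketch. This is a hypothetical toy, not the paper's Meta-INR procedure: a tiny 1D coordinate MLP stands in for the INR, random-phase sine signals stand in for an ensemble of similar volumes, and Reptile stands in for whatever meta-learning algorithm the method actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params():
    # Tiny 1 -> 32 -> 1 coordinate network (a stand-in for a volumetric INR).
    return {
        "W1": rng.normal(0, 0.5, (1, 32)), "b1": np.zeros(32),
        "W2": rng.normal(0, 0.5, (32, 1)), "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

def grads(p, x, y):
    # Manual backprop of the MSE loss through the 2-layer net.
    h = np.tanh(x @ p["W1"] + p["b1"])
    pred = h @ p["W2"] + p["b2"]
    d = 2 * (pred - y) / len(x)          # dLoss/dpred
    dh = (d @ p["W2"].T) * (1 - h**2)    # back through tanh
    return {"W2": h.T @ d, "b2": d.sum(0),
            "W1": x.T @ dh, "b1": dh.sum(0)}

def inner_adapt(p, x, y, steps=5, lr=0.1):
    # "A few gradient updates" when adapting to a new task (volume).
    p = {k: v.copy() for k, v in p.items()}
    for _ in range(steps):
        g = grads(p, x, y)
        for k in p:
            p[k] -= lr * g[k]
    return p

def sample_task():
    # Each task is a sine with a random phase: structurally similar
    # members of an ensemble, analogous to similar volumes.
    phase = rng.uniform(0, np.pi)
    x = rng.uniform(-1, 1, (64, 1))
    return x, np.sin(3 * x + phase)

# Reptile outer loop: nudge the shared init toward each task's adapted weights.
meta = init_params()
for _ in range(200):
    x, y = sample_task()
    adapted = inner_adapt(meta, x, y)
    for k in meta:
        meta[k] += 0.1 * (adapted[k] - meta[k])

# Adapt the meta-learned init to an unseen task with a few gradient steps.
x, y = sample_task()
before = float(np.mean((forward(meta, x) - y) ** 2))
after = float(np.mean((forward(inner_adapt(meta, x, y), x) - y) ** 2))
print(before, after)
```

The meta-learned parameters act as the "strong prior" described above: the adapted loss after only five inner steps is markedly lower than the loss at initialization, whereas a randomly initialized network would need many more updates to reach comparable quality.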