We propose EVOlutionary Selector (EVOS), an efficient training paradigm for accelerating Implicit Neural Representation (INR). Unlike conventional INR training, which feeds all samples through the neural network in each iteration, our approach restricts training to strategically selected points, reducing computational overhead by eliminating redundant forward passes. Specifically, we treat each sample as an individual in an evolutionary process, where only the fittest survive and merit inclusion in training, adaptively evolving with the neural network's dynamics. While this is conceptually similar to Evolutionary Algorithms, their distinct objectives (selection for acceleration vs. iterative solution optimization) require a fundamental redefinition of evolutionary mechanisms for our context. In response, we design three components, sparse fitness evaluation, frequency-guided crossover, and augmented unbiased mutation, which together comprise EVOS. These components respectively guide sample selection at reduced computational cost, enhance performance through frequency-domain balance, and mitigate the selection bias introduced by cached evaluation. Extensive experiments demonstrate that our method achieves an approximately 48%-66% reduction in training time while ensuring superior convergence without additional cost, establishing state-of-the-art acceleration among recent sampling-based strategies.
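To make the selection mechanism concrete, the sketch below outlines an EVOS-style training loop in PyTorch. It is an illustrative approximation, not the authors' implementation: the hyperparameter names (`fitness_interval`, `survival_ratio`, `mutation_ratio`), the plain MLP backbone, and the use of per-sample L2 error as the fitness score are assumptions made for exposition, and the frequency-guided crossover component is omitted since its exact formulation is not specified here.

```python
# Illustrative sketch of an EVOS-style selection loop (not the authors' code).
# Assumed for exposition: per-sample L2 error as fitness, a ReLU MLP as the
# INR backbone, and the hyperparameter names/values below.
import torch
import torch.nn as nn

def make_inr(hidden=256):
    # A plain ReLU MLP stands in for whatever INR backbone is used.
    return nn.Sequential(
        nn.Linear(2, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 3),
    )

def evos_train(coords, targets, steps=2000,
               fitness_interval=20,   # sparse fitness evaluation period
               survival_ratio=0.25,   # fraction of samples trained per step
               mutation_ratio=0.1):   # fraction replaced by random samples
    n = coords.shape[0]
    k = int(n * survival_ratio)
    model = make_inr()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    cached_fitness = torch.rand(n)  # stale between sparse evaluations

    for step in range(steps):
        # Sparse fitness evaluation: score every sample only occasionally,
        # so most iterations skip the full-resolution forward pass.
        if step % fitness_interval == 0:
            with torch.no_grad():
                cached_fitness = (model(coords) - targets).pow(2).mean(dim=-1)

        # Selection: the fittest individuals (highest error) survive.
        survivors = torch.topk(cached_fitness, k).indices.clone()

        # Unbiased mutation: swap a few survivors for uniformly random
        # samples to counter the bias of the stale cached fitness.
        m = int(k * mutation_ratio)
        if m > 0:
            survivors[torch.randperm(k)[:m]] = torch.randint(0, n, (m,))

        # Train only on the selected subset.
        loss = (model(coords[survivors]) - targets[survivors]).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Usage: fit a random RGB signal sampled on a 64x64 coordinate grid.
xy = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64),
    indexing="ij"), dim=-1).reshape(-1, 2)
rgb = torch.rand(xy.shape[0], 3)
evos_train(xy, rgb, steps=200)
```

The savings come from the `if step % fitness_interval == 0` branch: the full forward pass over all `n` samples runs only once per interval, while every other iteration touches just the `k` survivors.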