Neural fields are an emerging paradigm that represents data as continuous functions parameterized by neural networks. Despite many advantages, neural fields often incur a high training cost, which prevents broader adoption. In this paper, we focus on a popular family of neural fields, called sinusoidal neural fields (SNFs), and study how they should be initialized to maximize the training speed. We find that the standard initialization scheme for SNFs -- designed based on the signal propagation principle -- is suboptimal. In particular, we show that by simply multiplying each weight (except for the last layer) by a constant, we can accelerate SNF training by 10$\times$. This method, coined $\textit{weight scaling}$, consistently provides a significant speedup across various data domains, allowing SNFs to train faster than more recently proposed architectures. To understand why weight scaling works well, we conduct extensive theoretical and empirical analyses, which reveal that weight scaling not only mitigates spectral bias quite effectively but also enjoys a well-conditioned optimization trajectory.
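As a rough sketch of the scheme described above (notation ours: $\alpha$ denotes the scaling constant, $L$ the number of layers, and $W_\ell$ the weight matrix of layer $\ell$ drawn from the standard SNF initialization), weight scaling keeps the standard initialization and simply rescales every layer but the last:
$$
W_\ell \;\leftarrow\; \alpha \, W_\ell, \qquad \ell = 1, \dots, L-1,
$$
while the last-layer weight $W_L$ is left unchanged.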