Neural Metamorphosis (NeuMeta) is a recent paradigm for generating neural networks of varying width and depth. Based on Implicit Neural Representation (INR), NeuMeta learns a continuous weight manifold, enabling the direct generation of compressed models, including configurations not seen during training. While promising, the original formulation of NeuMeta proves effective only for the final layers of the underlying model, limiting its broader applicability. In this work, we propose a training algorithm that extends NeuMeta to full-network metamorphosis with minimal accuracy degradation. Our approach follows a structured recipe comprising block-wise incremental training, INR initialization, and strategies for replacing batch normalization. The resulting metamorphic networks maintain competitive accuracy across a wide range of compression ratios, offering a scalable solution for adaptable and efficient deployment of deep models. The code is available at: https://github.com/TSommariva/HTTY_NeuMeta.