Weight Space Learning (WSL), which treats neural network weights as a data modality, is an emerging field with potential for tasks such as meta-learning and transfer learning. In particular, Implicit Neural Representations (INRs) provide a convenient testbed: each set of weights encodes an individual data sample as a mapping from coordinates to the corresponding signal values. To date, a precise theoretical explanation of how the semantics of data are encoded into network weights is still missing. In this work, we apply the Implicit Function Theorem (IFT) to establish a rigorous mapping between the data space and its latent weight-representation space. We analyze a framework that maps instance-specific embeddings to INR weights via a shared hypernetwork, achieving performance competitive with existing baselines on downstream classification tasks across 2D and 3D datasets. These findings offer a theoretical lens for future investigations into network weights.
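The two components mentioned above can be illustrated with a minimal sketch: an INR is a small MLP mapping coordinates to values, and a shared hypernetwork maps an instance-specific embedding to that MLP's flattened weights. All dimensions, the single-linear-layer hypernetwork, and the sinusoidal activation are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sizes, chosen for illustration only.
EMB_DIM = 16            # instance-specific embedding size
HIDDEN = 32             # INR hidden width
IN_DIM, OUT_DIM = 2, 1  # 2D coordinates -> one signal value (e.g. grayscale)

rng = np.random.default_rng(0)

# Parameter shapes of a tiny INR MLP: (IN_DIM -> HIDDEN -> OUT_DIM).
inr_shapes = [(IN_DIM, HIDDEN), (HIDDEN,), (HIDDEN, OUT_DIM), (OUT_DIM,)]
n_params = sum(int(np.prod(s)) for s in inr_shapes)

# Shared hypernetwork, reduced here to a single linear map
# from the embedding to the full flattened INR weight vector.
H = rng.normal(0.0, 0.1, size=(EMB_DIM, n_params))

def inr_forward(theta_flat, coords):
    """Evaluate the INR defined by flattened weights at given coordinates."""
    params, i = [], 0
    for s in inr_shapes:
        size = int(np.prod(s))
        params.append(theta_flat[i:i + size].reshape(s))
        i += size
    W1, b1, W2, b2 = params
    h = np.sin(coords @ W1 + b1)  # sinusoidal activation, as in SIREN-style INRs
    return h @ W2 + b2

z = rng.normal(size=EMB_DIM)   # instance-specific embedding for one data sample
theta = z @ H                  # hypernetwork output = that sample's INR weights

# Query the resulting INR on a 4x4 coordinate grid in [-1, 1]^2.
grid = np.linspace(-1.0, 1.0, 4)
coords = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, 2)
values = inr_forward(theta, coords)
print(values.shape)  # (16, 1)
```

Under this view, downstream tasks such as classification can operate on the embedding `z` (or on `theta` itself) rather than on the raw signal, which is the weight-space perspective the abstract describes.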