We present a complete mechanistic description of the algorithm learned by a minimal non-linear sparse data autoencoder in the limit of large input dimension. The model, originally presented in arXiv:2209.10652, compresses sparse data vectors through a linear layer and decompresses using another linear layer followed by a ReLU activation. We observe that when the data is permutation symmetric (no input feature is privileged), large models reliably learn an algorithm that is sensitive to individual weights only through their large-scale statistics. For these models, the loss function becomes analytically tractable. Using this understanding, we give the explicit scalings of the loss at high sparsity, and show that the model is near-optimal among recently proposed architectures. In particular, modifying the activation function with any elementwise or filtering operation can improve the model's performance by at most a constant factor. Finally, we forward-engineer a model with the requisite symmetries and show that its loss precisely matches that of the trained models. Unlike the trained model weights, the low randomness in the artificial weights results in miraculous fractal structures resembling a Persian rug, to which the algorithm is oblivious. Our work contributes to neural network interpretability by introducing techniques for understanding the structure of autoencoders. Code to reproduce our results can be found at https://github.com/KfirD/PersianRug.
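The architecture described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the tied weights (decoding with the transpose of the encoding matrix), the Gaussian initialization scale, and the sparse data distribution are assumptions in the spirit of the toy model of arXiv:2209.10652, and the dimensions `n`, `m` and sparsity `p` are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 16  # input dimension n, compressed (hidden) dimension m < n
p = 0.05       # probability that a given feature is active (high sparsity)

# Sparse data vector: each feature independently active with probability p,
# with magnitude drawn uniformly from [0, 1] (an assumed data distribution).
x = rng.uniform(size=n) * (rng.uniform(size=n) < p)

# Tied-weight autoencoder: compress with W, decompress with W^T plus a bias,
# then apply a ReLU.
W = rng.normal(scale=1 / np.sqrt(n), size=(m, n))
b = np.zeros(n)

h = W @ x                           # linear compression to m dimensions
x_hat = np.maximum(0, W.T @ h + b)  # linear decompression + ReLU

loss = np.mean((x - x_hat) ** 2)    # reconstruction mean-squared error
```

Training such a model (by gradient descent on `loss` over many sparse samples) is what produces the weight statistics analyzed in the paper; the snippet only fixes the forward pass and loss being studied.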