Normalized difference indices have been a staple of remote sensing for decades: they remain reliable under lighting changes, produce bounded values, and correlate well with biophysical signals. Even so, they are usually treated as a fixed pre-processing step with coefficients set to one, which limits how well they can adapt to a specific learning task. In this study, we introduce the Normalized Difference Layer, a differentiable neural network module that keeps the classical idea but learns the band coefficients from data. We present a complete mathematical framework for integrating this layer into deep learning architectures, using a softplus reparameterization to ensure positive coefficients and bounded denominators, and we describe forward- and backward-pass algorithms that enable end-to-end training through backpropagation. The approach preserves the key benefits of normalized differences, namely illumination invariance and outputs bounded to $[-1,1]$, while allowing gradient descent to discover task-specific band weightings. We further extend the method to signed inputs, so the layer can be stacked inside larger architectures. Experiments show that models using this layer reach classification accuracy comparable to standard multilayer perceptrons while using about 75\% fewer parameters, and they are markedly more robust to multiplicative noise: at the 10\% noise level, accuracy drops by only 0.17\%, versus 3.03\% for baseline MLPs. The learned coefficient patterns remain consistent across network depths.
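To make the construction concrete, the following is a minimal sketch of the forward pass of such a unit, assuming the form $(a x_1 - b x_2)/(a x_1 + b x_2 + \epsilon)$ with $a, b$ obtained from unconstrained parameters via softplus; the function and parameter names here are illustrative, not the paper's implementation.

```python
import numpy as np

def softplus(z):
    # softplus(z) = log(1 + e^z) > 0: maps an unconstrained
    # parameter to a strictly positive coefficient
    return np.log1p(np.exp(z))

def normalized_difference(x1, x2, theta_a=0.0, theta_b=0.0, eps=1e-6):
    """Forward pass of a hypothetical normalized-difference unit.

    theta_a, theta_b are the unconstrained (learnable) parameters;
    softplus keeps the effective coefficients a, b positive, so for
    non-negative inputs the denominator is bounded away from zero
    and the output stays in [-1, 1].
    """
    a = softplus(theta_a)
    b = softplus(theta_b)
    return (a * x1 - b * x2) / (a * x1 + b * x2 + eps)
```

With both parameters at zero the coefficients are equal, recovering the classical index; scaling both inputs by the same multiplicative gain leaves the output essentially unchanged, which is the illumination-invariance property the abstract refers to.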