Integrating hard constraints into deep learning is essential for safety-critical systems. Yet existing constructive layers that project predictions onto constraint boundaries face a fundamental bottleneck: gradient saturation. By collapsing exterior points onto lower-dimensional surfaces, standard orthogonal projections induce rank-deficient Jacobians, which nullify gradients orthogonal to active constraints and hinder optimization. We introduce Soft-Radial Projection, a differentiable reparameterization layer that circumvents this issue through a radial mapping from Euclidean space into the interior of the feasible set. This construction guarantees strict feasibility while preserving a full-rank Jacobian almost everywhere, thereby preventing the optimization stalls typical of boundary-based methods. We theoretically prove that the architecture retains the universal approximation property and empirically show improved convergence behavior and solution quality over state-of-the-art optimization- and projection-based baselines.
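The paper's exact construction is not reproduced in the abstract; the following is a minimal sketch of the idea for the special case of a norm-ball feasible set {y : ||y − c|| ≤ R}, with the class name `SoftRadialBallProjection` and all parameter names being illustrative assumptions. A radial tanh squashing maps all of Euclidean space into the interior of the ball, so feasibility is strict and, in this special case, the map is a diffeomorphism onto the open ball with a full-rank Jacobian everywhere (the general construction in the paper claims full rank almost everywhere).

```python
import torch
import torch.nn as nn

class SoftRadialBallProjection(nn.Module):
    """Hypothetical sketch: radially squash R^n into the open ball B(center, radius).

    Unlike an orthogonal projection, which collapses exterior points onto the
    boundary (rank-deficient Jacobian), this map is injective and smooth, so
    gradients never saturate at the constraint surface.
    """

    def __init__(self, center: torch.Tensor, radius: float):
        super().__init__()
        self.register_buffer("center", center)
        self.radius = radius

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = x - self.center                    # direction from an interior point
        s = d.norm(dim=-1, keepdim=True)       # radial distance
        # tanh(s)/s -> 1 as s -> 0; clamp_min avoids 0/0 without changing the limit.
        scale = torch.tanh(s) / s.clamp_min(1e-12)
        # ||output - center|| = radius * tanh(s) < radius: strictly feasible.
        return self.center + self.radius * scale * d

# Usage: wrap raw network outputs so every prediction is strictly feasible.
layer = SoftRadialBallProjection(center=torch.zeros(3), radius=1.0)
y = layer(torch.randn(8, 3))
assert (y.norm(dim=-1) < 1.0).all()
```

Extending this beyond balls would require an interior point and a gauge of the feasible set, which is where the paper's general construction comes in; the sketch above only illustrates why a radial reparameterization avoids the boundary-collapse pathology.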