Neural networks are increasingly used as surrogate solvers and control policies, but unconstrained predictions can violate physical, operational, or safety requirements. We propose SnareNet, a feasibility-controlled architecture for learning mappings whose outputs must satisfy input-dependent nonlinear constraints. SnareNet appends a differentiable repair layer that navigates in the constraint map's range space, steering iterates toward feasibility and producing a repaired output that satisfies constraints to a user-specified tolerance. To stabilize end-to-end training, we introduce adaptive relaxation, which designs a relaxed feasible set that snares the neural network at initialization and shrinks it into the feasible set, enabling early exploration and strict feasibility later in training. On optimization-learning and trajectory planning benchmarks, SnareNet consistently attains improved objective quality while satisfying constraints more reliably than prior work.
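The two mechanisms above can be illustrated with a toy sketch. Everything here is hypothetical and not SnareNet's actual algorithm: `repair` stands in for the differentiable repair layer (here, plain gradient descent on a squared constraint-violation penalty until a tolerance is met), and `eps_schedule` stands in for adaptive relaxation (a loose tolerance `eps0` that contains the untrained network's outputs, shrunk toward the target tolerance `eps_final` over training).

```python
import numpy as np

def repair(y0, g, grad_g, eps, steps=100, lr=0.1):
    """Hypothetical repair loop (not SnareNet's actual update):
    push y toward the relaxed feasible set {y : g(y) <= eps}
    by gradient descent on the violation penalty 0.5*max(g(y), 0)**2."""
    y = y0.copy()
    for _ in range(steps):
        v = g(y)
        if v <= eps:               # already inside the relaxed set
            break
        y = y - lr * v * grad_g(y)  # gradient of the penalty when v > 0
    return y

def eps_schedule(t, T, eps0, eps_final):
    """Hypothetical adaptive-relaxation schedule: interpolate linearly
    from a loose tolerance eps0 (early exploration) to the strict
    target tolerance eps_final over T training steps."""
    frac = min(t / T, 1.0)
    return (1 - frac) * eps0 + frac * eps_final

# Example constraint: g(y) = ||y||^2 - 1 <= eps (relaxed unit ball)
g = lambda y: float(y @ y - 1.0)
grad_g = lambda y: 2.0 * y

y_raw = np.array([2.0, 2.0])   # infeasible "network output"
eps = eps_schedule(t=900, T=1000, eps0=1.0, eps_final=1e-3)
y_fix = repair(y_raw, g, grad_g, eps)
```

Late in training (`t=900`), the schedule has nearly reached the strict tolerance, so the repaired `y_fix` satisfies `g(y_fix) <= eps` even though the raw prediction `y_raw` violates the constraint badly. In the actual architecture, this repair step would be differentiable so gradients flow through it end to end.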