Neural CDEs provide a natural way to process the temporal evolution of irregular time series. In these systems, the number of function evaluations (NFE) is the natural analog of depth (the number of layers in traditional neural networks). NFE is usually regulated via the solver's error tolerance: a lower tolerance means higher numerical precision and hence more integration steps. However, lowering tolerances does not adequately increase model expressiveness. We propose a simple yet effective alternative: scaling the integration time horizon to increase NFEs and ``deepen'' the model. Since increasing the integration interval causes uncontrollable growth in conventional vector fields, we also propose a way to stabilize the dynamics via Negative Feedback (NF). NF ensures provable stability without constraining flexibility, and it also implies robustness: we derive theoretical bounds on Neural ODE risk using Gaussian process theory. Experiments on four open datasets demonstrate that our method, DeNOTS, outperforms existing approaches, including recent Neural RDEs and state space models, achieving up to a $20\%$ improvement in metrics. DeNOTS combines expressiveness, stability, and robustness, enabling reliable modelling in continuous-time domains.
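For concreteness, here is a minimal sketch of one standard negative-feedback form; the damping term $-\gamma h(t)$ and the scalar gain $\gamma$ are illustrative assumptions, not necessarily the paper's exact NF parameterization. Given a Neural CDE with hidden state $h(t)$, control path $x(t)$, and learned vector field $f_\theta$, the damped dynamics read
\[
\frac{\mathrm{d}h(t)}{\mathrm{d}t} \;=\; f_\theta\big(h(t)\big)\,\frac{\mathrm{d}x(t)}{\mathrm{d}t} \;-\; \gamma\, h(t), \qquad \gamma > 0,\quad t \in [0, T].
\]
Under this form, $\tfrac{1}{2}\tfrac{\mathrm{d}}{\mathrm{d}t}\|h\|^2 \le \|h\|\,\big\|f_\theta(h)\,\dot{x}\big\| - \gamma\,\|h\|^2$, which is negative once $\|h\|$ exceeds $\sup\|f_\theta(h)\,\dot{x}\|/\gamma$, so the hidden state stays bounded even as the horizon $T$ grows; enlarging $T$ then increases NFE (effective depth) without the blow-up that an undamped vector field would exhibit.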