Understanding and quantifying chaos from data remains challenging. We present a data-driven method for estimating the largest Lyapunov exponent (LLE) from one-dimensional chaotic time series using machine learning. A predictor is trained to produce out-of-sample, multi-horizon forecasts; the LLE is then inferred from the exponential growth of the geometrically averaged forecast error (GMAE) across the horizon, which serves as a proxy for trajectory divergence. We validate the approach on four canonical 1D maps (logistic, sine, cubic, and Chebyshev), achieving R2pos > 0.99 against reference LLE curves with series as short as M = 450. Among the predictors tested, KNN yields the closest fits, KNN-R performs comparably, and random forests (RF) show larger deviations. By design the estimator targets positive exponents: in periodic or stable regimes it returns values indistinguishable from zero. Noise robustness is assessed by adding zero-mean white measurement noise and summarizing performance against the average SNR over parameter sweeps: accuracy saturates for SNRm > 30 dB and degrades sharply below 27 dB, a conservative sensor-level benchmark. The method is simple, computationally efficient, and model-agnostic, requiring only stationarity and the presence of a dominant positive exponent. It offers a practical route to LLE estimation in experimental settings where only scalar time-series measurements are available; extensions to higher-dimensional and irregularly sampled data are left for future work.
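The core idea, fit the slope of the log of the geometrically averaged multi-horizon forecast error, can be illustrated on the fully chaotic logistic map (r = 4, reference LLE = ln 2 ≈ 0.693). The sketch below is not the paper's exact pipeline; the KNN settings, train/test split, horizon range, and fit window are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Scalar time series from the fully chaotic logistic map.
rng = np.random.default_rng(0)
M = 450
x = np.empty(M)
x[0] = rng.uniform(0.1, 0.9)
for t in range(M - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Train a one-step KNN predictor on (x_t -> x_{t+1}).
split = 300
model = KNeighborsRegressor(n_neighbors=4).fit(x[:split - 1, None], x[1:split])

# Out-of-sample multi-horizon forecasts: iterate the learned map from
# each test point and record log|error| at horizons h = 1..H.
H = 8
starts = np.arange(split, M - H)
log_err = np.zeros((len(starts), H))
for i, s in enumerate(starts):
    pred = x[s]
    for h in range(H):
        pred = model.predict([[pred]])[0]
        log_err[i, h] = np.log(abs(pred - x[s + 1 + h]) + 1e-12)

# log(GMAE) per horizon = mean of log errors across test points; while
# errors are still small it grows roughly linearly with slope ≈ LLE.
log_gmae = log_err.mean(axis=0)
h_fit = np.arange(1, 4)  # fit only the early, exponential-growth horizons
lle_hat = np.polyfit(h_fit, log_gmae[:3], 1)[0]
print(f"estimated LLE ≈ {lle_hat:.2f} (reference ln 2 ≈ 0.69)")
```

The fit window matters: at long horizons the error saturates at the attractor's diameter and the growth curve flattens, so only the early horizons carry the Lyapunov slope.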