This paper develops a model-based framework for continuous-time policy evaluation (CTPE) in reinforcement learning, incorporating both Brownian and L\'evy noise to model stochastic dynamics influenced by rare and extreme events. Our approach formulates the policy evaluation problem as solving a partial integro-differential equation (PIDE) for the value function with unknown coefficients. A key challenge in this setting is accurately recovering the unknown coefficients of the stochastic dynamics, particularly when they are driven by L\'evy processes with heavy-tailed effects. To address this, we propose a robust numerical approach that effectively handles both unbiased and censored trajectory datasets. This method combines maximum likelihood estimation with an iterative tail-correction mechanism, improving the stability and accuracy of coefficient recovery. Additionally, we establish a theoretical bound on the policy evaluation error in terms of the coefficient recovery error. Through numerical experiments, we demonstrate the effectiveness and robustness of our method in recovering heavy-tailed L\'evy dynamics and verify the theoretical error analysis for policy evaluation.