We present a random measure approach for modeling exploration, i.e., the execution of measure-valued controls, in continuous-time reinforcement learning (RL) with controlled diffusion and jumps. First, we consider the case in which the randomized control is sampled on a discrete-time grid and reformulate the resulting stochastic differential equation (SDE) as an equation driven by suitable random measures. The construction of these random measures uses the Brownian motion and the Poisson random measure (the sources of noise in the original model dynamics) as well as the additional random variables sampled on the grid to execute the control. Then, we prove a limit theorem for these random measures as the mesh size of the sampling grid goes to zero, which leads to the grid-sampling limit SDE jointly driven by white noise random measures and a Poisson random measure. We also argue that the grid-sampling limit SDE can substitute for the exploratory SDE and the sample SDE of the recent continuous-time RL literature, i.e., it can be applied both to the theoretical analysis of exploratory control problems and to the derivation of learning algorithms.
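The grid-sampling execution of a randomized control can be illustrated with a minimal Euler–Maruyama sketch (jumps omitted for brevity). This is only an assumption-laden toy: the drift `b`, diffusion coefficient `sigma`, and the Gaussian randomized control in the usage example are illustrative choices, not the paper's model; the point is merely that the action is redrawn only on the coarser sampling grid and held constant in between.

```python
import numpy as np

def simulate_grid_sampled_sde(x0, T, n_steps, grid_every,
                              b, sigma, sample_action, rng):
    # Euler-Maruyama for dX_t = b(X_t, a_t) dt + sigma(X_t, a_t) dW_t,
    # where the action a_t is drawn from the randomized control only on
    # the sampling grid {0, grid_every*dt, 2*grid_every*dt, ...} and is
    # frozen between consecutive grid points.
    dt = T / n_steps
    x = x0
    path = [x0]
    a = sample_action(x, rng)          # initial draw at t = 0
    for k in range(n_steps):
        if k % grid_every == 0:
            a = sample_action(x, rng)  # resample on the grid only
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + b(x, a) * dt + sigma(x, a) * dw
        path.append(x)
    return np.array(path)

# Toy usage (hypothetical dynamics): constant diffusion, drift equal to
# the action, and a Gaussian randomized control centred at -x.
rng = np.random.default_rng(0)
path = simulate_grid_sampled_sde(
    x0=1.0, T=1.0, n_steps=1000, grid_every=10,
    b=lambda x, a: a,
    sigma=lambda x, a: 0.2,
    sample_action=lambda x, rng: rng.normal(-x, 0.5),
    rng=rng,
)
```

Shrinking `grid_every` (i.e., the mesh size of the sampling grid) is the regime in which the paper's limit theorem applies.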