We study reinforcement learning (RL) for a class of continuous-time linear-quadratic (LQ) control problems for diffusions, where the volatility of the state process depends on both the state and control variables. We apply a model-free approach that relies neither on knowledge of the model parameters nor on their estimation, and devise an actor-critic algorithm to learn the optimal policy parameter directly. Our main contributions are the introduction of a novel exploration schedule and a regret analysis of the proposed algorithm. We establish the convergence rate of the policy parameter to the optimal one, and prove that the algorithm achieves a regret bound of $O(N^{\frac{3}{4}})$ up to a logarithmic factor. We conduct a simulation study to validate the theoretical results and demonstrate the effectiveness and reliability of the proposed algorithm. We also carry out numerical comparisons between our method and recent model-based stochastic LQ RL methods adapted to the state- and control-dependent volatility setting, and the results show that our method performs better in terms of regret.
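To make the setting concrete, the following is a minimal Python sketch of a scalar LQ diffusion whose volatility depends on both the state and the control, together with a simplified model-free policy-search loop over learning episodes (here $N$ is read as the number of episodes). Everything specific in the sketch is an illustrative assumption rather than the paper's method: the coefficients `A, B, C, D`, the cost weights `Q, R`, the polynomial decay rates of the exploration noise and step size, and the two-point zeroth-order gradient estimator are stand-ins for, not reproductions of, the paper's actor-critic updates and its novel exploration schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar LQ model (assumed for this sketch, not from the paper):
#   dX_t = (A X_t + B u_t) dt + (C X_t + D u_t) dW_t
# The volatility term depends on BOTH the state and the control (C, D != 0).
A, B, C, D = -1.0, 1.0, 0.2, 0.3   # hypothetical dynamics coefficients
Q, R = 1.0, 1.0                     # running-cost weights: Q X_t^2 + R u_t^2
T, dt = 1.0, 0.01                   # horizon and Euler-Maruyama step size
n_steps = int(T / dt)

def episode_cost(theta, sigma_explore):
    """Simulate one episode under the linear policy u = theta * X plus
    Gaussian exploration noise, and return the accumulated cost.
    The learner observes only states and costs (model-free)."""
    x, cost = 1.0, 0.0
    for _ in range(n_steps):
        u = theta * x + sigma_explore * rng.standard_normal()
        cost += (Q * x**2 + R * u**2) * dt
        drift = (A * x + B * u) * dt
        noise = (C * x + D * u) * np.sqrt(dt) * rng.standard_normal()
        x += drift + noise
    return cost

theta, lr = 0.0, 0.05
for n in range(1, 2001):
    # Assumed polynomial decay of the exploration noise; the paper's
    # exploration schedule is a novel design and is not reproduced here.
    sigma = n ** (-0.25)
    # Two-point zeroth-order gradient estimate of the expected episode cost;
    # a stand-in for the paper's actor-critic policy update.
    eps = 0.1
    grad = (episode_cost(theta + eps, sigma)
            - episode_cost(theta - eps, sigma)) / (2 * eps)
    theta -= (lr / np.sqrt(n)) * grad

print(f"learned policy parameter theta ~ {theta:.3f}")
```

In this toy loop, the decaying exploration noise $\sigma_n = n^{-1/4}$ and step size proportional to $n^{-1/2}$ merely illustrate how a schedule trades off exploration against exploitation across episodes; the actual schedule, the actor-critic updates, and the regret analysis in the paper are considerably more delicate.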