Optimizing static risk-averse objectives in Markov decision processes is difficult because they do not admit the standard dynamic programming equations that underpin most Reinforcement Learning (RL) algorithms. Dynamic programming decompositions that augment the state space with discrete risk levels have recently gained popularity in the RL community. Prior work has shown that these decompositions are optimal when the risk level is discretized sufficiently finely. In contrast, we show that these popular decompositions for Conditional Value-at-Risk (CVaR) and Entropic Value-at-Risk (EVaR) are inherently suboptimal regardless of the discretization level. In particular, we show that a saddle point property assumed to hold in prior literature may be violated. A decomposition does hold for Value-at-Risk (VaR), however, and our proof demonstrates how this risk measure differs from CVaR and EVaR. Our findings are significant because risk-averse algorithms are deployed in high-stakes environments, making their correctness critical.
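For context, the three risk measures named above admit the following standard textbook definitions, stated here as a minimal sketch under the loss convention with tail level $\alpha \in (0,1)$; these forms are not reproduced from this paper, and sign and level conventions vary across the literature:
\begin{align*}
\mathrm{VaR}_\alpha(X)  &= \inf\{\, t \in \mathbb{R} : \Pr[X \le t] \ge 1-\alpha \,\}, \\
\mathrm{CVaR}_\alpha(X) &= \inf_{t \in \mathbb{R}} \Big\{\, t + \tfrac{1}{\alpha}\, \mathbb{E}\big[(X-t)_+\big] \Big\}, \\
\mathrm{EVaR}_\alpha(X) &= \inf_{\beta > 0} \tfrac{1}{\beta} \log\!\Big( \mathbb{E}\big[e^{\beta X}\big] / \alpha \Big),
\end{align*}
with the well-known ordering $\mathrm{VaR}_\alpha(X) \le \mathrm{CVaR}_\alpha(X) \le \mathrm{EVaR}_\alpha(X)$. The variational (infimum) forms of CVaR and EVaR are what give rise to the saddle point arguments discussed above.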