Optimal decision-making presents a significant challenge for autonomous systems operating in uncertain, stochastic, and time-varying environments. Environmental variability over time can substantially alter the system's optimal decision-making strategy for mission completion. To model such environments, our work combines the existing notion of Time-Varying Markov Decision Processes (TVMDP) with partial observability and introduces Time-Varying Partially Observable Markov Decision Processes (TV-POMDP). We propose a two-pronged approach to accurately estimate and plan within the TV-POMDP: 1) Memory Prioritized State Estimation (MPSE), which leverages weighted memory to provide more accurate time-varying transition estimates; and 2) an MPSE-integrated planning strategy that optimizes long-term rewards while accounting for temporal constraints. We validate the proposed framework and algorithms using simulations and hardware, with robots exploring partially observable, time-varying environments. Our results demonstrate superior performance over standard methods, highlighting the framework's effectiveness in stochastic, uncertain, time-varying domains.
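To make the weighted-memory idea behind MPSE concrete, the sketch below shows one plausible reading under the assumption of exponential recency weighting: remembered transitions contribute to the estimate of P(s' | s, a) in proportion to how recently they were observed, so the estimate tracks a time-varying transition function rather than a uniform average over all history. The names `Experience`, `estimate_transition`, and the decay constant `DECAY_RATE` are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of recency-weighted transition estimation in the spirit of
# MPSE. Assumes exponential decay of memory weights; all identifiers here are
# hypothetical, not the paper's API.
import math
from collections import namedtuple
from typing import Dict, List

# One remembered transition: (state, action, next_state) observed at time `time`.
Experience = namedtuple("Experience", ["state", "action", "next_state", "time"])

DECAY_RATE = 0.1  # hypothetical decay constant; larger values forget faster


def estimate_transition(
    memory: List[Experience],
    state: int,
    action: int,
    now: float,
) -> Dict[int, float]:
    """Estimate P(s' | s, a) at time `now` from recency-weighted memory."""
    weights: Dict[int, float] = {}
    for exp in memory:
        if exp.state == state and exp.action == action:
            # Prioritize recent experiences with an exponentially decayed weight.
            w = math.exp(-DECAY_RATE * (now - exp.time))
            weights[exp.next_state] = weights.get(exp.next_state, 0.0) + w
    total = sum(weights.values())
    if total == 0.0:
        return {}  # no relevant memory; in practice fall back to a prior
    return {s_next: w / total for s_next, w in weights.items()}
```

Under this weighting, an environment whose dynamics drift over time yields estimates dominated by recent observations, which is the property the abstract attributes to MPSE; the specific decay schedule and how the estimates feed the planner are design choices the paper itself would determine.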