In many practical applications, decision-making processes must balance the cost of acquiring information against the benefit it provides. Traditional control systems often assume full observability, an assumption that is unrealistic when observations are expensive. We tackle the challenge of simultaneously learning observation and control strategies in such cost-sensitive environments by introducing the Observation-Constrained Markov Decision Process (OCMDP), in which the policy influences the observability of the true state. To manage the complexity arising from the combined observation and control actions, we develop an iterative, model-free deep reinforcement learning algorithm that separates the sensing and control components of the policy. This decomposition enables efficient learning in the expanded action space by deciding when and what to observe while determining optimal control actions, without requiring knowledge of the environment's dynamics. We validate our approach on a simulated diagnostic task and a realistic healthcare environment using HeartPole. Across both scenarios, the experimental results show that our model achieves a substantial reduction in average observation cost and significantly outperforms baseline methods in efficiency.
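The core idea of the decomposition can be illustrated with a minimal sketch: the agent's action splits into a sensing decision (whether to pay for an observation) and a control decision computed from the agent's current belief about the state. All names here (`sensing_policy`, `control_policy`, `OBS_COST`, the belief-update rule) are illustrative assumptions, not the paper's actual algorithm or hyperparameters.

```python
# Illustrative sketch of an OCMDP-style step with a decomposed policy.
# The sensing component decides whether to observe (incurring a cost);
# the control component acts on the resulting belief state.

OBS_COST = 0.1  # assumed per-observation cost (hypothetical value)

def sensing_policy(belief):
    # Observe when belief uncertainty is high (simple illustrative rule;
    # in the paper this component is learned).
    return belief["uncertainty"] > 0.5

def control_policy(belief):
    # Choose a control action from the current state estimate
    # (stand-in for a learned controller).
    return 1 if belief["state_estimate"] > 0 else 0

def ocmdp_step(env_state, belief):
    observe = sensing_policy(belief)
    cost = OBS_COST if observe else 0.0
    if observe:
        # Observation reveals the true state and resets uncertainty.
        belief = {"state_estimate": env_state, "uncertainty": 0.0}
    else:
        # Without an observation, uncertainty about the state grows.
        belief = {"state_estimate": belief["state_estimate"],
                  "uncertainty": min(1.0, belief["uncertainty"] + 0.2)}
    action = control_policy(belief)
    return action, belief, cost
```

Separating the two components keeps each sub-policy's action space small: the sensing head only reasons about whether observing is worth its cost, and the control head only reasons about acting under the resulting belief.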