In the last decade, the free energy principle (FEP) and active inference (AIF) have achieved many successes in connecting conceptual models of learning and cognition to mathematical models of perception and action. This effort is driven by a multidisciplinary interest in understanding aspects of self-organizing complex adaptive systems, including elements of agency. Various reinforcement learning (RL) models performing active inference have been proposed and trained on standard RL tasks using deep neural networks. Recent work has focused on improving such agents' performance in complex environments by incorporating the latest machine learning techniques. In this paper, we take an alternative approach. Within the constraints imposed by the FEP and AIF, we attempt to model agents in an interpretable way, without deep neural networks, by introducing Free Energy Projective Simulation (FEPS). Using internal rewards only, FEPS agents build a representation of the partially observable environments with which they interact. Following AIF, the policy to achieve a given task is derived from this world model by minimizing the expected free energy. Leveraging the interpretability of the model, we introduce techniques to handle long-term goals and to reduce prediction errors caused by erroneous hidden-state estimation. We test the FEPS model on two RL environments inspired by behavioral biology: a timed response task and a navigation task in a partially observable grid. Our results show that FEPS agents fully resolve the ambiguity of both environments by appropriately contextualizing their observations based on prediction accuracy alone. In addition, they flexibly infer optimal policies for any target observation in the environment.
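The policy selection described above, minimizing expected free energy over a learned world model, can be sketched for a one-step discrete generative model. This is a toy illustration of the general AIF quantity (risk plus ambiguity), not the paper's FEPS implementation; the matrices, names, and two-state setup are assumptions for the example.

```python
import numpy as np

EPS = 1e-16  # guard against log(0)

def expected_free_energy(q_s, A, C):
    """Expected free energy of a policy for a one-step discrete model.

    q_s : predicted hidden-state distribution under the policy, shape (S,)
    A   : likelihood matrix P(o|s), shape (O, S), columns sum to 1
    C   : preferred (target) observation distribution, shape (O,)

    G = KL[Q(o) || C]  (risk)  +  E_{Q(s)}[H(P(o|s))]  (ambiguity)
    """
    q_o = A @ q_s  # predicted observation distribution
    risk = np.sum(q_o * (np.log(q_o + EPS) - np.log(C + EPS)))
    # entropy of the likelihood for each hidden state
    H_A = -np.sum(A * np.log(A + EPS), axis=0)
    ambiguity = q_s @ H_A
    return risk + ambiguity

# Toy model: two hidden states, two observations, slightly noisy likelihood.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
C = np.array([0.9, 0.1])          # the agent prefers observation 0

# Two candidate policies, summarized by the states they are predicted to reach.
q_s_good = np.array([1.0, 0.0])   # leads to the state emitting the preferred observation
q_s_bad = np.array([0.0, 1.0])    # leads to the other state

G_good = expected_free_energy(q_s_good, A, C)
G_bad = expected_free_energy(q_s_bad, A, C)
best = "good" if G_good < G_bad else "bad"
```

The policy reaching the state that emits the preferred observation yields the lower expected free energy and would be selected; in a full agent, `q_s` for each policy would itself be rolled out from the learned transition model.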