Deep reinforcement learning agents are often misaligned because they over-exploit early reward signals. Recently, several symbolic approaches have addressed these challenges by encoding sparse objectives together with aligned plans. However, purely symbolic architectures are difficult to scale and hard to apply to continuous settings. Hence, inspired by how humans acquire new skills, we propose a hybrid approach: a two-stage framework that injects symbolic structure into neural reinforcement learning agents without sacrificing the expressivity of deep policies. Our method, called Hybrid Hierarchical RL (H^2RL), introduces a logical option-based pretraining strategy that steers the learning policy away from short-term reward loops and toward goal-directed behavior, while still allowing the final policy to be refined through standard environment interaction. Empirically, we show that this approach consistently improves long-horizon decision-making and yields agents that outperform strong neural, symbolic, and neuro-symbolic baselines.
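To make the two-stage structure concrete, here is a minimal, self-contained sketch of the pretrain-then-refine scheme the abstract describes. It is an illustration under stated assumptions, not the paper's implementation: it uses a tabular toy chain environment in place of the deep policies H^2RL actually trains, and the names `OPTIONS`, `greedy`, and `update` are hypothetical. Stage 1 pretrains the policy by rewarding the termination predicates of logical options (symbolic subgoals), and stage 2 refines the same policy with standard Q-learning on the sparse environment reward.

```python
import random

# Toy chain MDP: start at state 0; a sparse reward is given only at the goal N-1.
N = 10

def step(state, action):
    """action 0 = left, 1 = right; returns (next_state, reward, done)."""
    nxt = max(0, min(N - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N - 1 else 0.0), nxt == N - 1

# Logical options as (initiation predicate, termination predicate) pairs.
# Each encodes a symbolic subgoal: first "reach the midpoint", then
# "reach the goal", steering exploration toward goal-directed behavior.
OPTIONS = [
    (lambda s: s < N // 2, lambda s: s >= N // 2),
    (lambda s: s >= N // 2, lambda s: s == N - 1),
]

Q = {(s, a): 0.0 for s in range(N) for a in (0, 1)}
ALPHA, GAMMA, EPS = 0.1, 0.99, 0.2

def greedy(s):
    """Greedy action with random tie-breaking."""
    best = max(Q[(s, 0)], Q[(s, 1)])
    return random.choice([a for a in (0, 1) if Q[(s, a)] == best])

def update(s, a, reward, s2, terminal):
    """One Q-learning backup; the bootstrap term is zeroed at termination."""
    target = reward + (0.0 if terminal else GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]))
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

# Stage 1: logical-option pretraining. A pseudo-reward of +1 on option
# termination replaces the sparse environment reward.
for _ in range(300):
    s = 0
    for init, term in OPTIONS:
        if not init(s):
            continue
        for _ in range(200):          # step cap per option
            if term(s):
                break
            a = random.choice((0, 1)) if random.random() < EPS else greedy(s)
            s2, _, _ = step(s, a)
            update(s, a, 1.0 if term(s2) else 0.0, s2, term(s2))
            s = s2

# Stage 2: refine the same value table via standard environment
# interaction, now using the true (sparse) reward.
for _ in range(300):
    s, done = 0, False
    for _ in range(200):              # step cap per episode
        if done:
            break
        a = random.choice((0, 1)) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        update(s, a, r, s2, done)
        s = s2

print("greedy action per state:", [greedy(s) for s in range(N)])
```

In the full method the pretrained policy would be a deep network and the options would be derived from logical specifications; what the sketch preserves is the structure the abstract emphasizes, namely symbolic-subgoal shaping first, then refinement on the raw reward.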