Deep reinforcement learning (DRL) has driven major advances in autonomous control. Yet even though updates are modulated by temporal-difference (TD) error, standard Deep Q-Network (DQN) agents rely on fixed learning rates and uniform update scaling. This rigidity destabilizes convergence, especially in sparse-reward settings where feedback is infrequent. We introduce Deep Intrinsic Surprise-Regularized Control (DISRC), a biologically inspired augmentation to DQN that dynamically scales Q-updates based on latent-space surprise. DISRC encodes states via a LayerNorm-based encoder and computes a deviation-based surprise score relative to a moving latent setpoint. Each update is then scaled in proportion to both TD error and surprise intensity, promoting plasticity during early exploration and stability as familiarity increases. We evaluate DISRC on two sparse-reward MiniGrid environments, MiniGrid-DoorKey-8x8 and MiniGrid-LavaCrossingS9N1, under settings identical to those of a vanilla DQN baseline. In DoorKey, DISRC reached the first successful episode (reward > 0.8) 33% faster than the vanilla DQN baseline (79 vs. 118 episodes), with lower reward standard deviation (0.25 vs. 0.34) and higher reward area under the curve (AUC: 596.42 vs. 534.90). These metrics reflect faster, more consistent learning, which is critical in sparse, delayed-reward settings. In LavaCrossing, DISRC achieved a higher final reward (0.95 vs. 0.93) and the highest AUC of all agents (957.04), though it converged more gradually. These preliminary results establish DISRC as a novel mechanism for regulating learning intensity in off-policy agents, improving both efficiency and stability in sparse-reward domains. By treating surprise as an intrinsic learning signal, DISRC enables agents to modulate updates based on expectation violations, enhancing decision quality where conventional value-based methods fall short.
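The surprise-modulated update described above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the encoder (here a fixed random projection with layer normalization), the exponential-moving-average setpoint, the `tanh` mapping from surprise to update gain, and all hyperparameters are assumptions chosen to make the mechanism concrete, with a scalar Q-value standing in for a full network.

```python
import numpy as np

rng = np.random.default_rng(0)

class SurpriseModulator:
    """Toy sketch of a DISRC-style surprise signal (hypothetical design):
    a fixed random linear encoder with layer normalization, plus an
    exponential moving average of latents serving as the setpoint."""

    def __init__(self, state_dim, latent_dim=16, momentum=0.99, eps=1e-5):
        # Fixed random projection stands in for a learned encoder.
        self.W = rng.standard_normal((latent_dim, state_dim)) / np.sqrt(state_dim)
        self.setpoint = np.zeros(latent_dim)  # moving latent setpoint
        self.momentum = momentum
        self.eps = eps

    def encode(self, state):
        z = self.W @ state
        # LayerNorm-style normalization across latent features.
        return (z - z.mean()) / (z.std() + self.eps)

    def surprise(self, state):
        z = self.encode(state)
        # Deviation-based surprise: distance from the moving setpoint.
        s = np.linalg.norm(z - self.setpoint)
        # Drift the setpoint toward recent latents (familiarity grows).
        self.setpoint = self.momentum * self.setpoint + (1 - self.momentum) * z
        return s

# Scalar Q-learning toy: scale each TD update by surprise intensity.
mod = SurpriseModulator(state_dim=8)
alpha = 0.1          # base learning rate
q, target = 0.0, 1.0
for _ in range(50):
    state = rng.standard_normal(8)
    td_error = target - q
    # Gain in [1, 2]: larger updates while latents are still surprising.
    gain = 1.0 + np.tanh(mod.surprise(state))
    q += alpha * gain * td_error
```

Because the effective step size `alpha * gain` stays below 1, the scalar estimate approaches the target monotonically while early, high-surprise states receive proportionally larger updates, which is the plasticity-then-stability behavior the abstract describes.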