Reinforcement learning (RL) has shown considerable potential in autonomous driving (AD), yet its vulnerability to perturbations remains a critical barrier to real-world deployment. As a primary countermeasure, adversarial training improves policy robustness by training the AD agent in the presence of an adversary that deliberately introduces perturbations. Existing approaches typically model this interaction as a zero-sum game with continuous attacks. However, such designs overlook the inherent asymmetry between the agent and the adversary and thus fail to reflect the sparsity of safety-critical risks, rendering the achieved robustness inadequate for practical AD scenarios. To address these limitations, we introduce criticality-aware robust RL (CARRL), a novel adversarial training approach for handling sparse, safety-critical risks in autonomous driving. CARRL consists of two interacting components: a risk exposure adversary (REA) and a risk-targeted robust agent (RTRA). We model the interaction between the REA and the RTRA as a general-sum game, allowing the REA to focus on exposing safety-critical failures (e.g., collisions) while the RTRA learns to balance safety with driving efficiency. The REA employs a decoupled optimization mechanism to better identify and exploit sparse safety-critical moments under a constrained budget. However, such focused attacks inevitably yield only scarce adversarial data. The RTRA copes with this scarcity by jointly leveraging benign and adversarial experiences through a dual replay buffer, and it enforces policy consistency under perturbations to stabilize behavior. Experimental results demonstrate that our approach reduces the collision rate by at least 22.66\% across all cases compared with state-of-the-art baseline methods.
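To make the dual-buffer idea concrete, the sketch below shows one plausible way such a mechanism could be organized: benign and adversarial transitions are kept in separate pools and each training batch mixes both at a fixed ratio. This is a minimal illustration, not the paper's implementation; the class name \texttt{DualReplayBuffer}, the pool capacities, and the mixing ratio \texttt{adv\_fraction} are assumed for exposition.

\begin{verbatim}
import random
from collections import deque

class DualReplayBuffer:
    """Illustrative dual replay buffer: benign and adversarial
    transitions live in separate pools so that scarce adversarial
    data is not crowded out, and each batch mixes both pools at a
    fixed ratio. Capacities and the mixing ratio are assumed values,
    not details reported in the paper."""

    def __init__(self, capacity=100000, adv_capacity=20000,
                 adv_fraction=0.25):
        self.benign = deque(maxlen=capacity)           # unattacked steps
        self.adversarial = deque(maxlen=adv_capacity)  # steps under REA attack
        self.adv_fraction = adv_fraction  # batch share drawn from adversarial pool

    def add(self, transition, attacked):
        # Route each transition to the pool matching how it was collected.
        (self.adversarial if attacked else self.benign).append(transition)

    def sample(self, batch_size):
        # Draw adversarial samples first; fall back to benign data while
        # adversarial experience is still scarce early in training.
        n_adv = min(int(batch_size * self.adv_fraction),
                    len(self.adversarial))
        n_ben = min(batch_size - n_adv, len(self.benign))
        return (random.sample(self.adversarial, n_adv)
                + random.sample(self.benign, n_ben))
\end{verbatim}

Keeping the rare adversarial transitions in their own pool prevents them from being overwritten by the far more frequent benign experience, while the fixed mixing ratio lets the agent learn driving efficiency from benign data without losing exposure to the sparse safety-critical moments.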