The Transformer, a highly expressive architecture for sequence modeling, has recently been adapted to sequential decision-making, most notably through the Decision Transformer (DT), which learns policies by conditioning on desired returns. Yet the adversarial robustness of reinforcement learning methods based on sequence modeling remains largely unexplored. Here we introduce the Conservative Adversarially Robust Decision Transformer (CART), to our knowledge the first framework designed to enhance the robustness of DT in adversarial stochastic games. We formulate the interaction between the protagonist and the adversary at each stage as a stage game, whose payoff is defined as the expected maximum value over subsequent states, thereby explicitly incorporating stochastic state transitions. By conditioning Transformer policies on the NashQ value derived from these stage games, CART generates policies that are simultaneously less exploitable (adversarially robust) and conservative under transition uncertainty. Empirically, CART achieves more accurate minimax value estimation and consistently attains superior worst-case returns across a range of adversarial stochastic games.
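To make the stage-game construction concrete, the following is a minimal sketch of computing the minimax (Nash) value of a single zero-sum stage game via the standard linear-programming reduction. The function name `stage_game_value` and the matrix-game formulation are illustrative assumptions, not CART's actual implementation; in CART the payoff entries would come from the expected maximum value over subsequent states.

```python
import numpy as np
from scipy.optimize import linprog

def stage_game_value(payoff):
    """Minimax value and protagonist mixed strategy of a zero-sum stage game.

    payoff[i, j] = protagonist's payoff for action i against adversary action j.
    (Illustrative sketch: in CART these entries would be expected maxima of
    successor-state values, not raw rewards.)
    """
    m, n = payoff.shape
    # Standard LP trick: shift payoffs to be strictly positive so the
    # game value is positive and the reduction below is valid.
    shift = payoff.min() - 1.0
    A = payoff - shift
    # Solve: minimize 1'x  subject to  A^T x >= 1, x >= 0.
    # Then value = 1 / sum(x) + shift, and strategy = x / sum(x).
    res = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m)
    total = res.x.sum()
    value = 1.0 / total + shift
    strategy = res.x / total
    return value, strategy

# Example: matching pennies has value 0 and uniform mixed strategy.
v, p = stage_game_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

The returned value is what the abstract calls the NashQ value of the stage; conditioning the Transformer on this quantity (rather than an observed return-to-go) is what yields the less exploitable, conservative behavior.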