False data injection attacks (FDIAs) on smart inverters are a growing concern tied to the increase in renewable energy production. While data-based FDIA detection methods are under active development, we show that they remain vulnerable to impactful yet stealthy adversarial examples that can be crafted using reinforcement learning (RL). We propose to include such adversarial examples in the training procedure of data-based detectors via a continual adversarial RL (CARL) approach. This makes it possible to pinpoint the deficiencies of data-based detection, thereby offering explainability during its incremental improvement. We show that a continual learning implementation is subject to catastrophic forgetting, and further show that forgetting can be addressed by employing a joint training strategy on all generated FDIA scenarios.
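The forgetting-versus-joint-training contrast can be illustrated with a minimal, hypothetical sketch. This is not the paper's model: the detector is a toy logistic-regression classifier, the two "FDIA scenarios" are synthetic Gaussian clusters standing in for attacks an RL agent might discover in successive rounds, and all hyperparameters (learning rate, weight decay) are illustrative assumptions.

```python
import numpy as np

def train_detector(X, y, w=None, epochs=300, lr=0.5, lam=0.05):
    """Gradient descent on L2-regularized logistic loss; a toy
    stand-in for a data-based FDIA detector (label 1 = attack)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))              # sigmoid scores
        w -= lr * (Xb.T @ (p - y) / len(y) + lam * w)  # gradient + weight decay
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(((Xb @ w > 0) == y).mean())

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.3, size=(200, 2))           # benign measurements
# Two synthetic attack scenarios in different regions of measurement space,
# mimicking adversarial examples found in successive CARL rounds.
scenario_a = rng.normal([2.0, 0.0], 0.3, size=(100, 2))
scenario_b = rng.normal([0.0, 2.0], 0.3, size=(100, 2))
zeros, ones = np.zeros(200), np.ones(100)

# Sequential continual learning: fine-tune on each new scenario only.
w = train_detector(np.vstack([normal, scenario_a]), np.r_[zeros, ones])
w = train_detector(np.vstack([normal, scenario_b]), np.r_[zeros, ones], w=w)
seq_acc_a = accuracy(w, scenario_a, np.ones(100))      # detection of older scenario A

# Joint training: retrain on all FDIA scenarios generated so far.
w_joint = train_detector(np.vstack([normal, scenario_a, scenario_b]),
                         np.r_[zeros, ones, ones])
joint_acc_a = accuracy(w_joint, scenario_a, np.ones(100))
```

In this toy setup, the weight direction that detects scenario A carries no gradient signal during fine-tuning on scenario B, so regularization shrinks it away and `seq_acc_a` collapses, a simple proxy for catastrophic forgetting; joint training keeps every generated scenario in the loss, so `joint_acc_a` stays high.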