Deep Reinforcement Learning (DRL) presents a promising avenue for optimizing Energy Storage System (ESS) dispatch in distribution networks. This paper introduces RL-ADN, an innovative open-source library specifically designed for solving the optimal ESS dispatch problem in active distribution networks. RL-ADN offers unparalleled flexibility in modeling distribution networks and ESSs, accommodating a wide range of research goals. A standout feature of RL-ADN is its data augmentation module, based on Gaussian Mixture Model and Copula (GMC) functions, which raises the performance ceiling of DRL agents. Additionally, RL-ADN incorporates the Laurent power flow solver, significantly reducing the computational burden of power flow calculations during training without sacrificing accuracy. The effectiveness of RL-ADN is demonstrated on distribution networks of different sizes, showing marked improvements in the adaptability of DRL algorithms to ESS dispatch tasks. This improvement stems largely from the increased diversity of training scenarios. Furthermore, RL-ADN achieves a tenfold increase in computational efficiency during training, making it highly suitable for large-scale network applications. The library sets a new benchmark for DRL-based ESS dispatch in distribution networks and is poised to significantly advance DRL applications in distribution network operations. RL-ADN is available at: https://github.com/ShengrenHou/RL-ADN.
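The GMC-based augmentation named above combines a Gaussian mixture fitted in copula (latent normal) space with empirical marginals, so that sampled scenarios preserve both the marginal distributions and the dependence structure of the historical data. The following is a minimal sketch of that general idea, not RL-ADN's actual API; all names and the toy data are illustrative assumptions.

```python
# Hedged sketch of Gaussian-mixture-copula data augmentation (illustrative,
# NOT the RL-ADN implementation): fit a GMM in latent normal space, sample,
# and map back through each feature's empirical marginal.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "historical" data: 200 days of 3 correlated load features.
base = rng.normal(size=(200, 1))
data = np.hstack([base + rng.normal(scale=0.3, size=(200, 1)) for _ in range(3)])

# 1) Map each marginal to (0, 1) via its empirical CDF, then to standard normal.
ranks = stats.rankdata(data, axis=0) / (len(data) + 1)
z = stats.norm.ppf(ranks)

# 2) Fit a Gaussian mixture to capture the multimodal dependence structure.
gmm = GaussianMixture(n_components=2, random_state=0).fit(z)

# 3) Sample synthetic latent points and invert through each empirical marginal.
z_new, _ = gmm.sample(500)
u_new = stats.norm.cdf(z_new)
synthetic = np.column_stack(
    [np.quantile(data[:, j], u_new[:, j]) for j in range(data.shape[1])]
)
```

Under this sketch, `synthetic` is a 500-scenario augmented training set whose per-feature distributions and cross-feature correlations approximate those of `data`, which is the kind of scenario diversity the abstract credits for the improved DRL performance.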