Reducing operation and maintenance costs is a key objective for advanced reactors in general and microreactors in particular. To achieve this reduction, developing robust autonomous control algorithms is essential to ensure safe and efficient reactor operation. Recently, artificial intelligence and machine learning algorithms, specifically reinforcement learning (RL) algorithms, have seen rapidly increasing application to control problems, such as plasma control in fusion tokamaks and building energy management. In this work, we introduce the use of RL for intelligent control in nuclear microreactors. The RL agent is trained using proximal policy optimization (PPO) and advantage actor-critic (A2C), two state-of-the-art deep RL techniques, based on a high-fidelity simulation of a microreactor design inspired by the Westinghouse eVinci\textsuperscript{TM} design. We utilized a Serpent model to generate data on drum positions, core criticality, and core power distribution for training a feedforward neural network surrogate model. This surrogate model was then used to guide the PPO and A2C control policies in determining the optimal drum positions across various reactor burnup states, ensuring critical core conditions and symmetrical power distribution across all six core hextants. The results demonstrate the excellent performance of PPO in identifying optimal drum positions, achieving a hextant power tilt ratio of approximately 1.002 (within the limit of $<$ 1.02) and maintaining criticality within a 10 pcm range. A2C did not match the performance of PPO on these metrics across the burnup steps considered in the cycle. Additionally, the results highlight the capability of well-trained RL control policies to quickly identify control actions, suggesting a promising approach for enabling real-time autonomous control through digital twins.
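The control objective described above can be sketched as a reward over drum positions evaluated through a surrogate. The snippet below is a minimal illustration, not the paper's implementation: the `surrogate` function here is a toy analytic stand-in for the feedforward neural network trained on Serpent data, and the reward weights are hypothetical. A deep RL library (e.g., one providing PPO and A2C) would optimize a policy against such a reward.

```python
import numpy as np

def surrogate(drum_angles):
    """Toy stand-in for the trained surrogate: maps six control-drum
    angles (degrees) to a predicted k_eff and relative hextant powers.
    The paper's actual surrogate is a neural network fit to Serpent data."""
    worth = np.sin(np.radians(drum_angles) / 2.0)      # inserted worth per drum, 0..1
    k_eff = 0.98 + 0.04 * worth.mean()                 # rotation adds reactivity
    powers = 1.0 + 0.05 * (worth - worth.mean())       # asymmetry tilts hextant power
    return k_eff, powers

def reward(drum_angles, k_target=1.0):
    """Reward shaped after the paper's objectives: keep k_eff near critical
    and the hextant power tilt ratio (max / mean hextant power) near 1.
    The penalty weights are illustrative assumptions."""
    k_eff, powers = surrogate(np.asarray(drum_angles, dtype=float))
    tilt = powers.max() / powers.mean()
    dk_pcm = abs(k_eff - k_target) * 1e5               # reactivity error in pcm
    return -(dk_pcm / 100.0) - 10.0 * (tilt - 1.0)

# In this toy model, symmetric drums at 60 degrees make the core exactly
# critical with a tilt ratio of 1, so that configuration maximizes reward.
```

An RL agent interacting with this environment would propose drum angles, receive the reward, and converge toward configurations that are both critical and azimuthally balanced, which is the behavior the abstract reports for the trained PPO policy.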