A major goal of computational astrophysics is to simulate the Milky Way Galaxy with resolution reaching individual stars. Such simulations, however, scale poorly because of small-scale, short-timescale phenomena such as supernova explosions. We have developed a novel integration scheme for $N$-body/hydrodynamics simulations that works with machine learning: a surrogate model bypasses the short timesteps imposed by supernova explosions, thereby improving scalability. With this method, we reached 300 billion particles on 148,900 nodes, equivalent to 7,147,200 CPU cores, breaking through the billion-particle barrier faced by current state-of-the-art simulations. This resolution enables the first star-by-star galaxy simulation, which resolves individual stars in the Milky Way Galaxy. The performance scales beyond $10^4$ CPU cores, the upper limit of current state-of-the-art simulations, on A64FX and x86-64 processors and NVIDIA CUDA GPUs.
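The core idea, replacing the sub-cycled integration of a supernova blast with a single surrogate evaluation per global step, can be sketched in a toy form. Everything below is illustrative: the cooling law, function names, and step sizes are assumptions for the sketch, not the paper's actual scheme, and a real surrogate would be a trained machine-learning model rather than a closed-form expression.

```python
import numpy as np

# Toy model of why supernovae stall galaxy simulations: the blast forces
# timesteps far shorter than the global step, so one event costs many
# sub-iterations per affected particle.
def direct_substeps(energy, dt_global, dt_sn=1e-4):
    """Integrate a toy blast cooling law with tiny explicit substeps."""
    n_steps = int(round(dt_global / dt_sn))
    for _ in range(n_steps):
        energy *= (1.0 - dt_sn)  # illustrative cooling, not real physics
    return energy, n_steps

def surrogate_step(energy, dt_global):
    """Surrogate shortcut: jump to the state one global step later in a
    single evaluation. Here the 'model' is the closed-form limit of the
    toy law; the paper's scheme uses a machine-learned surrogate."""
    return energy * np.exp(-dt_global), 1

e_direct, n_direct = direct_substeps(1.0, 0.1)   # ~1000 substeps
e_surr, n_surr = surrogate_step(1.0, 0.1)        # 1 evaluation
print(n_direct, n_surr, abs(e_direct - e_surr))  # same state, far fewer steps
```

The point of the sketch is the cost asymmetry: the direct path needs on the order of a thousand substeps to cross one global step, while the surrogate crosses it in one evaluation with a small discrepancy, which is what lets the scheme keep a uniform global timestep and scale.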