Domain-specific languages (DSLs) increase programmer productivity and provide high performance. Their targeted abstractions allow scientists to express problems at a high level, providing rich details that optimizing compilers can exploit to target current- and next-generation supercomputers. The convenience and performance of DSLs come with significant development and maintenance costs. The siloed design of DSL compilers, and the resulting inability to benefit from shared infrastructure, cause uncertainties around longevity and the adoption of DSLs at scale. By tailoring the broadly adopted MLIR compiler framework to HPC, we bring the same synergies that the machine learning community already exploits across their DSLs (e.g., TensorFlow, PyTorch) to the finite-difference stencil HPC community. We introduce new HPC-specific abstractions for message passing targeting distributed stencil computations. We demonstrate the sharing of common components across three distinct HPC stencil-DSL compilers — Devito, PSyclone, and the Open Earth Compiler — showing that our framework generates high-performance executables based upon a shared compiler ecosystem.
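As a concrete illustration of the kind of computation these stencil DSLs express (this sketch is ours, not from any of the compilers above), a minimal explicit finite-difference update for the 1-D heat equation can be written in plain Python; the function name, array layout, and `alpha` parameter are purely illustrative:

```python
# Illustrative 1-D finite-difference stencil (heat equation, forward Euler).
# Each interior point is updated from its two neighbors; real stencil DSLs
# generate optimized, parallel, and distributed versions of loops like this.

def heat_step(u, alpha=0.1):
    """One time step: u_i += alpha * (u_{i-1} - 2*u_i + u_{i+1})."""
    new = list(u)
    for i in range(1, len(u) - 1):  # interior points; boundaries held fixed
        new[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    return new

u = [0.0, 0.0, 1.0, 0.0, 0.0]  # initial heat spike in the middle
u = heat_step(u)               # spike diffuses toward its neighbors
```

In a distributed setting, the array would be partitioned across ranks and the neighbor accesses at partition edges would require halo exchanges — the message-passing pattern the abstract's HPC-specific abstractions target.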