OpenACC lowers the barrier to GPU offloading, but writing high-performing pragmas remains complex, requiring deep expertise in memory hierarchies, data movement, and parallelization strategies. Large Language Models (LLMs) offer a promising route to automated parallel code generation, but naive prompting often yields syntactically incorrect directives, uncompilable code, or performance that fails to exceed CPU baselines. We present a systematic prompt-optimization approach that improves OpenACC pragma generation without the prohibitive computational cost of model post-training. Leveraging the GEPA (GEnetic-PAreto) framework, we iteratively evolve prompts through a reflective feedback loop that applies crossover and mutation to instructions, guided by expert-curated gold examples and by structured feedback derived from clause- and clause-parameter-level mismatches between the gold and predicted pragmas. In our evaluation on the PolyBench suite, programs annotated with pragmas generated from the optimized prompts compile at higher rates than those annotated using the simpler initial prompt, particularly for "nano"-scale models. Specifically, with optimized prompts, the compilation success rate for GPT-4.1 Nano rose from 66.7% to 93.3%, and for GPT-5 Nano from 86.7% to 100%, matching or surpassing their significantly larger, more expensive counterparts. Beyond compilation, the optimized prompts yielded a 21% increase in the number of programs achieving functional GPU speedups over CPU baselines. These results demonstrate that prompt optimization unlocks the potential of smaller, cheaper LLMs to write stable and effective GPU-offloading directives, establishing a cost-effective pathway to automated directive-based parallelization in HPC workflows.
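The clause- and clause-parameter-level mismatch feedback mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`split_clauses`, `clause_feedback`), the regex-based clause parsing, and the exact wording of the feedback messages are assumptions for exposition.

```python
import re

def split_clauses(pragma: str) -> dict:
    """Decompose an OpenACC pragma into {clause: parameter set}.

    Hypothetical helper: e.g. '#pragma acc parallel loop collapse(2) present(A,B)'
    maps to {'parallel': set(), 'loop': set(), 'collapse': {'2'}, 'present': {'A','B'}}.
    """
    clauses = {}
    # Clauses with parenthesized parameter lists, e.g. present(A,B)
    for name, params in re.findall(r"(\w+)\s*\(([^)]*)\)", pragma):
        clauses[name] = {p.strip() for p in params.split(",") if p.strip()}
    # Bare clauses without parameters, e.g. 'parallel', 'loop', 'independent'
    stripped = re.sub(r"\w+\s*\([^)]*\)", "", pragma).replace("#pragma", "")
    for tok in stripped.split():
        if tok != "acc":
            clauses.setdefault(tok, set())
    return clauses

def clause_feedback(gold: str, predicted: str) -> list:
    """Textual feedback on mismatches between gold and predicted pragmas."""
    g, p = split_clauses(gold), split_clauses(predicted)
    notes = []
    for c in sorted(set(g) - set(p)):
        notes.append(f"missing clause '{c}'")
    for c in sorted(set(p) - set(g)):
        notes.append(f"spurious clause '{c}'")
    for c in sorted(set(g) & set(p)):
        if g[c] != p[c]:
            notes.append(f"clause '{c}': expected {sorted(g[c])}, got {sorted(p[c])}")
    return notes

gold = "#pragma acc parallel loop collapse(2) present(A,B)"
pred = "#pragma acc parallel loop present(A)"
print(clause_feedback(gold, pred))
# → ["missing clause 'collapse'", "clause 'present': expected ['A', 'B'], got ['A']"]
```

Feedback of this structured form, rather than a scalar score alone, is what gives the reflective loop concrete directions for mutating the prompt instructions.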