Large language models (LLMs) have achieved notable performance in code synthesis; however, data-aware augmentation remains a limiting factor, typically handled via heuristic design or brute-force search. We introduce a performance-aware, closed-loop solution within the NNGPT ecosystem of projects that enables LLMs to autonomously engineer optimal transformations by internalizing empirical performance cues. We fine-tune LLMs with Low-Rank Adaptation (LoRA) on a novel repository of more than 6,000 empirically evaluated PyTorch augmentation functions, each annotated solely by downstream model accuracy. Training uses pairwise performance ordering (better vs. worse transformations), enabling alignment through empirical feedback without reinforcement learning, reward models, or symbolic objectives. This reduces the need for exhaustive search, requiring up to 600x fewer evaluated candidates than brute-force discovery while maintaining competitive peak accuracy and shifting generation from random synthesis to task-aligned design. Ablation studies show that structured Chain-of-Thought prompting introduces syntactic noise and degrades performance, whereas direct prompting ensures stable optimization in performance-critical code tasks. Qualitative and quantitative analyses demonstrate that the model internalizes semantic performance cues rather than memorizing syntax. These results show that LLMs can exhibit task-level reasoning through non-textual feedback loops, bypassing explicit symbolic rewards.
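To make the training signal concrete, below is a minimal sketch of the kind of record the abstract describes: a PyTorch augmentation function labeled only by the downstream accuracy it produced, with two such candidates paired into a better/worse ordering example. The function body, field names, and record layout are illustrative assumptions, not the paper's actual schema.

```python
# Illustrative sketch only; the repository's real format is not shown here.
import torch
import torchvision.transforms.functional as TF

def augment(img: torch.Tensor) -> torch.Tensor:
    """One candidate augmentation: mild rotation plus horizontal flip."""
    img = TF.rotate(img, angle=10.0)
    return TF.hflip(img)

# Each candidate is annotated solely by the accuracy a downstream model
# reached when trained with it; pairs of candidates provide the
# better-vs-worse ordering signal used for fine-tuning (no reward model).
record_better = {"code": "def augment(img): ...", "accuracy": 0.91}
record_worse  = {"code": "def augment(img): ...", "accuracy": 0.74}
training_pair = {"chosen": record_better, "rejected": record_worse}
```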