Large Language Models (LLMs) can generate plausible code, but in settings that require exact stdin/stdout behavior they frequently produce programs that compile yet fail tests, and in some cases they introduce security-sensitive patterns. This paper presents SecureCodeRL, a reinforcement learning (RL) pipeline for security-aware code generation that optimizes a combined reward R = αR_func + βR_sec. The key idea is a partial-credit functional reward that assigns intermediate scores for syntactic validity, successful execution, and producing output, reducing the reward sparsity that otherwise stalls learning on competitive-programming-style tasks. I evaluate supervised fine-tuning (SFT) and PPO variants on a small held-out prompt set from APPS+ and observe that PPO with partial credit (using a continued-training variant) improves syntax validity from 45% (SFT) to 60% and achieves the only non-zero test-success signal in this pilot evaluation (5% of generations pass at least one test), while remaining 100% clean under Bandit static analysis. Although no Bandit findings appeared in this small evaluation, the security term is integrated into training to discourage insecure shortcuts when they do appear.
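For concreteness, the sketch below illustrates one way the partial-credit functional reward and the combined objective R = αR_func + βR_sec could be computed. The weights, thresholds, and helper names (ALPHA, BETA, functional_reward, security_reward) are illustrative assumptions for this sketch, not the paper's actual implementation; the security term simply takes a precomputed Bandit issue count as input rather than invoking the scanner.

```python
# Minimal sketch of the combined reward R = alpha * R_func + beta * R_sec.
# Weights and partial-credit thresholds are illustrative assumptions.
import ast
import subprocess

ALPHA, BETA = 0.8, 0.2  # hypothetical weights, not the paper's tuned values


def functional_reward(code: str, stdin: str, expected: str) -> float:
    """Partial-credit functional reward: intermediate scores for syntactic
    validity, successful execution, producing output, and matching the expected output."""
    try:
        ast.parse(code)                      # syntactic validity
    except SyntaxError:
        return 0.0
    score = 0.25
    try:
        proc = subprocess.run(
            ["python", "-c", code],
            input=stdin, capture_output=True, text=True, timeout=5,
        )
    except subprocess.TimeoutExpired:
        return score
    if proc.returncode == 0:                 # successful execution
        score = 0.5
        if proc.stdout.strip():              # produced some output
            score = 0.75
            if proc.stdout.strip() == expected.strip():  # test passed
                score = 1.0
    return score


def security_reward(bandit_issue_count: int) -> float:
    """Security reward: 1.0 when Bandit reports no findings, decaying with issue count."""
    return 1.0 / (1.0 + bandit_issue_count)


def combined_reward(code: str, stdin: str, expected: str, bandit_issue_count: int) -> float:
    # R = alpha * R_func + beta * R_sec
    return ALPHA * functional_reward(code, stdin, expected) + BETA * security_reward(bandit_issue_count)
```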