Large language models (LLMs) have achieved remarkable progress in code generation, yet their potential for software protection remains largely untapped. Reverse engineering continues to threaten software security, while traditional virtual machine protection (VMP) relies on rigid, rule-based transformations that are costly to design and vulnerable to automated analysis. In this work, we present ShieldedCode, the first protection-aware framework that learns robust representations of VMP-protected code. Our approach builds large-scale paired datasets of source code and normalized VM implementations, and introduces hierarchical dependency modeling at the intra-, preceding-, and inter-instruction levels. We jointly optimize language modeling with functionality-aware and protection-aware contrastive objectives to capture both semantic equivalence and protection strength. To further assess resilience, we propose a protection effectiveness optimization task that quantifies and ranks different VM variants derived from the same source. Coupled with a two-stage continual pre-training and fine-tuning pipeline, our method enables models to generate, compare, and reason over protected code. Extensive experiments show that ShieldedCode significantly improves robustness across diverse protection levels: it achieves 26.95% Pass@1 on L0 VM code generation versus 22.58% for GPT-4o, and improves binary similarity detection Recall@1 by 10% over state-of-the-art methods such as jTrans, opening a new research direction for learning-based software defense.