Decompilation aims to recover the source-code form of a binary executable. It has many security applications, such as malware analysis, vulnerability detection, and code hardening. A prominent challenge in decompilation is recovering variable names. We propose a novel technique that leverages the strengths of generative models while mitigating model biases. We build a prototype, GenNm, from the pre-trained generative models CodeGemma-2B, CodeLlama-7B, and CodeLlama-34B. We fine-tune GenNm on decompiled functions and teach the models to leverage contextual information. When querying a function, GenNm includes names from its callers and callees, providing rich contextual information within the model's input token limit. We mitigate model biases by aligning the models' output distribution with the symbol preferences of developers. Our results show that GenNm improves the state-of-the-art name recovery precision by 5.6-11.4 percentage points on two commonly used datasets, and improves the state of the art by 32% (from 17.3% to 22.8%) in the most challenging setup, where ground-truth variable names are not seen in the training dataset.