Binary code analysis plays an essential role in cybersecurity, facilitating reverse engineering to reveal the inner workings of programs in the absence of source code. Traditional approaches, such as static and dynamic analysis, extract valuable insights from stripped binaries but often demand substantial expertise and manual effort. Recent advances in deep learning have opened promising opportunities to enhance binary analysis by capturing latent features and uncovering underlying code semantics. Despite the growing number of machine-learning-based binary analysis models, their robustness to adversarial code transformations at the binary level remains underexplored. We evaluate the robustness of deep learning models for the task of binary code similarity detection (BCSD) under semantics-preserving transformations. The unique nature of machine instructions presents distinct challenges compared to the typical input perturbations found in other domains. We introduce asmFooler, a system that evaluates the resilience of BCSD models using a diverse set of adversarial code transformations that preserve functional semantics. We construct a dataset of 9,565 binary variants from 620 baseline samples by applying eight semantics-preserving transformations, and evaluate it across six representative BCSD models. Our major findings highlight several key insights: i) model robustness depends on the processing pipeline, including code pre-processing, architecture, and feature selection; ii) the effectiveness of adversarial transformations is bounded by a budget shaped by model-specific constraints such as input size and instruction expressive capacity; iii) well-crafted transformations can be highly effective with minimal perturbations; and iv) such transformations efficiently disrupt model decisions (e.g., inducing false positives or false negatives) by targeting semantically significant instructions.
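The abstract does not detail asmFooler's transformations, but the core idea of a semantics-preserving rewrite can be illustrated with a toy instruction-substitution pass. The sketch below is purely hypothetical (the rewrite table and `transform` helper are illustrative, not asmFooler's actual code): it swaps x86 instructions for semantically equivalent forms, so program behavior is unchanged while the byte-level and token-level representation seen by a BCSD model shifts.

```python
# Illustrative sketch (NOT asmFooler's implementation): a toy
# instruction-substitution pass over textual x86 assembly. Each rule
# maps an instruction to a semantically equivalent replacement, so the
# rewritten function behaves identically but looks different to a
# similarity model.
REWRITES = {
    ("xor", "eax, eax"): ["mov eax, 0"],   # zeroing idiom swap
    ("add", "eax, 1"):   ["inc eax"],      # add-by-one -> increment
    ("mov", "eax, 0"):   ["xor eax, eax"], # reverse of the first rule
}

def transform(asm_lines):
    """Apply one substitution pass; functional semantics are preserved."""
    out = []
    for line in asm_lines:
        mnemonic, _, operands = line.partition(" ")
        out.extend(REWRITES.get((mnemonic, operands.strip()), [line]))
    return out

print(transform(["xor eax, eax", "add eax, 1", "ret"]))
# -> ['mov eax, 0', 'inc eax', 'ret']
```

A robust BCSD model should score the original and transformed functions as highly similar; the paper's findings suggest that how far such rewrites can push a model's decision depends on model-specific constraints like input size and instruction expressiveness.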