Generative AI and large language models (LLMs) have shown strong capabilities in code understanding, but their use in cybersecurity, particularly for malware detection and analysis, remains limited. Existing detection systems often fail to generalize to obfuscated or previously unseen threats, underscoring the need for more adaptable and explainable models. To address this challenge, we introduce XGen-Q, a domain-adapted LLM built on the Qwen-Coder architecture and pretrained on a large-scale corpus of over one million malware samples, spanning both source and assembly code. XGen-Q uses a multi-stage prompt strategy combined with retrieval-augmented generation (RAG) to deliver reliable malware identification and detailed forensic reporting, even in the presence of complex code obfuscation. To further enhance generalization, we design a training pipeline that systematically exposes the model to diverse obfuscation patterns. Experimental results show that XGen-Q achieves significantly lower perplexity than competitive baselines and exhibits strong performance on novel malware samples, demonstrating the promise of LLM-based approaches for interpretable and robust malware analysis.
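The abstract does not specify the exact prompt stages or retrieval backend, so the following is a minimal illustrative sketch, assuming a two-stage identify-then-report flow in which retrieved corpus snippets ground the first prompt. All names here (RetrievedSample, build_stage1_prompt, analyze, the injected retrieve/generate callables) are hypothetical and not taken from the paper; any embedding index and LLM backend could be plugged in.

```python
# Hypothetical sketch of a retrieval-augmented, multi-stage analysis flow:
# stage 1 classifies the code with retrieved known-malware context,
# stage 2 asks for a forensic report conditioned on that verdict.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RetrievedSample:
    family: str   # e.g. a malware family label from the corpus
    snippet: str  # source or disassembly excerpt


def build_stage1_prompt(code: str, neighbors: List[RetrievedSample]) -> str:
    """Identification prompt: suspicious code plus retrieved context."""
    context = "\n\n".join(f"[{n.family}]\n{n.snippet}" for n in neighbors)
    return (
        "You are a malware analyst.\n"
        f"Known samples for reference:\n{context}\n\n"
        f"Classify the following code as malicious or benign:\n{code}"
    )


def build_stage2_prompt(code: str, verdict: str) -> str:
    """Reporting prompt: conditioned on the stage-1 verdict."""
    return (
        f"The code below was classified as: {verdict}.\n"
        "Write a forensic report covering capabilities, observed "
        f"obfuscation techniques, and indicators of compromise.\n\n{code}"
    )


def analyze(
    code: str,
    retrieve: Callable[[str, int], List[RetrievedSample]],
    generate: Callable[[str], str],
) -> str:
    """Run both stages; retrieve/generate are injected so the sketch
    stays model-agnostic rather than assuming a specific API."""
    neighbors = retrieve(code, 5)                                # RAG step
    verdict = generate(build_stage1_prompt(code, neighbors))     # stage 1
    return generate(build_stage2_prompt(code, verdict))          # stage 2
```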