Greybox fuzzing has achieved success in revealing bugs and vulnerabilities in programs. However, randomized mutation strategies have limited fuzzers' performance on structured data. Specialized fuzzers can handle complex structured data, but they require additional effort in grammar construction and suffer from low throughput. In this paper, we explore the potential of utilizing Large Language Models to enhance greybox fuzzing for structured data. We leverage the LLM's pre-trained knowledge of data conversion and formats to generate new valid inputs. We further fine-tune it with paired mutation seeds to learn structured formats and mutation strategies effectively. Our LLM-based fuzzer, LLAMAFUZZ, integrates the LLM's ability to understand and mutate structured data into the fuzzing process. We conduct experiments on the standard bug-based benchmark Magma and a wide variety of real-world programs. LLAMAFUZZ outperforms our top competitor by 41 bugs on average, and identifies 47 unique bugs across all trials. Moreover, LLAMAFUZZ demonstrates consistent performance on both bugs triggered and bugs reached. Compared to AFL++, LLAMAFUZZ covers 27.19% more branches on the real-world program set on average. We also present a case study explaining how LLMs enhance the fuzzing process in terms of code coverage.