Large Language Models (LLMs) are consistently improving at increasingly realistic software engineering (SE) tasks. In real-world software stacks, significant SE effort is spent developing foundational system software like the Linux kernel. Unlike application-level software, a systems codebase like Linux is multilingual (low-level C, Assembly, Bash, and Rust), gigantic (over 20 million lines), critical (impacting billions of devices worldwide), and highly concurrent (involving complex multi-threading). To evaluate whether ML models are useful while developing such large-scale systems-level software, we introduce kGym (a platform) and kBench (a dataset). The kGym platform provides an SE environment for large-scale experiments on the Linux kernel, including compiling and running kernels in parallel across several virtual machines, detecting operations and crashes, inspecting logs, and querying and patching the code base. We use kGym to facilitate evaluation on kBench, a crash-resolution benchmark drawn from real-world Linux kernel bugs. An example bug in kBench contains crashing stack traces, a bug-reproducer file, a developer-written fix, and other associated data. To understand current performance, we conduct baseline experiments by prompting LLMs to resolve Linux kernel crashes. Our initial evaluations reveal that the best-performing LLM achieves crash-resolution rates of 0.72% and 5.38% in the unassisted and assisted (i.e., buggy files disclosed to the model) settings, respectively. These results highlight the need for further research to enhance model performance in SE tasks. Improving performance on kBench requires models to master new skills, including understanding the causes of crashes and repairing faults, writing memory-safe and hardware-aware code, and understanding concurrency. As a result, this work opens up multiple avenues of research at the intersection of machine learning and systems software.